
214 Dataproc Jobs - Page 6

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

2.0 - 4.0 years

7 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

POSITION: Senior Data Engineer / Data Engineer
LOCATION: Bangalore / Mumbai / Kolkata / Gurugram / Hyderabad / Pune / Chennai
EXPERIENCE: 2+ years

OVERVIEW OF THE ROLE: As a Data Engineer or Senior Data Engineer, you will be hands-on in architecting, building, and optimizing robust, efficient, and secure data pipelines and platforms that power business-critical analytics and applications. You will play a central role in the implementation and automation of scalable batch and streaming data workflows using modern big data and cloud technologies. Working within cross-functional teams, you will deliver well-engineered, high-quality code and data models, and drive best practices for data reliability, lineage, quality, and security.

Mandatory Skills:
- Hands-on software coding or scripting for a minimum of 3 years
- Product management experience for at least 2 years
- Stakeholder management experience for at least 3 years
- Experience with one of the GCP, AWS, or Azure cloud platforms

Key Responsibilities:
- Design, build, and optimize scalable data pipelines and ETL/ELT workflows using Spark (Scala/Python), SQL, and orchestration tools (e.g., Apache Airflow, Prefect, Luigi).
- Implement efficient solutions for high-volume batch, real-time streaming, and event-driven data processing, leveraging best-in-class patterns and frameworks.
- Build and maintain data warehouse and lakehouse architectures (e.g., Snowflake, Databricks, Delta Lake, BigQuery, Redshift) to support analytics, data science, and BI workloads.
- Develop, automate, and monitor Airflow DAGs/jobs on cloud or Kubernetes, following robust deployment and operational practices (CI/CD, containerization, infra-as-code); a minimal DAG sketch follows this listing.
- Write performant, production-grade SQL for complex data aggregation, transformation, and analytics tasks.
- Ensure data quality, consistency, and governance across the stack, implementing processes for validation, cleansing, anomaly detection, and reconciliation.
- Collaborate with Data Scientists, Analysts, and DevOps engineers to ingest, structure, and expose structured, semi-structured, and unstructured data for diverse use cases.
- Contribute to data modeling, schema design, and data partitioning strategies, and ensure adherence to best practices for performance and cost optimization.
- Implement, document, and extend data lineage, cataloging, and observability through tools such as AWS Glue, Azure Purview, Amundsen, or open-source technologies.
- Apply and enforce data security, privacy, and compliance requirements (e.g., access control, data masking, retention policies, GDPR/CCPA).
- Take ownership of the end-to-end data pipeline lifecycle: design, development, code reviews, testing, deployment, operational monitoring, and maintenance/troubleshooting.
- Contribute to frameworks, reusable modules, and automation to improve development efficiency and maintainability of the codebase.
- Stay abreast of industry trends and emerging technologies, participating in code reviews, technical discussions, and peer mentoring as needed.

Skills & Experience:
- Proficiency with Spark (Python or Scala), SQL, and data pipeline orchestration (Airflow, Prefect, Luigi, or similar).
- Experience with cloud data ecosystems (AWS, GCP, Azure) and cloud-native services for data processing (Glue, Dataflow, Dataproc, EMR, HDInsight, Synapse, etc.).
- Hands-on development skills in at least one programming language (Python, Scala, or Java preferred); solid knowledge of software engineering best practices (version control, testing, modularity).
- Deep understanding of batch and streaming architectures (Kafka, Kinesis, Pub/Sub, Flink, Structured Streaming, Spark Streaming).
- Expertise in data warehouse/lakehouse solutions (Snowflake, Databricks, Delta Lake, BigQuery, Redshift, Synapse) and storage formats (Parquet, ORC, Delta, Iceberg, Avro).
- Strong SQL development skills for ETL, analytics, and performance optimization.
- Familiarity with Kubernetes (K8s), containerization (Docker), and deploying data pipelines in distributed/cloud-native environments.
- Experience with data quality frameworks (Great Expectations, Deequ, or custom validation), monitoring/observability tools, and automated testing.
- Working knowledge of data modeling (star/snowflake, normalized, denormalized) and metadata/catalog management.
- Understanding of data security, privacy, and regulatory compliance (access management, PII masking, auditing, GDPR/CCPA/HIPAA).
- Familiarity with BI or visualization tools (Power BI, Tableau, Looker, etc.) is an advantage but not core.
- Previous experience with data migrations, modernization, or refactoring legacy ETL processes to modern cloud architectures is a strong plus.
- Bonus: exposure to open-source data tools (dbt, Delta Lake, Apache Iceberg, Amundsen, Great Expectations, etc.) and knowledge of DevOps/MLOps processes.

Professional Attributes:
- Strong analytical and problem-solving skills; attention to detail and commitment to code quality and documentation.
- Ability to communicate technical designs and issues effectively with team members and stakeholders.
- Proven self-starter, fast learner, and collaborative team player who thrives in dynamic, fast-paced environments.
- Passion for mentoring, sharing knowledge, and raising the technical bar for data engineering practices.

Desirable Experience:
- Contributions to open-source data engineering/tools communities.
- Implementing data cataloging, stewardship, and data democratization initiatives.
- Hands-on work with DataOps/DevOps pipelines for code and data.
- Knowledge of ML pipeline integration (feature stores, model serving, lineage/monitoring integration) is beneficial.

EDUCATIONAL QUALIFICATIONS:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field (or equivalent experience).
- Certifications in cloud platforms (AWS, GCP, Azure) and/or data engineering (AWS Data Analytics, GCP Data Engineer, Databricks).
- Experience working in an Agile environment with exposure to CI/CD, Git, Jira, Confluence, and code review processes.
- Prior work in highly regulated or large-scale enterprise data environments (finance, healthcare, or similar) is a plus.
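To give a concrete feel for the Airflow DAG work described above, here is a minimal sketch that schedules a daily Spark batch job; the DAG id, schedule, and script path are illustrative assumptions, not details from the posting:

```python
# Minimal Airflow DAG sketch: schedule a daily PySpark batch job.
# All names (dag_id, application path, connection id) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_sales_etl",                  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = SparkSubmitOperator(
        task_id="run_spark_etl",
        application="/opt/jobs/sales_etl.py",  # hypothetical script location
        conn_id="spark_default",
    )
```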

Posted 1 month ago

Apply

4.0 - 8.0 years

25 - 30 Lacs

Pune

Hybrid

So, what's the role all about?

As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data.

How will you make an impact?
- Design, implement, and maintain data pipelines using Apache Spark for processing large datasets.
- Work with data engineering teams to optimize data workflows for performance and scalability.
- Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis.
- Develop and maintain data models, databases, and data lakes.
- Build and manage scalable ETL solutions to support business intelligence and data science initiatives.
- Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively.
- Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions.
- Implement data security best practices to protect sensitive information.
- Maintain a high level of data quality and ensure timely delivery of data to end users.
- Continuously evaluate new technologies and frameworks to improve data engineering processes.

Have you got what it takes?
- 8-11 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies.
- Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis.
- Proficiency in programming languages such as Python, Scala, or Java for data engineering tasks.
- Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks).
- Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi.
- Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of SQL and relational databases, with the ability to design and query databases effectively.
- Solid understanding of distributed computing, data modeling, and data architecture principles.
- Strong problem-solving skills and the ability to work with large and complex datasets.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7235
Reporting into: Tech Manager
Role Type: Individual Contributor
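To illustrate the Spark DataFrame work this role centers on, a minimal PySpark ETL sketch; the paths and column names are hypothetical:

```python
# Minimal PySpark ETL sketch: read raw CSV, clean, aggregate, write Parquet.
# Input/output paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = (
    spark.read.option("header", True).csv("s3://raw-bucket/orders/*.csv")
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_id", "amount"])          # basic cleansing
)

daily_revenue = (
    orders.groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

daily_revenue.write.mode("overwrite").parquet("s3://curated-bucket/daily_revenue/")
```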

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Mumbai

Work from Office

The Google Cloud Infrastructure Support Engineer will be responsible for ensuring the reliability, performance, and security of our Google Cloud Platform (GCP) infrastructure, working closely with cross-functional teams to troubleshoot issues, optimize infrastructure, and implement best practices for cloud architecture.
- Experience with Terraform for deploying and managing infrastructure templates.
- Administer BigQuery environments, including managing datasets and access controls and optimizing query performance.
- Familiarity with Vertex AI for monitoring and managing machine learning model deployments.
- Knowledge of GCP's Kubernetes Engine and its integration with the cloud ecosystem.
- Understanding of cloud security best practices and experience implementing security measures.
- Knowledge of setting up and managing data clean rooms within BigQuery.
- Understanding of the Analytics Hub platform and how it integrates with data clean rooms to facilitate sensitive data-sharing use cases.
- Knowledge of Dataplex and how it integrates with other Google Cloud services such as BigQuery, Dataproc Metastore, and Data Catalog.

Key Responsibilities:
- Provide technical support for our Google Cloud Platform infrastructure, including compute, storage, networking, and security services.
- Monitor system performance and proactively identify and resolve issues to ensure maximum uptime and reliability.
- Collaborate with cross-functional teams to design, implement, and optimize cloud infrastructure solutions.
- Automate repetitive tasks and develop scripts to streamline operations and improve efficiency.
- Document infrastructure configurations, processes, and procedures.

Qualifications:
Required:
- Strong understanding of GCP services, including Compute Engine, Kubernetes Engine, Cloud Storage, VPC networking, and IAM.
- Experience with BigQuery and Vertex AI.
- Proficiency in scripting languages such as Python, Bash, or PowerShell.
- Experience with infrastructure-as-code tools such as Terraform or Google Deployment Manager.
- Strong communication and collaboration skills.
- Bachelor's degree in Computer Science or a related discipline, or the equivalent in education and work experience.
Preferred:
- Google Cloud certification (e.g., Google Cloud Certified - Professional Cloud Architect, Google Cloud Certified - Professional Cloud DevOps Engineer).
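Administering BigQuery datasets and access controls is called out above; here is a sketch of granting a group read access with the google-cloud-bigquery client, where the project, dataset, and group names are hypothetical:

```python
# Sketch: grant a group read access to a BigQuery dataset with the Python client.
# Project, dataset, and group names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")
dataset = client.get_dataset("my-analytics-project.curated")

entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="data-consumers@example.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])  # persists only the ACL change
```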

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 10 Lacs

Pune

Work from Office

This role is for an Engineer responsible for the design, development, and unit testing of software applications. The candidate is expected to ensure that good-quality, maintainable, scalable, and high-performing software applications are delivered to users in an Agile development environment. The candidate should come from a strong technological background, with good working experience in Python and Spark technology. They should be hands-on, able to work independently with minimal technical/tool guidance, and able to technically guide and mentor junior resources in the team. As a developer, you will bring extensive design and development skills to reinforce the group of developers within the team. The candidate will extensively use and apply Continuous Integration tools and practices in the context of Deutsche Bank's digitalization journey.

Your key responsibilities
- Design and discuss your own solution for addressing user stories and tasks.
- Develop, unit-test, integrate, deploy, maintain, and improve software.
- Perform peer code review.
- Actively participate in sprint activities and ceremonies, e.g., daily stand-up/scrum meetings, sprint planning, and retrospectives.
- Apply continuous integration best practices in general (SCM, build automation, unit testing, dependency management).
- Collaborate with other team members to achieve the sprint objectives.
- Report progress and update Agile team management tools (Jira/Confluence).
- Manage individual task priorities and deliverables.
- Be responsible for the quality of the solutions you provide.
- Contribute to planning and continuous improvement activities; support the PO, ITAO, developers, and Scrum Master.

Your skills and experience
- Engineer with good development experience on a Big Data platform for at least 5 years.
- Hands-on experience in Spark (Hive, Impala).
- Hands-on experience in the Python programming language.
- Preferably, experience in BigQuery, Dataproc, Composer, Terraform, GKE, Cloud SQL, and Cloud Functions.
- Experience in setup, maintenance, and ongoing development of continuous build/integration infrastructure as part of DevOps.
- Ability to create and maintain fully automated CI build processes and to write build and deployment scripts.
- Experience with development platforms: OpenShift/Kubernetes/Docker configuration and deployment with DevOps tools, e.g., Git, TeamCity, Maven, SONAR.
- Good knowledge of core SDLC processes and tools such as HP ALM, Jira, and ServiceNow.
- Strong analytical skills.
- Proficient communication skills; fluent in English (written/verbal).
- Ability to work in virtual teams and in matrixed organizations.
- Excellent team player; open-minded and willing to learn business and technology.
- Keeps pace with technical innovation and understands the relevant business area.
- Ability to share information and transfer knowledge and expertise to team members.
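Given the posting's emphasis on unit testing and CI, a minimal pytest-style sketch for a PySpark transformation; the function and column names are hypothetical examples, not Deutsche Bank code:

```python
# Sketch: unit-testing a PySpark transformation with pytest.
# The transformation and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_total(df):
    """Toy transformation under test: total = price * quantity."""
    return df.withColumn("total", F.col("price") * F.col("quantity"))


def test_add_total():
    spark = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    df = spark.createDataFrame([(2.0, 3), (1.5, 4)], ["price", "quantity"])

    result = add_total(df).collect()

    assert [row["total"] for row in result] == [6.0, 6.0]
    spark.stop()
```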

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 19 Lacs

Chennai

Hybrid

Greetings from Getronics! We have permanent opportunities for GCP Data Engineers in Chennai. This is Abirami from the Getronics Talent Acquisition team. We have multiple opportunities for GCP Data Engineers for our automotive client at the Chennai Sholinganallur location. Please find the company profile and job description below. If interested, please share your updated resume, a recent professional photograph, and Aadhaar proof at the earliest to abirami.rsk@getronics.com.

Company: Getronics (permanent role)
Client: Automobile industry
Experience Required: 4+ years in IT and a minimum of 3+ years in GCP data engineering
Location: Chennai (Elcot - Sholinganallur)
Work Mode: Hybrid

Position Description: We are currently seeking a seasoned GCP Cloud Data Engineer with 3 to 5 years of experience leading/implementing GCP data projects, preferably implementing a complete data-centric model. This position will design and deploy a data-centric architecture in GCP for a Materials Management platform that exchanges data with multiple modern and legacy applications in Product Development, Manufacturing, Finance, Purchasing, N-tier Supply Chain, and Supplier Collaboration.
- Design and implement data-centric solutions on Google Cloud Platform (GCP) using GCP tools like Storage Transfer Service, Cloud Data Fusion, Pub/Sub, Dataflow, Cloud compression, Cloud Scheduler, gsutil, FTP/SFTP, Dataproc, Bigtable, etc.
- Build ETL pipelines to ingest data from heterogeneous sources into our system.
- Develop data processing pipelines in programming languages like Java and Python to extract, transform, and load (ETL) data.
- Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets.
- Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project requirements.

Skills Required:
- GCP Data Engineer, Hadoop, Spark/PySpark, and Google Cloud Platform services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine.
- 4+ years of professional experience in data engineering, data product development, and software product launches.
- 3+ years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using: data warehouses like Google BigQuery; workflow orchestration tools like Airflow; relational database management systems like MySQL, PostgreSQL, and SQL Server; and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.

Education Required: Any bachelor's degree. Candidates should be willing to take a GCP assessment (1-hour online video test).

LOOKING FOR IMMEDIATE TO 30 DAYS NOTICE CANDIDATES ONLY.

Regards,
Abirami
Getronics Recruitment Team
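Pub/Sub is one of the listed ingestion tools; here is a minimal publisher sketch with the google-cloud-pubsub client, using a hypothetical project, topic, and payload:

```python
# Sketch: publish JSON events to a Pub/Sub topic (an ingestion entry point).
# Project and topic names are hypothetical.
import json

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "material-events")

event = {"part_id": "A-1001", "qty": 25, "source": "legacy-mrp"}
future = publisher.publish(topic_path, data=json.dumps(event).encode("utf-8"))
print("Published message id:", future.result())  # blocks until the server acks
```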

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 31 Lacs

Pune, Bengaluru

Work from Office

GCP Data Lead
Experience: 4-8 years
Location: Pune / Bengaluru

Required Past Experience:
- 5+ years of overall experience architecting, developing, testing, and implementing data platform projects using GCP components (e.g., PySpark, SQL, and the GCP ecosystem: BigQuery, Cloud Composer, Dataproc).
- Good understanding of data structures.
- Worked with large datasets and solved difficult analytical problems.
- Experience working with Git for source code management.
- Worked with structured and unstructured data.
- End-to-end data engineering and lifecycle management (including non-functional requirements and operations).
- Worked with client teams to design and implement modern, scalable data solutions using a range of new and emerging technologies from the Google Cloud Platform.
- Automated manual processes to speed up delivery.
- Good understanding of data pipelines (batch and streaming) and data governance.
- Experience in code deployment from lower environments to production.
- Good communication skills to understand business requirements.

Required Skills and Abilities:
- Mandatory skills: BigQuery, Composer, Python, GCP fundamentals.
- Secondary skills: PySpark, SQL, GCP ecosystem (BigQuery, Cloud Composer, Dataproc).
- Knowledge of ETL migration from on-premises to GCP Cloud.
- SQL performance tuning.
- Batch/streaming data processing.
- Fundamentals of Kafka and Pub/Sub to handle real-time data feeds.
- Good to have: certifications in any of the following: GCP Professional Cloud Architect, GCP Professional Data Engineer.
- Ability to communicate with customers, developers, and other stakeholders.
- Ability to mentor and guide team members.
- Good presentation skills.
- Strong team player.
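Since BigQuery plus Composer is the mandatory pairing here, a minimal sketch of a Composer (Airflow) DAG that runs a scheduled BigQuery query; the DAG id, project, and table names are hypothetical:

```python
# Sketch: a Cloud Composer (Airflow) DAG that runs a scheduled BigQuery query.
# DAG id, project, and table names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

SQL = """
CREATE OR REPLACE TABLE `my-project.marts.daily_orders` AS
SELECT order_date, COUNT(*) AS orders
FROM `my-project.staging.orders`
GROUP BY order_date
"""

with DAG(
    dag_id="bq_daily_orders",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    build_mart = BigQueryInsertJobOperator(
        task_id="build_daily_orders",
        configuration={"query": {"query": SQL, "useLegacySql": False}},
    )
```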

Posted 1 month ago

Apply

5.0 - 6.0 years

55 - 60 Lacs

Pune

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala Spark, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 1 month ago

Apply

8.0 - 10.0 years

10 - 14 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Primary: GCP (Dataproc, BigQuery). Secondary: Python, Spark.
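For a concrete feel for the Dataproc plus Python pairing, a sketch that submits a PySpark job to a Dataproc cluster with the google-cloud-dataproc client; the project, region, cluster, and GCS path are hypothetical:

```python
# Sketch: submit a PySpark job to a Dataproc cluster with the Python client.
# Project, region, cluster, and GCS path are hypothetical.
from google.cloud import dataproc_v1

region = "us-central1"
client = dataproc_v1.JobControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

job = {
    "placement": {"cluster_name": "etl-cluster"},
    "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/transform.py"},
}

operation = client.submit_job_as_operation(
    request={"project_id": "my-project", "region": region, "job": job}
)
result = operation.result()  # waits for the job to finish
print("Job state:", result.status.state.name)
```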

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Bengaluru

Work from Office

Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and interact directly with customers.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
- Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer.
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
- Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.

Preferred technical and professional experience:
- Intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.
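As a small illustration of the GCS work such a role involves, a sketch using the google-cloud-storage client; the bucket and object names are hypothetical:

```python
# Sketch: basic Cloud Storage operations with the Python client.
# Bucket and object names are hypothetical.
from google.cloud import storage

client = storage.Client(project="my-project")
bucket = client.bucket("analyst-workspace")

# Upload a local extract, then list what's in the prefix.
blob = bucket.blob("extracts/orders_2025-01-01.csv")
blob.upload_from_filename("orders_2025-01-01.csv")

for b in client.list_blobs("analyst-workspace", prefix="extracts/"):
    print(b.name, b.size)
```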

Posted 1 month ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Gurugram

Work from Office

Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and interact directly with customers.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
- Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer.
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
- Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.

Preferred technical and professional experience:
- Intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Kolkata

Hybrid

Role: GCP Data Engineer
Experience: 4+ years
Preferred: data engineering background
Location: Kolkata (face-to-face interview)

Required Skills: GCP data engineering experience, BigQuery, SQL, Cloud Composer/Python, Cloud Functions, Dataproc + PySpark, Python ingestion, Dataflow + Pub/Sub.

Job Description:
- Have implemented and architected solutions on Google Cloud Platform using the components of GCP.
- Experience with Apache Beam/Google Dataflow/Apache Spark in creating end-to-end data pipelines.
- Experience in some of the following: Python, Hadoop, Spark, SQL, BigQuery, Bigtable, Cloud Storage, Datastore, Spanner, Cloud SQL, machine learning.
- Experience programming in Java, Python, etc.
- Expertise in at least two of these technologies: relational databases, analytical databases, NoSQL databases.
- Certification as a Google Professional Data Engineer / Solution Architect is a major advantage.

Skills Required:
- 3+ years of experience in IT, or professional services experience in IT delivery or large-scale IT analytics projects.
- Candidates must have expert knowledge of Google Cloud Platform; other cloud platforms are nice to have.
- Expert knowledge in SQL development.
- Expertise in building data integration and preparation tools using cloud technologies (like SnapLogic, Google Dataflow, Cloud Dataprep, Python, etc.).
- Experience with Apache Beam/Google Dataflow/Apache Spark in creating end-to-end data pipelines.
- Experience in some of the following: Python, Hadoop, Spark, SQL, BigQuery, Bigtable, Cloud Storage, Datastore, Spanner, Cloud SQL, machine learning.
- Experience programming in Java, Python, etc.
- Identify downstream implications of data loads/migration (e.g., data quality, regulatory, etc.).
- Implement data pipelines to automate the ingestion, transformation, and augmentation of data sources, and provide best practices for pipeline operations.
- Capability to work in a rapidly changing business environment and to enable simplified user access to massive data by building scalable data solutions.
- Advanced SQL writing and experience in data mining (SQL, ETL, data warehouse, etc.) and using databases in a business environment with complex datasets.

Interested candidates, please revert with your updated CV to aruna.b@tredence.com.
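Apache Beam on Dataflow comes up repeatedly in this posting; here is a minimal streaming Beam sketch (Pub/Sub to BigQuery) with a hypothetical subscription, table, and schema:

```python
# Sketch: a streaming Apache Beam pipeline, Pub/Sub -> transform -> BigQuery.
# Subscription, table, and field names are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub"
        )
        | "Parse" >> beam.Map(json.loads)
        | "Keep valid" >> beam.Filter(lambda e: "user_id" in e)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",
            schema="user_id:STRING,action:STRING,ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```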

Posted 1 month ago

Apply

5.0 - 9.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 20 to 35 LPA
Experience: 5 to 8 years
Location: Gurgaon (Hybrid)
Notice: Immediate to 30 days

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, Dataproc, and Cloud Storage.
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet those needs.
- Develop complex SQL queries to extract insights from large datasets stored in Google Cloud SQL databases.
- Troubleshoot issues related to data processing workflows and provide timely resolutions.

Desired Candidate Profile:
- 5-9 years of experience in data engineering, with expertise in GCP and BigQuery data engineering.
- Strong understanding of GCP platform administration, including Compute Engine (Dataproc), Kubernetes Engine (K8s), Cloud Storage, Cloud SQL, etc.
- Experience working on big data analytics projects involving ETL processes using tools like Airflow or similar technologies.
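As a flavor of the complex SQL the role calls for, a sketch that runs a window-function query from Python; the dataset, table, and column names are hypothetical:

```python
# Sketch: run an analytical window-function query from Python.
# Dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

sql = """
SELECT
  customer_id,
  order_date,
  amount,
  SUM(amount) OVER (
    PARTITION BY customer_id
    ORDER BY order_date
  ) AS running_total
FROM `my-project.sales.orders`
ORDER BY customer_id, order_date
"""

for row in client.query(sql).result():
    print(row.customer_id, row.order_date, row.running_total)
```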

Posted 1 month ago

Apply

4.0 - 8.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 20 to 35 LPA
Experience: 4 to 8 years
Location: Gurgaon
Notice: Immediate to 30 days

Key Skills: GCP, Cloud, Pub/Sub, Data Engineer

Posted 1 month ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a GCP Data Engineer at Kyndryl, you will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment using GCP data services. You will collaborate with global architects and business teams to design and deploy innovative solutions, supporting data analytics, automation, and transformation needs.

Responsibilities:
- Design, develop, and maintain scalable data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements.
- Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs.
- Design data models suitable for both transactional and big data environments, supporting machine learning workflows.
- Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services.
- Develop and maintain Python/PySpark code for data processing and integrate with GCP services for seamless data operations.
- Develop and optimize SQL queries for data analysis and reporting.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Implement data governance and security best practices within GCP.
- Perform data quality checks and validation to ensure accuracy and consistency (a minimal sketch follows this listing).
- Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines.
- Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management.
- Provide technical support and guidance to junior data engineers and other team members.
- Participate in code reviews and contribute to continuous improvement of data engineering practices.
- Implement best practices for cost management and resource utilization within GCP.

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Technical and Professional Experience:
- Bachelor's or master's degree in computer science, engineering, or a related field, with over 8 years of experience in data engineering.
- More than 3 years of experience with the GCP data ecosystem.
- Hands-on experience and strong proficiency in GCP components such as Dataflow, Dataproc, BigQuery, Cloud Functions, Composer, and Data Fusion.
- Excellent command of SQL with the ability to write complex queries and perform advanced data transformation.
- Strong programming skills in PySpark and/or Python, specifically for building cloud-native data pipelines.
- Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc.
- Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
- Knowledge of data governance, security, and compliance best practices.
- Experience with private and public cloud architectures, their pros and cons, and migration considerations.
- Excellent problem-solving, analytical, and critical thinking skills.
- Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail.
- Communication skills: must be able to communicate with both technical and non-technical audiences and derive technical requirements with stakeholders.
- Ability to work independently and in agile teams.

Preferred Technical and Professional Experience:
- GCP Data Engineer Certification is highly preferred.
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
- Experience working as a Data Engineer and/or in cloud modernization.
- Knowledge of Databricks and Snowflake for data analytics.
- Experience with NoSQL databases.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Familiarity with BI dashboards and Google Data Studio is a plus.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
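The responsibilities above include data quality checks and validation; here is a minimal PySpark sketch of that idea, with a hypothetical table and column names:

```python
# Sketch: simple data-quality gate in PySpark (null and duplicate checks).
# Table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("gs://curated/customers/")

total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()

if null_ids or dupes:
    raise ValueError(
        f"DQ failed: {null_ids} null ids, {dupes} duplicate ids out of {total}"
    )
print(f"DQ passed for {total} rows")
```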

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 16 Lacs

Hyderabad, Chennai

Hybrid

Role & responsibilities
- Bachelor's degree or four or more years of work experience.
- Experience with data warehouse concepts and the data management life cycle.
- Experience in any DBMS.
- Experience in shell scripting, Spark, and Scala.
- Experience in GCP/BigQuery, Composer, and Airflow.
- Experience in real-time streaming.
- Experience in DevOps.

Posted 1 month ago

Apply

0.0 - 4.0 years

5 - 10 Lacs

Mumbai

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward, always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.

In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.

Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset: a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made, and your lifecycle management expertise will ensure our data remains fresh and impactful.

So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Skills and Experience
• Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes
• Experience in data pipeline development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc
• Experience with both relational and NoSQL databases: PostgreSQL, DB2, MongoDB
• Excellent problem-solving, analytical, and critical thinking skills
• Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail
• Communication skills: must be able to communicate with both technical and non-technical colleagues to derive technical requirements from business needs and problems

Preferred Skills and Experience
• Experience working as a Data Engineer and/or in cloud modernization
• Experience in data modelling, to create a conceptual model of how data is connected and how it will be used in business processes
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
• Cloud platform certification, e.g., AWS Certified Data Analytics - Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate
• Experience working with Kafka, Elasticsearch, and Kibana, and maintaining a data lake
• Managing interfaces and monitoring for production deployment, including log shipping tools
• Experience in updates, upgrades, patches, VA closure, and support with industry best tools
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
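The preferred skills mention Kafka feeding a data lake; here is a minimal consumer sketch using the kafka-python package, with hypothetical topic and broker names:

```python
# Sketch: a minimal Kafka consumer feeding a data lake landing zone.
# Topic, brokers, and group id are hypothetical; uses the kafka-python package.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream",                        # hypothetical topic
    bootstrap_servers=["broker1:9092"],
    group_id="landing-writer",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # In a real pipeline this would batch and write to object storage.
    print(message.topic, message.offset, event.get("page"))
```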

Posted 1 month ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and interact directly with customers.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
- Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Log Explorer.
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
- Ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.

Preferred technical and professional experience:
- Intuitive individual with an ability to manage change and proven time management.
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Job Description: Data Engineer/Lead

Required Minimum Qualifications:
- Bachelor's degree in computer science, CIS, or a related field.
- 5-10 years of IT experience in software engineering or a related field.
- Experience on projects involving the implementation of software development life cycles (SDLC).

Primary Skills: PySpark, SQL, GCP ecosystem (BigQuery, Cloud Composer, Dataproc).

Responsibilities:
- Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open-source tools and data processing frameworks.
- Hands-on with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive, and Airflow.
- Experience in GCP Cloud Composer, BigQuery, and Dataproc.
- Offer system support as part of a support rotation with other team members.
- Operationalize open-source data-analytic tools for enterprise use.
- Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.
- Understand and follow the company development lifecycle to develop, deploy, and deliver.
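Kafka plus Spark is the core pairing here; a minimal Spark Structured Streaming sketch that lands Kafka messages in object storage, with hypothetical broker, topic, and paths:

```python
# Sketch: Spark Structured Streaming job reading from Kafka.
# Requires the spark-sql-kafka connector on the classpath.
# Broker, topic, and storage paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

query = (
    stream.writeStream.format("parquet")
    .option("path", "gs://lake/raw/orders/")
    .option("checkpointLocation", "gs://lake/_checkpoints/orders/")
    .start()
)
query.awaitTermination()
```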

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Bengaluru

Hybrid

Job Summary
We are seeking a highly skilled Hadoop Developer / Lead Data Engineer to join our data engineering team based in Bangalore or Pune. The ideal candidate will have extensive experience with Hadoop ecosystem technologies and cloud-based big data platforms, particularly on Google Cloud Platform (GCP). This role involves designing, developing, and maintaining scalable data ingestion, processing, and transformation frameworks to support enterprise data needs.

Minimum Qualifications
- Bachelor's degree in computer science, computer information systems, or a related technical field.
- 5-10 years of experience in software engineering or data engineering, with a strong focus on big data technologies.
- Proven experience in implementing software development life cycles (SDLC) in enterprise environments.

Technical Skills & Expertise
- Big data technologies: expertise in the Hadoop platform, Hive, and related ecosystem tools; strong experience with Apache Spark (using SQL, Scala, and/or Java); experience with real-time data streaming using Kafka.
- Programming languages and frameworks: proficiency in PySpark and SQL for data processing and transformation; strong coding skills in Python.
- Cloud technologies (Google Cloud Platform): experience with BigQuery for data warehousing and analytics; familiarity with Cloud Composer (Airflow) for workflow orchestration; hands-on with Dataproc for managed Spark and Hadoop clusters.

Responsibilities
- Design, develop, and implement scalable data ingestion and transformation pipelines using Hadoop and GCP services.
- Build real-time and batch data processing solutions leveraging Spark, Kafka, and related technologies.
- Ensure data quality, governance, and lineage by implementing automated validation and classification frameworks.
- Collaborate with cross-functional teams to deploy and operationalize data analytics tools at enterprise scale.
- Participate in production support and on-call rotations to maintain system reliability.
- Follow established SDLC practices to deliver high-quality, maintainable solutions.

Preferred Qualifications
- Experience leading or mentoring data engineering teams.
- Familiarity with CI/CD pipelines and DevOps best practices for big data environments.
- Strong communication skills with an ability to collaborate across teams.
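Hive and Spark sit at the center of this role; a minimal sketch of querying a Hive table from Spark with Hive support enabled, using hypothetical database and table names:

```python
# Sketch: querying a Hive table from Spark with Hive support enabled.
# Database and table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive_read")
    .enableHiveSupport()
    .getOrCreate()
)

# Spark SQL runs directly against the Hive metastore.
top_parts = spark.sql("""
    SELECT part_id, SUM(qty) AS total_qty
    FROM supply_chain.shipments
    GROUP BY part_id
    ORDER BY total_qty DESC
    LIMIT 10
""")
top_parts.show()
```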

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Bengaluru

Work from Office

What you'll be doing:
- Assist in developing machine learning models based on project requirements.
- Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality.
- Perform statistical analysis and fine-tuning using test results.
- Support training and retraining of ML systems as needed.
- Help build data pipelines for collecting and processing data efficiently.
- Follow coding and quality standards while developing AI/ML solutions.
- Contribute to frameworks that help operationalize AI models.

What we seek in you:
- Strong in programming languages like Python and Java.
- Hands-on experience with one cloud platform (GCP preferred).
- Experience working with Docker.
- Environment management (e.g., venv, pip, poetry, etc.).
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc. (see the pipeline sketch after this listing).
- Understanding of the full ML cycle end-to-end.
- Data engineering and feature engineering techniques.
- Experience with ML modelling and evaluation metrics.
- Experience with TensorFlow, PyTorch, or another framework.
- Experience with model monitoring.
- Advanced SQL knowledge.
- Awareness of streaming concepts like windowing, late arrival, triggers, etc.
- Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases.
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices.
- Schedule: Cloud Composer, Airflow.
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink.
- CI/CD: Bitbucket + Jenkins / GitLab; infrastructure as code: Terraform.

Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
- Abundant opportunities for engagement with customers, product managers, and leadership.
- You'll be guided by progressive paths while receiving insightful guidance from managers through ongoing feedforward sessions.
- Cultivate and leverage robust connections within diverse communities of interest.
- Choose your mentor to navigate your current endeavors and steer your future trajectory.
- Embrace continuous learning and upskilling opportunities through Nexversity.
- Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
- Embrace a hybrid work model promoting work-life balance.
- Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
- Embark on accelerated career paths to actualize your professional aspirations.

Who we are:
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet unique needs for our customers. Join our passionate team and tailor your growth with us!
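The orchestrators line mentions Vertex AI Pipelines; here is a tiny Kubeflow Pipelines (KFP v2) sketch of a pipeline that could be compiled for Vertex AI, where the component names and logic are hypothetical:

```python
# Sketch: a tiny Kubeflow Pipelines (KFP v2) pipeline, compilable for Vertex AI.
# Component logic and names are hypothetical.
from kfp import compiler, dsl


@dsl.component
def preprocess(rows: int) -> int:
    # Placeholder for real feature-engineering logic.
    return rows * 2


@dsl.component
def train(rows: int) -> str:
    return f"model trained on {rows} rows"


@dsl.pipeline(name="toy-training-pipeline")
def pipeline(rows: int = 1000):
    prep = preprocess(rows=rows)
    train(rows=prep.output)


if __name__ == "__main__":
    compiler.Compiler().compile(
        pipeline_func=pipeline, package_path="pipeline.json"
    )
```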

Posted 1 month ago

Apply

3.0 - 7.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Req ID: 284544. We are currently seeking a Systems Integration Advisor - Technical Architecture - Cloud Services - AWS to join our team in Bangalore, Karnataka (IN-KA), India (IN).

We are seeking a highly skilled and motivated mid-level AI/DS specialist to join our dynamic team at NTT Data. The ideal candidate will have a strong background in artificial intelligence and data science, with expertise in natural language processing (NLP), generative AI (Gen-AI), and conversational AI. The candidate should be well versed in the Microsoft AI platform, OpenAI, Databricks, Python, and common data science libraries and tools. Additionally, the candidate should be capable of fine-tuning large language models (LLMs) and familiar with AI/ML engineering and prompt engineering.

Key Responsibilities:
- Develop and implement AI/DS solutions to enhance business processes and customer experiences.
- Utilize NLP, Gen-AI, and conversational AI techniques to build and optimize AI models.
- Work with the Microsoft AI platform, OpenAI, and Databricks to develop and deploy AI solutions.
- Write efficient and scalable code in Python, leveraging common data science libraries and tools.
- Fine-tune LLM models to meet specific project requirements.
- Collaborate with cross-functional teams to integrate AI/DS solutions into existing systems.
- Stay updated with the latest advancements in AI/DS and apply them to ongoing projects.
- Conduct prompt engineering to improve the performance and accuracy of AI models.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
- Proven experience in AI/DS, with a focus on NLP, Gen-AI, and conversational AI.
- Proficiency in the Microsoft AI platform, OpenAI, Databricks, and Python.
- Strong knowledge of common data science libraries and tools.
- Experience in fine-tuning LLM models.
- Familiarity with AI/ML engineering and prompt engineering.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
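Prompt engineering is called out explicitly; a minimal sketch with the OpenAI Python client, where the model choice and prompts are illustrative assumptions:

```python
# Sketch: a prompt-engineering call with the OpenAI Python client.
# The model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; pick per project requirements
    messages=[
        {
            "role": "system",
            "content": "You are a concise support assistant. Answer in two sentences.",
        },
        {
            "role": "user",
            "content": "A customer asks how to reset their password.",
        },
    ],
    temperature=0.2,  # low temperature for consistent support answers
)
print(response.choices[0].message.content)
```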

Posted 1 month ago

Apply

2.0 - 4.0 years

8 - 12 Lacs

Mumbai

Work from Office

The SAS to Databricks Migration Developer will be responsible for migrating existing SAS code, data processes, and workflows to the Databricks platform. This role requires expertise in both SAS and Databricks, with a focus on converting SAS logic into scalable PySpark and Python code. The developer will design, implement, and optimize data pipelines, ensuring seamless integration and functionality within the Databricks environment. Collaboration with various teams is essential to understand data requirements and deliver solutions that meet business needs.
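To make the SAS-to-PySpark conversion concrete, a sketch pairing a typical SAS DATA step (shown as comments) with one possible PySpark equivalent; the dataset and column names are hypothetical:

```python
# Sketch: a typical SAS DATA step and a PySpark equivalent.
# Dataset and column names are hypothetical.
#
# SAS original:
#   data work.high_value;
#       set raw.orders;
#       where amount > 1000;
#       discount = amount * 0.05;
#   run;
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas_migration").getOrCreate()

orders = spark.table("raw.orders")
high_value = (
    orders.filter(F.col("amount") > 1000)            # WHERE clause
    .withColumn("discount", F.col("amount") * 0.05)  # derived column
)
high_value.write.mode("overwrite").saveAsTable("work.high_value")
```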

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies