491 Data Pipeline Jobs - Page 11

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

3 - 7 Lacs

Navi Mumbai

Work from Office

Python Developer with a strong understanding of, and practical experience in, leveraging AI for code execution, generation, and optimization, combined with expertise in Optical Character Recognition (OCR).

Posted 1 month ago

Apply

3.0 - 6.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer II (Python, SQL)
Experience: 3 to 6 years
Location: Bangalore, Karnataka (work from office, 5 days a week)

Role: As a Data Engineer II, you will design, build, and maintain scalable data pipelines. You'll collaborate across data analytics, marketing, data science, and product teams to drive insights and AI/ML integration on robust, efficient data infrastructure.

Key Responsibilities:
- Design, develop, and maintain end-to-end data pipelines (ETL/ELT).
- Ingest, clean, transform, and curate data for analytics and ML usage.
- Work with orchestration tools like Airflow to schedule and manage workflows.
- Implement data extraction using batch, CDC, and real-time tools (e.g., Debezium, Kafka Connect).
- Build data models and enable real-time and batch processing using Spark and AWS services.
- Collaborate with DevOps and architects on system scalability and performance.
- Optimize Redshift-based data solutions for performance and reliability.

Must-Have Skills & Experience:
- 3+ years in Data Engineering or Data Science with strong ETL and pipeline experience.
- Expertise in Python and SQL.
- Strong experience in data warehousing, data lakes, data modeling, and ingestion.
- Working knowledge of Airflow or similar orchestration tools.
- Hands-on experience with data extraction techniques such as CDC and batch loads, using Debezium, Kafka Connect, or AWS DMS.
- Experience with AWS services: Glue, Redshift, Lambda, EMR, Athena, MWAA, SQS, etc.
- Knowledge of Spark or similar distributed systems.
- Experience with queuing/messaging systems like SQS, Kinesis, or RabbitMQ.
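As an illustration of the Airflow orchestration this posting asks for, below is a minimal sketch of a daily ETL DAG using the TaskFlow API of Airflow 2.x (2.4+); the DAG name, data, and logic are hypothetical:

```python
# Minimal Airflow 2.x DAG sketching an extract/transform/load pipeline of the
# kind described above. All names and data are hypothetical placeholders.
from datetime import datetime
from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def orders_etl():
    @task
    def extract() -> list[dict]:
        # In a real pipeline this would pull from a source DB or a Kafka topic.
        return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 80.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Clean and curate records for the warehouse.
        return [r for r in rows if r["amount"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        # In practice: COPY into Redshift or write Parquet to S3.
        print(f"Loading {len(rows)} curated rows")

    load(transform(extract()))

orders_etl()
```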

Posted 1 month ago

Apply

8.0 - 10.0 years

15 - 18 Lacs

Pune

Work from Office

Hiring a Solution Architect with 8-10 yrs of experience in data, AI & GenAI. Must have strong cloud (AWS/Azure/GCP), LLM, ML (TensorFlow/PyTorch), and full-stack skills. The role involves designing scalable architectures and leading technical teams.

Posted 1 month ago

Apply

5.0 - 8.0 years

25 - 30 Lacs

Pune, Gurugram, Bengaluru

Work from Office

Title: Senior Data Developer with Strong MS/Oracle SQL, Python Skills and Critical Thinking
(NYU Manager: Owais; UR Delivery Manager: Laxmi)

Description: The EDA team seeks a dedicated and detail-oriented Senior Developer I to join our dynamic team. The successful candidate will handle repetitive technical tasks, such as Healthy Planet MS SQL file loads into a data warehouse, monitoring Airflow DAGs, managing alerts, and rerunning failed processes. Additionally, the role requires monitoring various daily and weekly jobs, which may include generating revenue cycle reports and delivering data to external vendors. The ideal candidate will have robust experience with MS/Oracle SQL, Python, Epic Health Systems, and other relevant technologies.

Overview: As a Senior Developer I on the NYU EDA team, you will play a vital role in improving our data load and management processes. Your primary responsibilities will be to ensure the accuracy and timeliness of data loads, maintain the health of data pipelines, and verify that all scheduled jobs complete successfully. You will collaborate with cross-functional teams to identify and resolve issues, improve processes, and maintain a high standard of data integrity.

Responsibilities:
- Manage and perform Healthy Planet file loads into a data warehouse.
- Monitor Airflow DAGs for successful completion, manage alerts, and rerun failed tasks as necessary.
- Monitor and oversee other daily and weekly jobs, including FGP cash reports and external reports.
- Collaborate with the data engineering team to streamline data processing workflows.
- Develop automation scripts in SQL and Python to reduce manual intervention in repetitive tasks.
- Ensure all data-related tasks are performed accurately and on time.
- Investigate and resolve data discrepancies and processing issues.
- Prepare and maintain documentation for processes and workflows.
- Conduct periodic data audits to ensure data integrity and compliance with defined standards.

Skillset Requirements:
- MS/Oracle SQL
- Python
- Data warehousing and ETL processes
- Monitoring tools such as Apache Airflow
- Data quality and integrity assurance
- Strong analytical and problem-solving abilities
- Excellent written and verbal communication

Additional Skillset:
- Familiarity with monitoring and managing Apache Airflow DAGs.

Experience: Minimum of 5 years in a similar role, with a focus on data management and process automation. Proven track record of successfully managing complex data processes and meeting deadlines.

Education: Bachelor's degree in Computer Science, Information Technology, Data Science, or a related field.

Certifications: Epic Cogito, MS/Oracle SQL, Python, or data management certifications are a plus.
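Since the role centers on monitoring Airflow DAGs and rerunning failed processes, here is a minimal sketch of how that can be scripted against Airflow 2.x's stable REST API; the base URL, credentials, and DAG id are hypothetical placeholders, and the exact fields of the clear request vary slightly between Airflow versions:

```python
# Sketch: list failed runs of a DAG through Airflow 2.x's stable REST API and
# clear their task instances so the scheduler re-executes them. All names
# below are hypothetical.
import requests

BASE = "http://airflow.example.com/api/v1"   # hypothetical Airflow webserver
AUTH = ("monitor_user", "monitor_password")  # hypothetical credentials
DAG_ID = "healthy_planet_file_load"          # hypothetical DAG id

resp = requests.get(
    f"{BASE}/dags/{DAG_ID}/dagRuns",
    params={"state": "failed"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

for run in resp.json().get("dag_runs", []):
    print("Rerunning failed run:", run["dag_run_id"])
    # Clearing task instances marks them for re-execution by the scheduler.
    requests.post(
        f"{BASE}/dags/{DAG_ID}/clearTaskInstances",
        json={"dag_run_id": run["dag_run_id"], "only_failed": True, "dry_run": False},
        auth=AUTH,
        timeout=30,
    ).raise_for_status()
```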

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are hiring a Data Engineer to build and maintain data pipelines for our analytics platform. Perfect for engineers focused on data processing and scalability.

Key Responsibilities:
- Design and implement ETL processes
- Manage data warehouses and ensure data quality
- Collaborate with data scientists to provide necessary data
- Optimize data workflows for performance

Required Skills & Qualifications:
- Proficiency in SQL and Python
- Experience with data pipeline tools like Apache Airflow
- Familiarity with big data technologies (Spark, Hadoop)
- Bonus: knowledge of cloud data services (AWS Redshift, Google BigQuery)

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
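For illustration, a minimal PySpark sketch of the kind of batch ETL work described above; the bucket paths and column names are hypothetical:

```python
# Minimal PySpark sketch: read raw data, clean it, and write a curated table.
# All paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/")  # hypothetical path
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("ingested_at", F.current_timestamp())
)
curated.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")
```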

Posted 1 month ago

Apply

7.0 - 12.0 years

25 - 40 Lacs

Gurugram

Remote

Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
Experience: 7 to 10 years relevant experience
Shift: 6.30pm to 2.30am IST

Job Purpose: The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, HIPAA-compliant, and optimized.

Duties & Responsibilities:
- Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions.
- Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS).
- Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning.
- Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures.
- Enforce data quality and governance standards, including HIPAA compliance for PHI handling.
- Conduct code reviews, share best practices, and mentor junior data engineers.
- Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms.
- Document data workflows, schemas, and operational runbooks to support team knowledge transfer.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of hands-on experience building and operating production-grade data pipelines.
- Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or other similar tools (Microsoft SSIS, AWS Glue, Airflow).
- Strong SQL skills on PostgreSQL for data modeling and query optimization, or other similar technologies (Microsoft SQL Server, Oracle, AWS RDS).
- Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch.
- Basic proficiency in Python and Python data frameworks (Pandas, PySpark).
- Experience with version control (GitHub) and CI/CD for data projects.
- Familiarity with healthcare data standards and HIPAA compliance.
- Excellent problem-solving skills, attention to detail, and ability to work independently.
- Strong communication skills, with experience mentoring or leading small technical efforts.
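Since the stack named above centers on Dagster, here is a minimal sketch of a Dagster asset graph; the asset names and data are hypothetical stand-ins, not the actual Denials AI pipeline:

```python
# Sketch of a small Dagster asset graph. Dagster wires the dependency between
# the two assets from the argument name. All names and data are hypothetical.
import pandas as pd
from dagster import asset, materialize

@asset
def raw_denials() -> pd.DataFrame:
    # In production this would query Athena or PostgreSQL.
    return pd.DataFrame({"claim_id": [1, 2, 3], "denied": [True, False, True]})

@asset
def denial_rate(raw_denials: pd.DataFrame) -> float:
    # Downstream asset computed from the upstream DataFrame.
    return float(raw_denials["denied"].mean())

if __name__ == "__main__":
    materialize([raw_denials, denial_rate])
```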

Posted 1 month ago

Apply

10.0 - 17.0 years

20 - 30 Lacs

Chennai

Work from Office

Please find the JD below. Work location: Guindy, Chennai (work from office). Max budget: 30 LPA. Immediate joiners only. Candidates should be ready to attend an in-person interview, based on the panel members' choice; outstation candidates must be able to take a face-to-face interview in the Bengaluru or Chennai office.

Key Responsibilities:
- Design and implement transactional database architectures to support business processes.
- Optimize database performance for high-volume transactional systems.
- Develop and enforce database standards, governance, and best practices.
- Collaborate with developers to integrate database solutions into applications.
- Monitor and troubleshoot database performance issues.
- Ensure data integrity, security, and compliance with regulations.
- Conduct capacity planning and disaster recovery strategies.
- Document database designs and processes for future reference.

Required Skills and Qualifications:
- Strong expertise in transactional database systems (e.g., MySQL, PostgreSQL).
- Proficiency in data modelling and database design principles.
- Experience with performance tuning and query optimization.
- Knowledge of database security and compliance standards.
- Familiarity with cloud-based database solutions (e.g., AWS RDS, Azure SQL).
- Excellent problem-solving and analytical skills.
- Bachelor's degree in Computer Science, Information Technology, or a related field.

Preferred Qualifications:
- Experience with distributed database systems and replication.
- Certification in database technologies (e.g., Oracle Certified Professional, Microsoft Certified Solutions Expert).
- Knowledge of emerging database technologies and trends.

For more information, please call Varsha: 7200847046.
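As a small illustration of the performance-tuning work described above, the sketch below inspects a PostgreSQL query plan from Python with psycopg2; the connection details and query are hypothetical:

```python
# Sketch: inspect a slow transactional query's execution plan with psycopg2.
# Connection string, table, and parameter are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=orders user=dba password=secret host=localhost")
with conn, conn.cursor() as cur:
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        # A sequential scan here often points to a missing index.
        print(line)
conn.close()
```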

Posted 1 month ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

We are looking for an experienced ML Ops Engineer with 3+ years of experience, specializing in machine learning. In this role, you will be responsible for developing ML infrastructure around LLMs/ML models. The ideal candidate will possess a deep understanding of the ML lifecycle and infrastructure.

Key Responsibilities:
- Collaboration: Work closely with data scientists and ML engineers throughout the ML lifecycle, supporting model development, deployment, and monitoring.
- ML Ops Pipeline Development: Design, implement, and optimize ML Ops pipelines using tools and frameworks such as TensorFlow Serving, Kubeflow, MLflow, or similar technologies.
- Data Pipeline Engineering: Build and maintain data pipelines and infrastructure necessary for enterprise-scale machine learning applications, focusing on tasks like data ingestion, preprocessing, transformation, feature engineering, and model training.
- Cloud Implementation: Develop cloud-based ML Ops solutions on platforms like AWS, Azure, and GCP.
- Containerization: Apply containerization technologies like Docker and Kubernetes.
- Model Deployment and Monitoring: Deploy and monitor various machine learning models in production, including text/NLP and generative AI models.
- CI/CD Automation: Build and maintain CI/CD pipelines using tools such as GitLab CI, GitHub Actions, or Airflow to streamline the ML lifecycle.
- Model Review and Quality Assurance: Participate in data science model reviews, focusing on code optimization, containerization, deployment, versioning, and quality monitoring.
- Data Model Development: Contribute to data model development with an emphasis on auditability, versioning, and data security, implementing practices like lineage tracking and model explainability.
- Mentorship: Provide guidance and support to junior engineers, fostering a collaborative and innovative team environment.

Requirements:
- Experience: Minimum of 3+ years of relevant work experience in ML Ops.
- Domain Knowledge: Strong expertise in generative AI, advanced NLP, and machine learning techniques.
- Production Experience: Proven experience in deploying and maintaining production-grade AI solutions.
- Communication Skills: Excellent communication and teamwork skills, with the ability to work independently when needed.
- Problem-Solving: Strong analytical and problem-solving capabilities.
- Continuous Learning: Stay informed about the latest advancements in ML Ops technologies and actively explore new tools and techniques to improve system performance and reliability.
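For context, a minimal sketch of the MLflow-style experiment tracking this role mentions; the model and metric are hypothetical toy examples:

```python
# Sketch: log parameters, a metric, and a model artifact with MLflow so the
# run is reproducible and the model can later be served. Values are toy data.
import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression().fit(X, y)

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for serving
```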

Posted 1 month ago

Apply

12.0 - 17.0 years

20 - 25 Lacs

Hyderabad

Work from Office

About Persistent

We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe.

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com.

About the Position

We are looking for an experienced Python Lead/Architect to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be responsible for writing and testing scalable code, developing back-end components, and integrating user-facing elements in collaboration with front-end developers. To be successful as a Python Lead, you should possess in-depth knowledge of object-relational mapping, experience with server-side logic, and above-average knowledge of Python programming. Ultimately, as a top-class Python developer, you should be able to design highly responsive web applications that perfectly meet the needs of the client.

What You'll Do
- Be fully responsible for the quality of code for which the team is responsible (either through personal review or thoughtful delegation)
- Write code whenever required (this is not a pure management role)
- Participate in the development and evangelization of Python coding standards within the organization
- Take full responsibility for delivering solutions into production (working through operations teams)
- Be responsible for training and mentoring developers on the team and recommending actions around hiring, firing, and promotion
- Work with technical project management to create and maintain a prioritized backlog and schedule for the team
- Be responsible for architectural decisions in consultation with other members of the engineering leadership
- Contribute to team effort by accomplishing related results as needed
- Display solid fiscal responsibility by managing and adhering to budgets and always seeking out operating efficiencies and economies

Expertise You'll Bring
- 4+ years in leading development teams
- Leading teams successfully in a dynamic, fast time-to-market, customer-focused environment
- Leading initiatives where teams comprised onshore and offshore resources
- Working with HTML/CSS/JS
- Programming in Python
- Leveraging serverless architecture within AWS or similar cloud platforms
- Developing server-side web applications, REST APIs, and/or microservices
- Working with small, nimble development teams
- Cybersecurity concepts and technologies
- Data pipelines or distributed message queues
- Internet/web technologies, such as web browsers, AJAX, HTTP, HTML/XML, REST, JavaScript, CSS, XSL/XSLT, XPath, etc.
- Strong organizational ability, including quick responses in a fast-paced environment
- Commitment to quality and an eye for detail
- Passion for software security
- Eagerness to learn new technology and solve problems
- Inclusive, roll-up-your-sleeves work ethic; willingness to participate in daily workloads when needed to meet deadlines

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.

Posted 1 month ago

Apply

12.0 - 17.0 years

20 - 25 Lacs

Pune

Work from Office

About Persistent

We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,231M annual revenue (16% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,500 people located in 19 countries across the globe.

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. For more details, please visit www.persistent.com.

About the Position

We are looking for an experienced Python Lead/Architect to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be responsible for writing and testing scalable code, developing back-end components, and integrating user-facing elements in collaboration with front-end developers. To be successful as a Python Lead, you should possess in-depth knowledge of object-relational mapping, experience with server-side logic, and above-average knowledge of Python programming. Ultimately, as a top-class Python developer, you should be able to design highly responsive web applications that perfectly meet the needs of the client.

What You'll Do
- Be fully responsible for the quality of code for which the team is responsible (either through personal review or thoughtful delegation)
- Write code whenever required (this is not a pure management role)
- Participate in the development and evangelization of Python coding standards within the organization
- Take full responsibility for delivering solutions into production (working through operations teams)
- Be responsible for training and mentoring developers on the team and recommending actions around hiring, firing, and promotion
- Work with technical project management to create and maintain a prioritized backlog and schedule for the team
- Be responsible for architectural decisions in consultation with other members of the engineering leadership
- Contribute to team effort by accomplishing related results as needed
- Display solid fiscal responsibility by managing and adhering to budgets and always seeking out operating efficiencies and economies

Expertise You'll Bring
- 4+ years in leading development teams
- Leading teams successfully in a dynamic, fast time-to-market, customer-focused environment
- Leading initiatives where teams comprised onshore and offshore resources
- Working with HTML/CSS/JS
- Programming in Python
- Leveraging serverless architecture within AWS or similar cloud platforms
- Developing server-side web applications, REST APIs, and/or microservices
- Working with small, nimble development teams
- Cybersecurity concepts and technologies
- Data pipelines or distributed message queues
- Internet/web technologies, such as web browsers, AJAX, HTTP, HTML/XML, REST, JavaScript, CSS, XSL/XSLT, XPath, etc.
- Strong organizational ability, including quick responses in a fast-paced environment
- Commitment to quality and an eye for detail
- Passion for software security
- Eagerness to learn new technology and solve problems
- Inclusive, roll-up-your-sleeves work ethic; willingness to participate in daily workloads when needed to meet deadlines

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment
- We offer hybrid work options and flexible working hours to accommodate various needs and preferences.
- Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.

Let's unleash your full potential. See Beyond, Rise Above.

Posted 1 month ago

Apply

12.0 - 17.0 years

20 - 25 Lacs

Bengaluru

Work from Office

About Persistent

We are a trusted Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 14 of the 30 most innovative US companies, 80% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our growth trajectory continues, as we reported $1,186M annual revenue (13.2% Y-o-Y). Along with our growth, we've onboarded over 4,900 new employees in the past year, bringing our total employee count to over 23,850 people located in 21 countries across the globe. Throughout this market-leading growth, we've maintained strong employee satisfaction: over 94% of our employees approve of the CEO, and 89% recommend working at Persistent to a friend.

At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details, please visit www.persistent.com.

About the Position

We are looking for an experienced Python Lead/Architect to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be responsible for writing and testing scalable code, developing back-end components, and integrating user-facing elements in collaboration with front-end developers. To be successful as a Python Lead, you should possess in-depth knowledge of object-relational mapping, experience with server-side logic, and above-average knowledge of Python programming. Ultimately, as a top-class Python developer, you should be able to design highly responsive web applications that perfectly meet the needs of the client.

What You'll Do
- Be fully responsible for the quality of code for which the team is responsible (either through personal review or thoughtful delegation)
- Write code whenever required (this is not a pure management role)
- Participate in the development and evangelization of Python coding standards within the organization
- Take full responsibility for delivering solutions into production (working through operations teams)
- Be responsible for training and mentoring developers on the team and recommending actions around hiring, firing, and promotion
- Work with technical project management to create and maintain a prioritized backlog and schedule for the team
- Be responsible for architectural decisions in consultation with other members of the engineering leadership
- Contribute to team effort by accomplishing related results as needed
- Display solid fiscal responsibility by managing and adhering to budgets and always seeking out operating efficiencies and economies

Expertise You'll Bring
- 4+ years in leading development teams
- Leading teams successfully in a dynamic, fast time-to-market, customer-focused environment
- Leading initiatives where teams comprised onshore and offshore resources
- Working with HTML/CSS/JS
- Programming in Python
- Leveraging serverless architecture within AWS or similar cloud platforms
- Developing server-side web applications, REST APIs, and/or microservices
- Working with small, nimble development teams
- Cybersecurity concepts and technologies
- Data pipelines or distributed message queues
- Internet/web technologies, such as web browsers, AJAX, HTTP, HTML/XML, REST, JavaScript, CSS, XSL/XSLT, XPath, etc.
- Strong organizational ability, including quick responses in a fast-paced environment
- Commitment to quality and an eye for detail
- Passion for software security
- Eagerness to learn new technology and solve problems
- Inclusive, roll-up-your-sleeves work ethic; willingness to participate in daily workloads when needed to meet deadlines

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential. See Beyond, Rise Above.

Posted 1 month ago

Apply

0.0 - 5.0 years

0 Lacs

Pune

Remote

The candidate must be proficient in Python and its libraries and frameworks; good with data modeling, PySpark, MySQL concepts, Power BI, and AWS/Azure concepts; and experienced in optimizing large transactional DBs, data visualization tools, Databricks, and FastAPI.

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Chennai

Work from Office

Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves.

The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles and data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures.

This position is based out of one of the offices of our affiliate Acqueon Technologies in India and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.

Responsibilities:
- Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage).
- Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security.
- Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g., DOMO, Looker, Databricks).
- Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards.
- Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability.
- Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development.

Requirements:
- Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g., Mathematics, Statistics, Engineering).
- 3+ years of experience using GCP Data Lake and Storage Services. Certifications in GCP are preferred (e.g., Professional Cloud Developer, Professional Cloud Database Engineer).
- Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows.
- Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion.

Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer.

View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal

Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
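As an illustration of the GCP Data Lake ingestion described above, here is a minimal sketch using the google-cloud-bigquery client to load files from Cloud Storage into BigQuery; the project, dataset, and bucket names are hypothetical:

```python
# Sketch: load newline-delimited JSON files from Cloud Storage into a
# BigQuery table. All project, dataset, and bucket names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)
load_job = client.load_table_from_uri(
    "gs://example-lake-raw/events/*.json",
    "example-project.analytics.events",
    job_config=job_config,
)
load_job.result()  # block until the load job completes
table = client.get_table("example-project.analytics.events")
print(f"Table now has {table.num_rows} rows")
```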

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Chennai

Work from Office

Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves.

The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles and data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures.

This position is based out of one of the offices of our affiliate Acqueon Technologies in India and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.

Responsibilities:
- Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage).
- Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security.
- Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g., DOMO, Looker, Databricks).
- Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards.
- Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability.
- Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development.

Requirements:
- Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g., Mathematics, Statistics, Engineering).
- 4+ years of experience using GCP Data Lake and Storage Services. Certifications in GCP are preferred (e.g., Professional Cloud Developer, Professional Cloud Database Engineer).
- Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows.
- Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion.

Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer.

View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal

Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Job Title: Automation Engineer - Databricks
Job Type: Full-time, Contractor
Location: Hybrid - Hyderabad | Pune | Delhi

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a detail-oriented and innovative Automation Engineer (Databricks) to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks, data warehouse, and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt Test, or similar.
- Proficiency in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
- Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience with implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
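For illustration, a minimal sketch of the automated data-validation tests described above, written with pytest over Spark tables; the table names and rules are hypothetical:

```python
# Sketch: pytest-based data quality checks against Spark/Databricks tables.
# Table names ("bronze.orders", "silver.orders") are hypothetical.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("dq-tests").getOrCreate()

def test_no_null_primary_keys(spark):
    df = spark.table("silver.orders")  # hypothetical curated table
    assert df.filter(df.order_id.isNull()).count() == 0

def test_row_counts_match_source(spark):
    source = spark.table("bronze.orders").count()
    target = spark.table("silver.orders").count()
    assert target == source, f"expected {source} rows, found {target}"
```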

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organisations seeking independent talent.

Flexing It has partnered with our client, a global leader in energy management and automation, which is seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for their different data consumers, ranging from BI and analytics to data science applications. The role requires working with current technologies, in particular Apache Spark, Lambda & Step Functions, Glue Data Catalog, and Redshift, in an AWS environment.

Key Responsibilities:
- Design and develop new data ingestion patterns into the IntelDS Raw and/or Unified data layers, based on the requirements and needs for connecting new data sources or building new data objects. Working in ingestion patterns allows the data pipelines to be automated.
- Participate in and apply DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment. This can include the design and implementation of end-to-end data integration tests and/or CI/CD pipelines.
- Analyze existing data models, and identify and implement performance optimizations for data ingestion and data consumption. The objective is to accelerate data availability within the platform and to consumer applications.
- Support client applications in connecting to and consuming data from the platform, and ensure they follow our guidelines and best practices.
- Participate in monitoring the platform and debugging detected issues and bugs.

Skills required:
- Minimum of 3 years prior experience as a data engineer, with proven experience with Big Data and data lakes in a cloud environment.
- Bachelor's or master's degree in computer science or applied mathematics (or equivalent).
- Proven experience working with data pipelines / ETL / BI, regardless of the technology.
- Proven experience working with AWS, including at least 3 of: Redshift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda.
- Big Data technologies and distributed systems: one of Spark, Presto, or Hive.
- Python: scripting and object-oriented programming.
- Fluency in SQL for data warehousing (Redshift in particular is a plus).
- Good understanding of data warehousing and data modelling concepts.
- Familiarity with Git, Linux, and CI/CD pipelines is a plus.
- Strong systems/process orientation with demonstrated analytical thinking, organizational skills, and problem-solving skills.
- Ability to self-manage, prioritize, and execute tasks in a demanding environment.
- Strong consultancy orientation and experience, with the ability to form collaborative, productive working relationships across diverse teams and cultures, is a must.
- Willingness and ability to train and teach others.
- Ability to facilitate meetings and follow up on resulting action items.
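As a small illustration of the Lambda & Step Functions stack named above, here is a sketch that starts a Step Functions execution from Python with boto3; the state machine ARN, region, and payload are hypothetical:

```python
# Sketch: kick off an ingestion workflow with AWS Step Functions via boto3.
# The ARN, region, and input payload below are hypothetical placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions", region_name="eu-west-1")
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:eu-west-1:123456789012:stateMachine:ingest-raw",
    input=json.dumps({"source": "sales_api", "run_date": "2024-01-01"}),
)
print("Started execution:", response["executionArn"])
```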

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Job Title: Backend Developer - Python
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India

Job Summary: Join one of our top customers' teams as a Backend Developer and help drive scalable, high-performance solutions at the intersection of machine learning and data engineering. You'll collaborate with skilled professionals to design, implement, and maintain backend systems powering advanced AI/ML applications in a dynamic, on-site environment.

Key Responsibilities:
- Develop, test, and deploy robust backend components and microservices using Python and PySpark.
- Implement and optimize data pipelines leveraging Databricks and distributed computing frameworks.
- Design and maintain efficient databases with MySQL, ensuring data integrity and high availability.
- Integrate machine learning models into production-ready backend systems supporting AI-driven features.
- Collaborate closely with data scientists and engineers to deliver end-to-end solutions aligned with business goals.
- Monitor, troubleshoot, and enhance system performance, utilizing Redis for caching and improved scalability.
- Write clear and maintainable documentation, and communicate effectively with team members both verbally and in writing.

Required Skills and Qualifications:
- Proficiency in Python programming for backend development.
- Hands-on experience with Databricks and PySpark in a production environment.
- Strong understanding of MySQL database design, querying, and performance tuning.
- Practical background in machine learning concepts and deploying ML models.
- Experience with Redis for caching and state management.
- Excellent written and verbal communication skills, with a keen attention to detail.
- Demonstrated ability to work effectively in an on-site, collaborative setting in Hyderabad.

Preferred Qualifications:
- Previous experience in high-growth AI/ML or data engineering projects.
- Familiarity with additional backend technologies or cloud platforms.
- Demonstrated leadership or mentorship in technical teams.
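For illustration, a minimal sketch of the Redis caching pattern this posting mentions, a cache-aside lookup in front of MySQL; the connection details, schema, and TTL are hypothetical:

```python
# Sketch: cache-aside pattern with Redis in front of MySQL. On a cache miss,
# the row is fetched from MySQL and cached with a TTL. Names are hypothetical.
import json
import redis
import pymysql

cache = redis.Redis(host="localhost", port=6379, db=0)

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached:
        return json.loads(cached)  # cache hit: skip the database entirely
    conn = pymysql.connect(host="localhost", user="app", password="secret",
                           database="app", cursorclass=pymysql.cursors.DictCursor)
    with conn:
        with conn.cursor() as cur:
            cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone() or {}
    cache.setex(key, 300, json.dumps(row))  # cache for 5 minutes
    return row
```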

Posted 1 month ago

Apply

2.0 - 4.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Job Title: Automation Engineer
Job Type: Full-time, Contractor

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a detail-oriented and innovative Automation Engineer to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks, data warehouse, and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt Test, or similar.
- Proficiency in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
- Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience with implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 9 Lacs

Noida

Work from Office

About Us: At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications, and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources, and exceptional customer service, all backed by TELUS, our multi-billion-dollar telecommunications parent.

Required Skills:
- Design, develop, and support data pipelines and related data products and platforms.
- Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments, requirements reviews, and work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Participate in agile development "scrums" and solution reviews.
- Mentor junior Data Engineers.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
- Demonstrate SQL and database proficiency in various data engineering tasks.
- Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
- Develop Unix scripts to support various data operations.
- Model data to support business intelligence and analytics initiatives.
- Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
- Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).

Qualifications:
- Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
- 4+ years of data engineering experience.
- 2 years of data solution architecture and design experience.
- GCP Certified Data Engineer (preferred).
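As an illustration of the DAG-based workflow automation listed above, here is a minimal Prefect 2.x flow sketch; the task contents are hypothetical:

```python
# Sketch: a tiny Prefect 2.x flow with an extract and a load task. The task
# bodies are hypothetical placeholders for real data operations.
from prefect import flow, task

@task(retries=2)
def extract() -> list[int]:
    # A real task would pull from an API, database, or GCS bucket.
    return [1, 2, 3]

@task
def load(values: list[int]) -> None:
    print(f"Loaded {len(values)} records")

@flow(log_prints=True)
def nightly_sync():
    load(extract())

if __name__ == "__main__":
    nightly_sync()
```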

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Who we are

About Stripe: Stripe is a financial infrastructure platform for businesses. Millions of companies, from the world's largest enterprises to the most ambitious startups, use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.

About the Team: The Reporting Platform Data Foundations group maintains and evolves the core systems that power reporting data for Stripe's users. We're responsible for Aqueduct, the data ingestion and processing platform that powers core reporting data for millions of businesses on Stripe. We integrate with the latest Data Platform tooling, such as Falcon for real-time data. Our goal is to provide a robust, scalable, and efficient data infrastructure that enables clear and timely insights for Stripe's users.

What you'll do: As a Software Engineer on the Reporting Platform Data Foundations group, you will lead efforts to improve and redesign core data ingestion and processing systems that power reporting for millions of Stripe users. You'll tackle complex challenges in data management, scalability, and system architecture.

Responsibilities:
- Design and implement a new backfill model for reporting data that can handle hundreds of millions of row additions and updates efficiently.
- Revamp the end-to-end experience for product teams adding or changing API-backed datasets, improving ergonomics and clarity.
- Enhance the Aqueduct Dependency Resolver system, responsible for determining what critical data to update for Stripe's users based on events. Areas include error management, observability, and delegation of issue resolution to product teams.
- Lead integration with the latest Data Platform tooling, such as Falcon for real-time data, while managing deprecation of older systems.
- Implement and improve data warehouse management practices, ensuring data freshness and reliability.
- Collaborate with product teams to understand their reporting needs and data requirements.
- Design and implement scalable solutions for data ingestion, processing, and storage.
- Onboard, spin up, and mentor engineers, and set the group's technical direction and strategy.

Who you are: We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum Requirements:
- 8+ years of professional experience writing high-quality, production-level code or software programs.
- Extensive experience in designing and implementing large-scale data processing systems.
- Strong background in distributed systems and data pipeline architectures.
- Proficiency in at least one modern programming language (e.g., Go, Java, Python, Scala).
- Experience with big data technologies (e.g., Hadoop, Flink, Spark, Kafka, Pinot, Trino, Iceberg).
- Solid understanding of data modeling and database systems.
- Excellent problem-solving skills and ability to tackle complex technical challenges.
- Strong communication skills and ability to work effectively with cross-functional teams.
- Experience mentoring other engineers and driving technical initiatives.

Preferred Qualifications:
- Experience with real-time data processing and streaming systems.
- Knowledge of data warehouse technologies and best practices.
- Experience in migrating legacy systems to modern architectures.
- Contributions to open-source projects or technical communities.

In-office expectations: Office-assigned Stripes in most of our locations are currently expected to spend at least 50% of the time in a given month in their local office or with users. This expectation may vary depending on role, team, and location. For example, Stripes in Stripe Delivery Center roles in Mexico City, Mexico and Bengaluru, India work 100% from the office. Also, some teams have greater in-office attendance requirements, to appropriately support our users and workflows, which the hiring manager will discuss. This approach helps strike a balance between bringing people together for in-person collaboration and learning from each other, while supporting flexibility when possible.

Pay and benefits: Stripe does not yet include pay ranges in job postings in every country. Stripe strongly values pay transparency and is working toward pay transparency globally.
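For context, a minimal sketch of event-driven ingestion of the kind the Aqueduct description suggests, consuming change events from Kafka with kafka-python; the topic, broker, and routing map are hypothetical and not Stripe's actual design:

```python
# Sketch: consume change events from Kafka and decide which downstream
# datasets need refreshing. Topic, broker, and routing are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "reporting.events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="dependency-resolver",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# A real resolver would maintain a full event-type -> dataset dependency map.
ROUTING = {"charge.updated": ["balance_report", "fee_report"]}

for message in consumer:
    event = message.value
    for dataset in ROUTING.get(event.get("type"), []):
        print(f"Scheduling refresh of {dataset} for event {event.get('id')}")
```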

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Experience: 3+ years

As a Senior Data Engineer, you'll build robust data pipelines and enable data-driven decisions by developing scalable solutions for analytics and reporting. Perfect for someone with strong database and ETL expertise.

Job Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL processes.
- Work with large data sets from diverse sources.
- Develop and optimize data models, warehouses, and integrations.
- Collaborate with data scientists, analysts, and product teams.
- Ensure data quality, security, and compliance standards.

Qualifications:
- Proficiency in SQL, Python, and data pipeline tools (Airflow, Spark).
- Experience with data warehouses (Redshift, Snowflake, BigQuery).
- Knowledge of cloud platforms (AWS/GCP/Azure).
- Strong problem-solving and analytical skills.

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Hyderabad

Work from Office

Job Title: Data Engineer
Job Type: Full-Time
Location: On-site, Hyderabad, Telangana, India

Job Summary: We are seeking an accomplished Data Engineer to join one of our top customers' dynamic teams in Hyderabad. You will be instrumental in designing, implementing, and optimizing data pipelines that drive our business insights and analytics. If you are passionate about harnessing the power of big data, possess a strong technical skill set, and thrive in a collaborative environment, we would love to hear from you.

Key Responsibilities:
- Develop and maintain scalable data pipelines using Python, PySpark, and SQL.
- Implement robust data warehousing and data lake architectures.
- Leverage the Databricks platform to enhance data processing and analytics capabilities.
- Model, design, and optimize complex database schemas.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Lead and mentor junior data engineers and establish best practices.
- Troubleshoot and resolve data processing issues promptly.

Required Skills and Qualifications:
- Strong proficiency in Python and PySpark.
- Extensive experience with the Databricks platform.
- Advanced SQL and data modeling skills.
- Demonstrated experience in data warehousing and data lake architectures.
- Exceptional problem-solving and analytical skills.
- Strong written and verbal communication skills.

Preferred Qualifications:
- Experience with graph databases, particularly MarkLogic.
- Proven track record of leading data engineering teams.
- Understanding of data governance and best practices in data management.

Posted 1 month ago

Apply

1.0 - 4.0 years

4 - 8 Lacs

Chennai

Work from Office

JIDOKA SYSTEMS PRIVATE LIMITED is looking for Data Science Engineers to join our dynamic team and embark on a rewarding career journey.

- Data Exploration and Preparation: Explore and analyze large datasets to understand patterns and trends. Prepare and clean datasets for analysis and model development.
- Feature Engineering: Engineer features from raw data to enhance the performance of machine learning models. Collaborate with data scientists to identify relevant features for model training.
- Model Development: Design and implement machine learning models to solve business problems. Work on both traditional statistical models and modern machine learning algorithms.
- Scalable Data Pipelines: Develop scalable and efficient data pipelines for processing and transforming data. Utilize technologies like Apache Spark for large-scale data processing.
- Model Deployment: Deploy machine learning models into production environments. Collaborate with DevOps teams to integrate models into existing systems.
- Performance Optimization: Optimize the performance of data pipelines and machine learning models. Fine-tune models for accuracy, efficiency, and scalability.
- Collaboration: Collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders. Communicate technical concepts and findings to non-technical audiences.
- Continuous Learning: Stay current with advancements in data science and engineering. Implement new technologies and methodologies to improve data engineering processes.
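For illustration, a minimal scikit-learn sketch combining the feature-engineering and model-development steps described above, run on synthetic data:

```python
# Sketch: a scikit-learn Pipeline chaining a feature-engineering step
# (scaling) with a model-development step (classifier). Data is synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),    # feature engineering step
    ("clf", LogisticRegression()),  # model development step
])
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```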

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Chennai

Remote

Who We Are
For 20 years, we have been working with organizations large and small to help solve business challenges through technology. We bring a unique combination of engineering and strategy to Make Data Work for organizations. Our clients range from the travel and leisure industry to publishing, retail, and banking. The common thread between our clients is their commitment to making data work, as seen through their investment in those efforts. In our quest to solve data challenges for our clients, we work with large enterprise, cloud-based, and marketing technology suites. We have a deep understanding of these solutions, so we can help our clients make the most of their investment in an efficient way to have a data-driven business. Softcrylic now joins forces with Hexaware to Make Data Work in bigger ways!

Why Work at Softcrylic?
Softcrylic provides an engaging, team-focused, and rewarding work environment where people are excited about the work they do and passionate about delivering creative solutions to our clients.

Work Timing: 12:30 pm to 9:30 pm (flexible)

How to approach the interview: All technical interview rounds will be conducted virtually. The final round will be a face-to-face interview with HR in Chennai, which includes a 15-minute in-person technical assessment/discussion. Make sure to prepare for both the virtual and in-person components.

Job Description:
5+ years of experience working as a Data Engineer.
Experience migrating existing datasets from BigQuery to Databricks using Python scripts.
Conduct thorough data validation and QA to ensure accuracy, completeness, parity, and consistency in reporting.
Monitor the stability and status of migrated data pipelines, applying fixes as needed.
Migrate data pipelines from Airflow to Airbyte/Dagster based on provided frameworks.
Develop Python scripts to facilitate data migration and pipeline transformation.
Perform rigorous testing on migrated data and pipelines to ensure quality and reliability.

Required Skills:
Strong experience with Python for scripting.
Good experience working with Databricks and BigQuery.
Familiarity with data pipeline tools such as Airflow, Airbyte, and Dagster.
Strong understanding of data quality principles and validation techniques.
Ability to work collaboratively with cross-functional teams.

Contact: Dinesh M, dinesh.m@softcrylic.com, +91 89255 18191
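As a hedged illustration of the validation and QA work this role describes (not part of the original posting): a minimal row-count parity check between a BigQuery table and its migrated Databricks copy. The connection details, table names, and token are hypothetical placeholders; the sketch assumes the google-cloud-bigquery and databricks-sql-connector packages.

```python
# Illustrative sketch only: compare row counts across BigQuery and Databricks.
# All hostnames, paths, tokens, and table names below are hypothetical.
from google.cloud import bigquery
from databricks import sql  # provided by the databricks-sql-connector package

# Count rows on the BigQuery side (uses ambient GCP credentials).
bq = bigquery.Client()
bq_rows = list(bq.query("SELECT COUNT(*) AS n FROM `proj.dataset.events`").result())
bq_count = bq_rows[0]["n"]

# Count rows on the Databricks side via a SQL warehouse.
with sql.connect(
    server_hostname="adb-1234567890.azuredatabricks.net",  # hypothetical workspace
    http_path="/sql/1.0/warehouses/abc123",                # hypothetical warehouse
    access_token="dapi-REDACTED",                          # hypothetical token
) as conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM analytics.events")
    dbx_count = cur.fetchone()[0]

status = "PARITY OK" if bq_count == dbx_count else "MISMATCH"
print(f"BigQuery={bq_count} Databricks={dbx_count} -> {status}")
```

In practice a migration QA suite would extend this beyond counts to column-level checksums and null-rate comparisons, but the parity-check shape stays the same.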

Posted 1 month ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Hyderabad

Work from Office

We are seeking a skilled Data Engineer with extensive experience in the Cloudera Data Platform (CDP) to join our dynamic team. The ideal candidate will have over four years of experience in designing, developing, and managing data pipelines, and will be proficient in big data technologies. This role requires a deep understanding of data engineering best practices and a passion for optimizing data flow and collection across a diverse range of sources.

Required Skills and Qualifications:
Experience: 4+ years of experience in data engineering, with a strong focus on big data technologies.
Cloudera Expertise: Proficient in Cloudera Data Platform (CDP) and its ecosystem, including Hadoop, Spark, HDFS, Hive, Impala, and other relevant tools.
Programming Languages: Strong programming skills in Python, Scala, or Java.
ETL Tools: Experience with ETL tools and processes.
Data Warehousing: Knowledge of data warehousing concepts and experience with data modeling.
SQL: Advanced SQL skills for querying and manipulating large datasets.
Linux/Unix: Proficiency in Linux/Unix shell scripting.
Version Control: Familiarity with version control systems like Git.
Problem-Solving: Strong analytical and problem-solving skills.
Communication: Excellent verbal and written communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.

Preferred Qualifications:
Cloud Experience: Experience with cloud platforms such as AWS, Azure, or Google Cloud.
Data Streaming: Experience with real-time data streaming technologies like Kafka.
DevOps: Familiarity with DevOps practices and tools such as Docker, Kubernetes, and CI/CD pipelines.
Education: Bachelor's degree in Computer Science, Information Technology, or a related field.

Main Skills: Hadoop, Spark, Hive, Impala, Scala, Python, Java, Linux

Roles and Responsibilities:
Develop and maintain scalable data pipelines using Cloudera Data Platform (CDP) components.
Design and implement ETL processes to extract, transform, and load data from various data sources into the data lake or data warehouse.
Optimize and troubleshoot data workflows for performance and efficiency.
Manage and administer Hadoop clusters within the Cloudera environment.
Monitor and ensure the health and performance of the Cloudera platform.
Implement data security best practices, including encryption, data masking, and user access control.
Work closely with data scientists, analysts, and other stakeholders to understand data requirements and provide the necessary support.
Collaborate with cross-functional teams to design and deploy big data solutions that meet business needs.
Participate in code reviews, provide feedback, and contribute to team knowledge sharing.
Create and maintain comprehensive documentation of data engineering processes, data architecture, and system configurations.
Provide support for production data pipelines, including troubleshooting and resolving issues as they arise.
Train and mentor junior data engineers, fostering a culture of continuous learning and improvement.
Stay up to date with the latest industry trends and technologies related to data engineering and big data.
Propose and implement improvements to existing data pipelines and architectures.
Explore and integrate new tools and technologies to enhance the capabilities of the data engineering team.
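For illustration only (not from the posting): a minimal PySpark sketch of the kind of Hive-backed ETL this role describes on CDP, reading a raw Hive table and writing a date-partitioned curated copy so Hive/Impala queries can prune partitions. Database and table names are hypothetical.

```python
# Illustrative sketch only: Spark job against the cluster's Hive metastore.
# Database and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("clickstream_etl")
    .enableHiveSupport()                 # use the cluster's Hive metastore
    .getOrCreate()
)

events = spark.table("raw_db.clickstream")  # hypothetical source table

daily = (
    events.withColumn("event_date", F.to_date("event_ts"))
          .filter(F.col("user_id").isNotNull())   # drop unattributable events
)

# Partition by date so downstream Hive/Impala queries prune efficiently.
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("curated_db.clickstream_daily"))
```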

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies