5.0 - 8.0 years
9 - 19 Lacs
Hyderabad
Work from Office
We are looking for a skilled Data Engineer with strong expertise in Python, PySpark, SQL, AWS, and Databricks to join our data engineering team. The ideal candidate will be responsible for building scalable data pipelines, transforming large datasets, and enabling data-driven decision-making across the organization.
Role & responsibilities
Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
Preferred candidate profile
Strong proficiency in Python for scripting and data manipulation.
Hands-on experience with PySpark for distributed data processing.
Proficient in writing complex SQL queries for large-scale data extraction and transformation.
Solid understanding and experience with the AWS cloud ecosystem (especially S3, Glue, EMR, Lambda).
Knowledge of data warehousing, data lakes, and ETL/ELT processes.
Familiarity with version control tools like Git and workflow orchestration tools (e.g., Airflow) is a plus.
Location - Hyderabad (Work From Office)
Notice - Immediate or 15 days
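The ingest-validate-transform-load loop this role describes can be sketched without a Spark cluster. Below is a toy, stdlib-only stand-in: sqlite3 plays the role of the warehouse, plain dicts play the role of DataFrames, and all table and column names are invented for illustration.

```python
import sqlite3

# Toy records standing in for data ingested from a source like S3 (hypothetical).
raw_rows = [
    {"order_id": 1, "amount": "120.50", "country": "IN"},
    {"order_id": 2, "amount": "80.00", "country": "US"},
    {"order_id": 3, "amount": None, "country": "IN"},  # will fail validation
]

def validate(row):
    """Data-quality rule: amount must be present."""
    return row["amount"] is not None

def transform(row):
    """Cast amount to float, a stand-in for heavier PySpark transforms."""
    return (row["order_id"], float(row["amount"]), row["country"])

def run_pipeline(rows, conn):
    """Validate, transform, and load rows; report loaded vs rejected counts."""
    good = [transform(r) for r in rows if validate(r)]
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", good)
    return len(good), len(rows) - len(good)

conn = sqlite3.connect(":memory:")
loaded, rejected = run_pipeline(raw_rows, conn)
print(loaded, rejected)  # 2 rows loaded, 1 rejected by the quality check
```

In a real Databricks pipeline the validation rule would typically be expressed as a DataFrame filter and the load as a Delta table write, but the shape of the logic is the same.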
Posted 6 days ago
8.0 - 13.0 years
25 - 30 Lacs
Chennai
Work from Office
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves.
The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles, data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures.
This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.
Responsibilities
Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage).
Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security.
Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g. DOMO, Looker, Databricks).
Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards.
Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability.
Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development.
Requirements
Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g. Mathematics, Statistics, Engineering).
3+ years of experience using GCP Data Lake and Storage Services.
Certifications in GCP are preferred (e.g. Professional Cloud Developer, Professional Cloud Database Engineer).
Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows.
Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion.
Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer.
View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal
Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
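The "complex queries" requirement above usually means window functions and subquery optimization. A minimal sketch, using sqlite3 as a local stand-in for BigQuery (table and column names are invented); the latest-row-per-key pattern shown here carries over to BigQuery nearly verbatim:

```python
import sqlite3

# sqlite3 standing in for a BigQuery dataset; schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id TEXT, event_ts INTEGER, bytes INTEGER);
INSERT INTO events VALUES
  ('a', 1, 100), ('a', 2, 300), ('b', 1, 50), ('b', 3, 250), ('b', 4, 10);
""")

# Window function: keep only the latest event per user.
latest = conn.execute("""
    SELECT user_id, event_ts, bytes FROM (
        SELECT user_id, event_ts, bytes,
               ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY event_ts DESC) AS rn
        FROM events
    ) WHERE rn = 1
    ORDER BY user_id
""").fetchall()
print(latest)  # [('a', 2, 300), ('b', 4, 10)]
```

On BigQuery the outer filter can be replaced with a `QUALIFY` clause, and partitioning/clustering the table on the filter columns is the usual cost lever.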
Posted 6 days ago
8.0 - 13.0 years
25 - 30 Lacs
Chennai
Work from Office
Join us in bringing joy to customer experience. Five9 is a leading provider of cloud contact center software, bringing the power of cloud innovation to customers worldwide. Living our values every day results in our team-first culture and enables us to innovate, grow, and thrive while enjoying the journey together. We celebrate diversity and foster an inclusive environment, empowering our employees to be their authentic selves.
The Data Engineer will help design and implement a Google Cloud Platform (GCP) Data Lake, build scalable data pipelines, and ensure seamless access to data for business intelligence and data science tools. They will support a wide range of projects while collaborating closely with management teams and business leaders. The ideal candidate will have a strong understanding of data engineering principles, data warehousing concepts, and the ability to document technical knowledge into clear processes and procedures.
This position is based out of one of the offices of our affiliate Acqueon Technologies in India, and will adopt the hybrid work arrangements of that location. You will be a member of the Acqueon team with responsibilities supporting Five9 products, collaborating with global teammates based primarily in the United States.
Responsibilities
Design, implement, and maintain a scalable Data Lake on GCP to centralize structured and unstructured data from various sources (databases, APIs, cloud storage).
Utilize GCP services including BigQuery, Dataflow, Pub/Sub, and Cloud Storage to optimize and manage data workflows, ensuring scalability, performance, and security.
Collaborate closely with data analytics and data science teams to understand data needs, ensuring data is properly prepared for consumption by various systems (e.g. DOMO, Looker, Databricks).
Implement best practices for data quality, consistency, and governance across all data pipelines and systems, ensuring compliance with internal and external standards.
Continuously monitor, test, and optimize data workflows to improve performance, cost efficiency, and reliability.
Maintain comprehensive technical documentation of data pipelines, systems, and architecture for knowledge sharing and future development.
Requirements
Bachelor's degree in Computer Science, Data Engineering, Data Science, or a related quantitative field (e.g. Mathematics, Statistics, Engineering).
4+ years of experience using GCP Data Lake and Storage Services.
Certifications in GCP are preferred (e.g. Professional Cloud Developer, Professional Cloud Database Engineer).
Advanced proficiency with SQL, with experience in writing complex queries, optimizing for performance, and using SQL in large-scale data processing workflows.
Proficiency in programming languages such as Python, Java, or Scala, with practical experience building data pipelines, automating data workflows, and integrating APIs for data ingestion.
Five9 embraces diversity and is committed to building a team that represents a variety of backgrounds, perspectives, and skills. The more inclusive we are, the better we are. Five9 is an equal opportunity employer.
View our privacy policy, including our privacy notice to California residents, here: https://www.five9.com/pt-pt/legal
Note: Five9 will never request that an applicant send money as a prerequisite for commencing employment with Five9.
Posted 6 days ago
5.0 - 9.0 years
7 - 11 Lacs
Gurugram
Work from Office
Dentsply Sirona is the world's largest manufacturer of professional dental products and technologies, with a 130-year history of innovation and service to the dental industry and patients worldwide. Dentsply Sirona develops, manufactures, and markets a comprehensive solutions offering including dental and oral health products as well as other consumable medical devices under a strong portfolio of world-class brands. Dentsply Sirona's products provide innovative, high-quality and effective solutions to advance patient care and deliver better and safer dentistry. Dentsply Sirona's global headquarters is located in Charlotte, North Carolina, USA. The company's shares are listed in the United States on NASDAQ under the symbol XRAY.
Bringing out the best in people
As advanced as dentistry is today, we are dedicated to making it even better. Our people have a passion for innovation and are committed to applying it to improve dental care. We live and breathe high performance, working as one global team, bringing out the best in each other for the benefit of dental patients, and the professionals who serve them. If you want to grow and develop as a part of a team that is shaping an industry, then we're looking for the best to join us.
Working At Dentsply Sirona You Are Able To
Develop faster with our commitment to the best professional development.
Perform better as part of a high-performance, empowering culture.
Shape an industry with a market leader that continues to drive innovation.
Make a difference - by helping improve oral health worldwide.
Scope
Role has global scope and includes managing and leading data flow and transformation development in the Data Engagement Platform (DEP). This role will lead work of DS employees as well as contractors.
Key Responsibilities
Develop and maintain a high-quality data warehouse solution.
Maintain accurate and complete technical architectural documents.
Collaborate with BI Developers and Business Analysts for successful development of BI reporting and analysis.
Work with business groups and technical teams to develop and maintain the data warehouse platform for BI reporting.
Develop a scalable and maintainable data layer for BI applications to meet business objectives.
Work in a small, smart, agile team - designing, developing, and owning the full solution for an assigned data area.
Develop standards, patterns, and best practices for reuse and acceleration.
Perform maintenance and troubleshooting activities in the Azure data platform.
Analyze, plan, and develop requirements and standards in reference to scheduled projects.
Partake in the process to define clear project deliverables.
Coordinate the development of standards, patterns, and best practices for reuse and acceleration.
Typical Background
Education: University Degree or equivalent in MIS or similar.
Years And Type Of Experience
5-10 years working with BI and data warehouse solutions.
Key Required Skills, Knowledge And Capabilities
Good understanding of business logic and of stakeholder needs.
Some experience with Databricks and dbt is desirable.
Experience with Azure DevOps code repository, version control, and task management.
Strong proficiency with SQL and its variation among popular databases.
Knowledge of best practices when dealing with relational databases; capable of troubleshooting common database issues.
Knowledge of data design and analysis of BI systems and processes.
Strong analytical and logical thinking. Internationally and culturally aware. Communicates well verbally and in writing in English.
Key Leadership Behaviors
Dentsply Sirona managers are expected to successfully demonstrate behaviors aligned with the Competency model. See competencies below, together with key specific behaviors for success.
Teamwork – Defines success in terms of the whole team.
Customer Focus – Is dedicated to meeting the expectations and requirements of internal and external customers and seeking to make improvements with the customer in mind.
Strategic Thinking – Applies experience, knowledge, and perspective of business and external or global factors to create new perspectives and fresh thinking.
Talent Management – Actively seeks assignments that stretch her beyond her comfort zone.
Integrity – Raises potential ethical concerns to the right party.
Problem Solving – Can analyze problems and put together a plan for resolution within her scope of responsibility.
Drive for Results – Can be counted on to reach goals successfully.
Accountability – Acts with a clear sense of ownership.
Innovation and Creativity – Brings creative ideas to work and acts to take advantage of opportunities to improve business.
Leading Change – Adapts to changing priorities and acts without having the total picture.
Dentsply Sirona is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, sexual orientation, disability, or protected Veteran status. We appreciate your interest in Dentsply Sirona.
If you need assistance with completing the online application due to a disability, please send an accommodation request to careers@dentsplysirona.com. Please be sure to include "Accommodation Request" in the subject.
Posted 6 days ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Job Title: Automation Engineer - Databricks
Job Type: Full-time, Contractor
Location: Hybrid - Hyderabad | Pune | Delhi
About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.
Job Summary:
We are seeking a detail-oriented and innovative Automation Engineer - Databricks to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.
Key Responsibilities:
Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
Create detailed and effective test plans and test cases based on technical requirements and business specifications.
Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
Document test cases, results, and identified defects; communicate findings clearly to the team.
Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.
Required Skills and Qualifications:
Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
Hands-on experience with Databricks, data warehouse, and data lake architectures.
Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt test, or similar.
Proficient in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
Bachelor's degree in Computer Science, Information Technology, or a related discipline.
Demonstrated problem-solving skills and a collaborative approach to teamwork.
Preferred Qualifications:
Experience with implementing security and data protection measures in data-driven applications.
Ability to integrate user-facing elements with server-side logic for seamless data experiences.
Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
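The "Python and SQL to validate data integrity" work above typically boils down to reconciliation checks between a source and a target table. A small sketch, with sqlite3 standing in for a Databricks SQL endpoint and an invented orders schema:

```python
import sqlite3

# sqlite3 as a local stand-in for a Databricks SQL warehouse; schema is made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_orders (id INTEGER PRIMARY KEY, amount REAL);
CREATE TABLE target_orders (id INTEGER PRIMARY KEY, amount REAL);
INSERT INTO source_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
INSERT INTO target_orders SELECT * FROM source_orders;  -- the 'pipeline' under test
""")

def check_row_counts(conn):
    """Source and target must have the same number of rows."""
    src = conn.execute("SELECT COUNT(*) FROM source_orders").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM target_orders").fetchone()[0]
    return src == tgt

def check_no_nulls(conn):
    """No NULL amounts may survive the load."""
    nulls = conn.execute(
        "SELECT COUNT(*) FROM target_orders WHERE amount IS NULL").fetchone()[0]
    return nulls == 0

def check_totals_match(conn):
    """Aggregate totals must reconcile between source and target."""
    diff = conn.execute("""
        SELECT ABS((SELECT SUM(amount) FROM source_orders) -
                   (SELECT SUM(amount) FROM target_orders))""").fetchone()[0]
    return diff < 1e-9

checks = (check_row_counts, check_no_nulls, check_totals_match)
results = {c.__name__: c(conn) for c in checks}
print(results)
```

In practice each check would be a pytest test case wired into the CI/CD pipeline, with the connection pointed at the real warehouse.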
Posted 6 days ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Work from Office
About us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.
Job Summary:
Join our customer's team as a Software Developer and play a pivotal role in building high-impact backend solutions at the forefront of AI and data engineering. This is your chance to work in a collaborative, onsite environment where your technical expertise and communication skills will drive the success of next-generation AI/ML applications.
Key Responsibilities:
Develop, test, and maintain scalable backend components and microservices using Python and PySpark.
Build and optimize advanced data pipelines leveraging Databricks and distributed computing platforms.
Design and administer efficient MySQL databases, focusing on data integrity, availability, and performance.
Integrate machine learning models into production-grade backend systems powering innovative AI features.
Collaborate with data scientists and engineering peers to deliver comprehensive, business-driven solutions.
Monitor, troubleshoot, and enhance system performance using Redis for caching and scalability.
Create clear technical documentation and communicate proactively with the team, emphasizing both written and verbal skills.
Required Skills and Qualifications:
Proficient in Python for backend development with strong coding standards.
Practical experience with Databricks and PySpark in live production environments.
Advanced knowledge of MySQL database design, query optimization, and maintenance.
Solid foundation in machine learning concepts and deploying ML models in backend systems.
Experience utilizing Redis for effective caching and state management.
Outstanding written and verbal communication abilities with strong attention to detail.
Demonstrated success working collaboratively in a fast-paced onsite setting in Hyderabad.
Preferred Qualifications:
Background in high-growth AI/ML or complex data engineering projects.
Familiarity with additional backend technologies or cloud-based platforms.
Experience mentoring or leading technical teams.
Be a key contributor to our customer's team, delivering backend systems that seamlessly bridge data engineering and AI innovation. We value professionals who thrive on clear communication, technical excellence, and collaborative problem-solving.
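"Redis for caching" in a backend like this usually means the cache-aside pattern: check the cache, fall back to the database on a miss, then populate the cache with a TTL. A minimal sketch; `FakeRedis` is an invented in-memory stand-in (a real service would use a redis-py client, whose `get`/`setex` calls this mirrors), and the user-lookup function is hypothetical:

```python
import time

class FakeRedis:
    """In-memory stand-in for a Redis client; models only get/setex with expiry."""
    def __init__(self):
        self._data = {}
    def setex(self, key, ttl_seconds, value):
        self._data[key] = (value, time.monotonic() + ttl_seconds)
    def get(self, key):
        hit = self._data.get(key)
        if hit is None:
            return None
        value, expires = hit
        if time.monotonic() >= expires:
            del self._data[key]
            return None
        return value

calls = {"db": 0}

def load_user_from_db(user_id):
    """Stand-in for a MySQL lookup (hypothetical schema)."""
    calls["db"] += 1
    return f"user:{user_id}:profile"

def get_user(cache, user_id, ttl=30):
    # Cache-aside: try the cache, fall back to the database, then populate.
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = load_user_from_db(user_id)
    cache.setex(key, ttl, value)
    return value

cache = FakeRedis()
a = get_user(cache, 7)  # miss: hits the 'database' and fills the cache
b = get_user(cache, 7)  # hit: served from cache, no second database call
print(a == b, calls["db"])
```

The TTL bounds staleness; invalidating the key on writes to the underlying row is the usual complement to this pattern.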
Posted 6 days ago
8.0 - 13.0 years
25 - 30 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Job Title: Senior Software Engineer
Job Type: Full-time, Contractor
About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.
Job Summary:
We are seeking a highly skilled Senior Software Engineer to join one of our top customers, committed to designing and implementing high-performance microservices. The ideal candidate will have extensive experience with Python, FastAPI, task queues, WebSockets, and Kubernetes to build scalable solutions for our platforms. This is an exciting opportunity for those who thrive in challenging environments and have a passion for technology and innovation.
Key Responsibilities:
Design and develop backend services using Python, with an emphasis on FastAPI for high-performance applications.
Architect and orchestrate microservices to handle high-concurrency I/O requests efficiently.
Deploy and manage applications on AWS, ensuring robust and scalable solutions are delivered.
Implement and maintain messaging queues using Celery, RabbitMQ, or AWS SQS.
Utilize WebSockets and asynchronous programming to enhance system responsiveness and performance.
Collaborate with cross-functional teams to ensure seamless integration of solutions.
Continuously improve system reliability, scalability, and performance through innovative design and testing.
Required Skills and Qualifications:
Proven experience in production deployments with user bases exceeding 10k.
Expertise in Python and FastAPI, with strong knowledge of microservices architecture.
Proficiency in working with queues and asynchronous programming.
Hands-on experience with databases such as Postgres, MongoDB, or Databricks.
Comprehensive knowledge of Kubernetes for running scalable microservices.
Exceptional written and verbal communication skills.
Consistent work history without overlapping roles or career gaps.
Preferred Qualifications:
Experience with GoLang for microservice development.
Familiarity with data lake technologies such as Iceberg.
Understanding of deploying APIs in Kubernetes environments.
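The asynchronous-programming requirement above is the core of how FastAPI handles high-concurrency I/O: independent calls are awaited concurrently, so request latency tracks the slowest dependency rather than the sum of all of them. A stdlib-only sketch (the "backends" are simulated with `asyncio.sleep`; inside a FastAPI route handler the same `asyncio.gather` pattern applies):

```python
import asyncio
import time

async def fetch(source, delay):
    """Stand-in for an I/O-bound call (database query, HTTP request, queue read)."""
    await asyncio.sleep(delay)
    return f"{source}:ok"

async def handle_request():
    # Fan out three I/O calls concurrently instead of awaiting them in sequence:
    # total latency is ~max(0.05, 0.03, 0.04), not the ~0.12s sum.
    return await asyncio.gather(
        fetch("postgres", 0.05),
        fetch("mongo", 0.03),
        fetch("cache", 0.04),
    )

start = time.perf_counter()
results = asyncio.run(handle_request())
elapsed = time.perf_counter() - start
print(results, round(elapsed, 2))
```

`gather` preserves the order of its arguments in the result list, which keeps response assembly deterministic even though completion order varies.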
Posted 6 days ago
2.0 - 5.0 years
5 - 9 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Job Title: Full Stack Engineer
Job Type: Full-time, Contractor
About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.
Job Summary:
We are seeking a highly skilled Full Stack Engineer to join our dynamic team, committed to designing and implementing high-performance microservices. The ideal candidate will have extensive experience with Python, FastAPI, task queues, WebSockets, and Kubernetes to build scalable solutions for our platforms. This is an exciting opportunity for those who thrive in challenging environments and have a passion for technology and innovation.
Key Responsibilities:
Design and develop backend services using Python, with an emphasis on FastAPI for high-performance applications.
Architect and orchestrate microservices to handle high-concurrency I/O requests efficiently.
Deploy and manage applications on AWS, ensuring robust and scalable solutions are delivered.
Implement and maintain messaging queues using Celery, RabbitMQ, or AWS SQS.
Utilize WebSockets and asynchronous programming to enhance system responsiveness and performance.
Collaborate with cross-functional teams to ensure seamless integration of solutions.
Continuously improve system reliability, scalability, and performance through innovative design and testing.
Required Skills and Qualifications:
Proven experience in production deployments with user bases exceeding 10k.
Expertise in Python and FastAPI, with strong knowledge of microservices architecture.
Proficiency in working with queues and asynchronous programming.
Hands-on experience with databases such as Postgres, MongoDB, or Databricks.
Comprehensive knowledge of Kubernetes for running scalable microservices.
Exceptional written and verbal communication skills.
Consistent work history without overlapping roles or career gaps.
Preferred Qualifications:
Experience with GoLang for microservice development.
Familiarity with data lake technologies such as Iceberg.
Understanding of deploying APIs in Kubernetes environments.
Posted 6 days ago
1.0 - 3.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Job Title: Backend Developer - Python
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India
Job Summary:
Join the team of one of our top customers as a Backend Developer and help drive scalable, high-performance solutions at the intersection of machine learning and data engineering. You'll collaborate with skilled professionals to design, implement, and maintain backend systems powering advanced AI/ML applications in a dynamic, onsite environment.
Key Responsibilities:
Develop, test, and deploy robust backend components and microservices using Python and PySpark.
Implement and optimize data pipelines leveraging Databricks and distributed computing frameworks.
Design and maintain efficient databases with MySQL, ensuring data integrity and high availability.
Integrate machine learning models into production-ready backend systems supporting AI-driven features.
Collaborate closely with data scientists and engineers to deliver end-to-end solutions aligned with business goals.
Monitor, troubleshoot, and enhance system performance, utilizing Redis for caching and improved scalability.
Write clear and maintainable documentation, and communicate effectively with team members both verbally and in writing.
Required Skills and Qualifications:
Proficiency in Python programming for backend development.
Hands-on experience with Databricks and PySpark in a production environment.
Strong understanding of MySQL database design, querying, and performance tuning.
Practical background in machine learning concepts and deploying ML models.
Experience with Redis for caching and state management.
Excellent written and verbal communication skills, with a keen attention to detail.
Demonstrated ability to work effectively in an on-site, collaborative setting in Hyderabad.
Preferred Qualifications:
Previous experience in high-growth AI/ML or data engineering projects.
Familiarity with additional backend technologies or cloud platforms.
Demonstrated leadership or mentorship in technical teams.
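"Integrating machine learning models into production-ready backend systems", as the responsibilities above put it, often reduces to loading trained parameters and wrapping scoring in a request handler that validates input and shapes the response. A stdlib-only sketch; the coefficients, feature names, and threshold are all invented for illustration (a real deployment would load the model from artifact storage, e.g. MLflow on Databricks):

```python
import math

# Hypothetical coefficients standing in for a trained model artifact.
MODEL = {"bias": -1.0, "weights": {"sessions": 0.8, "purchases": 1.5}}

def predict_proba(features):
    """Logistic-regression-style scoring: sigmoid(bias + w . x)."""
    z = MODEL["bias"] + sum(
        MODEL["weights"][name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def handle_score_request(payload):
    """Backend handler: validate the request, score it, shape the response."""
    missing = set(MODEL["weights"]) - set(payload)
    if missing:
        return {"error": f"missing features: {sorted(missing)}"}
    p = predict_proba({k: payload[k] for k in MODEL["weights"]})
    return {"churn_risk": round(p, 3), "high_risk": p > 0.5}

print(handle_score_request({"sessions": 2, "purchases": 1}))
```

Keeping validation and response shaping in the handler, and pure math in `predict_proba`, lets the model artifact be swapped without touching the service contract.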
Posted 6 days ago
5.0 - 9.0 years
13 - 17 Lacs
Gurugram
Work from Office
Dentsply Sirona is the world’s largest manufacturer of professional dental products and technologies, with a 130-year history of innovation and service to the dental industry and patients worldwide. Dentsply Sirona develops, manufactures, and markets a comprehensive solutions offering including dental and oral health products as well as other consumable medical devices under a strong portfolio of world class brands. Dentsply Sirona’s products provide innovative, high-quality and effective solutions to advance patient care and deliver better and safer dentistry. Dentsply Sirona’s global headquarters is located in Charlotte, North Carolina, USA. The company’s shares are listed in the United States on NASDAQ under the symbol XRAY.. Bringing out the best in people. As advanced as dentistry is today, we are dedicated to making it even better. Our people have a passion for innovation and are committed to applying it to improve dental care. We live and breathe high performance, working as one global team, bringing out the best in each other for the benefit of dental patients, and the professionals who serve them. If you want to grow and develop as a part of a team that is shaping an industry, then we’re looking for the best to join us.. Working At Dentsply Sirona You Are Able To. Develop faster with our commitment to the best professional development.. Perform better as part of a high-performance, empowering culture.. Shape an industry with a market leader that continues to drive innovation.. Make a difference -by helping improve oral health worldwide.. Scope. Role has global scope and includes managing and leading data flow and transformation development in the Data Engagement Platform (DEP). This role will lead work of DS employees as well as contractors.. Key Responsibilities. Develop and maintain high quality data warehouse solution.. Maintain accurate and complete technical architectural documents.. 
Collaborate with BI Developers and Business Analysts for successful development of BI reporting and analysis.. Work with business groups and technical teams to develop and maintain data warehouse platform for BI reporting.. Develop scalable and maintainable data layer for BI applications to meet business objectives.. To work in a small, smart, agile team – designing, developing and owning full solution for an assigned data area. Develop standards, patterns, best practices for reuse and acceleration.. Perform maintenance and troubleshooting activities in Azure data platform.. Analyze, plan and develop requirements and standards in reference to scheduled projects.. Partake in process to define clear project deliverables.. Coordinate the development of standards, patterns, best practices for reuse and acceleration.. Typical Background. Education: University Degree or equivalent in MIS or similar. Years And Type Of Experience. 5-10 years working with BI and data warehouse solutions.. Key Required Skills, Knowledge And Capabilities. Good understanding of business logic and understanding of their needs.. Some experience with Databricks and dbt is desirable.. Worked with Azure DevOps code repository, version control and task management.. Strong proficiency with SQL and its variation among popular databases.. Knowledge of best practices when dealing with relational databases. Capable of troubleshooting common database issues. You have knowledge of data design and analysis of BI systems and processes.. Strong analytical and logical thinking. Internationally and culturally aware. Communicate well verbally and in writing in English. Key Leadership Behaviors. Dentsply Sirona managers are expected to successfully demonstrate behaviors aligned with the Competency model. See competencies below together with. a Key Specific. Behaviors For Success. Teamwork – Defines success in terms of the whole team. 
- Customer Focus: Is dedicated to meeting the expectations and requirements of internal and external customers, seeking to make improvements with the customer in mind.
- Strategic Thinking: Applies experience, knowledge, and perspective of business and external or global factors to create new perspectives and fresh thinking.
- Talent Management: Actively seeks assignments that stretch them beyond their comfort zone.
- Integrity: Raises potential ethical concerns to the right party.
- Problem Solving: Can analyze problems and put together a plan for resolution within their scope of responsibility.
- Drive for Results: Can be counted on to reach goals successfully.
- Accountability: Acts with a clear sense of ownership.
- Innovation and Creativity: Brings creative ideas to work and acts to take advantage of opportunities to improve the business.
- Leading Change: Adapts to changing priorities and acts without having the total picture.

Dentsply Sirona is an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, sexual orientation, disability, or protected Veteran status. We appreciate your interest in Dentsply Sirona. If you need assistance with completing the online application due to a disability, please send an accommodation request to careers@dentsplysirona.com. Please be sure to include "Accommodation Request" in the subject.
Posted 6 days ago
2.0 - 4.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Job Title: Automation Engineer
Job Type: Full-time, Contractor

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a detail-oriented and innovative Automation Engineer to join our customer's team. In this critical role, you will design, develop, and execute automated tests to ensure the quality, reliability, and integrity of data within Databricks environments. If you are passionate about data quality, thrive in collaborative environments, and excel at both written and verbal communication, we'd love to meet you.

Key Responsibilities:
- Design, develop, and maintain robust automated test scripts using Python, Selenium, and SQL to validate data integrity within Databricks environments.
- Execute comprehensive data validation and verification activities to ensure accuracy and consistency across multiple systems, data warehouses, and data lakes.
- Create detailed and effective test plans and test cases based on technical requirements and business specifications.
- Integrate automated tests with CI/CD pipelines to facilitate seamless and efficient testing and deployment processes.
- Work collaboratively with data engineers, developers, and other stakeholders to gather data requirements and achieve comprehensive test coverage.
- Document test cases, results, and identified defects; communicate findings clearly to the team.
- Conduct performance testing to ensure data processing and retrieval meet established benchmarks.
- Provide mentorship and guidance to junior team members, promoting best practices in test automation and data validation.

Required Skills and Qualifications:
- Strong proficiency in Python, Selenium, and SQL for developing test automation solutions.
- Hands-on experience with Databricks, data warehouse, and data lake architectures.
- Proven expertise in automated testing of data pipelines, preferably with tools such as Apache Airflow, dbt Test, or similar.
- Proficiency in integrating automated tests within CI/CD pipelines on cloud platforms (AWS, Azure preferred).
- Excellent written and verbal communication skills with the ability to translate technical concepts to diverse audiences.
- Bachelor's degree in Computer Science, Information Technology, or a related discipline.
- Demonstrated problem-solving skills and a collaborative approach to teamwork.

Preferred Qualifications:
- Experience with implementing security and data protection measures in data-driven applications.
- Ability to integrate user-facing elements with server-side logic for seamless data experiences.
- Demonstrated passion for continuous improvement in test automation processes, tools, and methodologies.
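The data-integrity checks this role describes (row counts, nulls, uniqueness) are often expressed as SQL assertions driven from Python. A minimal sketch, with an in-memory SQLite database standing in for a Databricks SQL endpoint; the `orders` table and the three rules are illustrative assumptions, not part of the posting:

```python
import sqlite3

def run_validation_checks(conn):
    """Run basic data-quality checks and return a dict of check name -> passed."""
    results = {}
    # Row-count check: the table must not be empty.
    rows = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
    results["non_empty"] = rows > 0
    # Null check: every order must have a customer_id.
    nulls = conn.execute(
        "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL"
    ).fetchone()[0]
    results["no_null_customer"] = nulls == 0
    # Uniqueness check: order_id must not repeat.
    dupes = conn.execute(
        "SELECT COUNT(*) FROM (SELECT order_id FROM orders "
        "GROUP BY order_id HAVING COUNT(*) > 1)"
    ).fetchone()[0]
    results["unique_order_id"] = dupes == 0
    return results

# Demo with deliberately dirty data: a duplicate order_id and a NULL customer_id.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, customer_id INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10), (2, 11), (2, 12), (3, None)])
checks = run_validation_checks(conn)
print(checks)  # {'non_empty': True, 'no_null_customer': False, 'unique_order_id': False}
```

In a real pipeline each failed check would be reported as a test failure rather than printed; the same queries run unchanged against most SQL warehouses.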
Posted 6 days ago
3.0 - 6.0 years
9 - 13 Lacs
Gurugram
Work from Office
Scope of Role: In the role as Azure Data Engineer, you will join the team that works with development, enhancement, and maintenance of our Data Engagement Platform (DEP). You will work with advanced analytics and the latest technology as part of our passionate team. Does this sound like something that would energize you? Then come join us!
Our Global Data and Analytics department handles the collection and streamlining of data into the DEP, and the development of BI solutions and reports in the Dentsply Sirona group. The team consists of 20+ members and works cross-functionally, which means you will interact with many functions such as finance, marketing, sales, commercial, supply and operations. We use Azure tools together with Databricks and dbt.

Responsibilities:
- Develop and maintain a high-quality data warehouse solution.
- Collaborate with BI Developers and Business Analysts for successful development of BI reporting and analysis.
- Develop a scalable and maintainable data layer for BI applications to meet business objectives.
- Work in a small, smart, agile team; design, develop and own the full solution for an assigned data area.
- Perform maintenance and troubleshooting activities in the Azure data platform.
- Contribute to accurate and complete technical architecture documents.
- Work closely with other members of the Global Data and Analytics team.
- Maintain clear and coherent communication, both verbal and written, to understand data requirements.
- Additional responsibilities as assigned.

Education: An academic background, with a relevant university degree in Management Information Systems or similar.

Years and Type of Experience:
- Minimum 5 years' work experience in a BI position.
- Experience with Databricks and dbt is desirable.
- Experience with Azure DevOps code repository, version control and task management.
- Strong proficiency with SQL and its variation among popular databases.
- Knowledge of best practices when dealing with relational databases.

Key Skills, Knowledge & Capabilities:
- Capable of troubleshooting common database issues.
- Motivated by analyzing and understanding business needs, translating them into technical solutions, and ensuring that both business and technical needs are met.
- Strong analytical and logical thinking.
- Strong communication skills, verbal and written.
- English language: proficiency in verbal and written communication.

How We Lead the DS Way:
- Actively articulates and promotes Dentsply Sirona's vision, mission and values.
- Advocates on behalf of the customer.
- Promotes high performance, innovation and continual improvement.
- Consistently meets Company standards, ethics and compliance requirements.
- Communicates clearly and effectively with stakeholders across multiple levels, socio-geographic areas and areas of functional expertise.
Posted 6 days ago
2.0 - 4.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Job Title: Backend Developer
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India

About us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: Join our customer's team as a Backend Developer and play a pivotal role in building high-impact backend solutions at the forefront of AI and data engineering. This is your chance to work in a collaborative, onsite environment where your technical expertise and communication skills will drive the success of next-generation AI/ML applications.

Key Responsibilities:
- Develop, test, and maintain scalable backend components and microservices using Python and PySpark.
- Build and optimize advanced data pipelines leveraging Databricks and distributed computing platforms.
- Design and administer efficient MySQL databases, focusing on data integrity, availability, and performance.
- Integrate machine learning models into production-grade backend systems powering innovative AI features.
- Collaborate with data scientists and engineering peers to deliver comprehensive, business-driven solutions.
- Monitor, troubleshoot, and enhance system performance, using Redis for caching and scalability.
- Create clear technical documentation and communicate proactively with the team, with emphasis on both written and verbal skills.

Required Skills and Qualifications:
- Proficient in Python for backend development with strong coding standards.
- Practical experience with Databricks and PySpark in live production environments.
- Advanced knowledge of MySQL database design, query optimization, and maintenance.
- Solid foundation in machine learning concepts and deploying ML models in backend systems.
- Experience using Redis for effective caching and state management.
- Outstanding written and verbal communication abilities with strong attention to detail.
- Demonstrated success working collaboratively in a fast-paced onsite setting in Hyderabad.

Preferred Qualifications:
- Background in high-growth AI/ML or complex data engineering projects.
- Familiarity with additional backend technologies or cloud-based platforms.
- Experience mentoring or leading technical teams.

Be a key contributor to our customer's team, delivering backend systems that seamlessly bridge data engineering and AI innovation. We value professionals who thrive on clear communication, technical excellence, and collaborative problem-solving.
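Using Redis for caching, as this role describes, typically follows the cache-aside pattern: read from the cache first, fall back to the database on a miss, then populate the cache. A minimal sketch with a plain dict standing in for a Redis client; `query_database` is a hypothetical stand-in for a MySQL lookup:

```python
cache = {}      # stands in for a Redis client (get/set by key)
db_calls = 0    # counts round-trips to the "database"

def query_database(user_id):
    """Hypothetical expensive lookup, standing in for a MySQL query."""
    global db_calls
    db_calls += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:            # cache hit: no database round-trip
        return cache[key]
    value = query_database(user_id)
    cache[key] = value          # populate the cache for subsequent reads
    return value

first = get_user(42)    # miss -> hits the database
second = get_user(42)   # hit  -> served from the cache
print(db_calls)         # 1
```

With a real Redis client the dict lookups become `GET`/`SET` calls, usually with a TTL so cached entries expire rather than growing stale indefinitely.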
Posted 6 days ago
2.0 - 5.0 years
4 - 8 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
Job Title: Data Quality & Automation Engineer
Job Type: Full-time, Contractor

About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary: We are seeking a skilled and innovative Data Quality & Automation Engineer to join our customer's dynamic team. In this role, you will leverage your expertise to ensure the quality and reliability of our data processing systems, playing a crucial role in our commitment to excellence. We are looking for a candidate with a keen eye for detail and a strong ability to communicate both verbally and in writing.

Key Responsibilities:
- Develop and execute automated test scripts using Python and Selenium to validate data processing systems.
- Perform rigorous data validation and ensure data integrity across various data platforms.
- Collaborate with data engineers and developers to identify and troubleshoot issues.
- Maintain and enhance existing automation frameworks and scripts.
- Utilize SQL for advanced data querying and validation tasks.
- Implement and manage workflows using Apache Airflow.
- Work with Databricks to test data pipelines and transformations.

Required Skills and Qualifications:
- Proven experience in automation testing with a focus on data quality.
- Proficiency in Python programming and Selenium automation tools.
- Strong understanding of SQL for data validation and reporting.
- Experience with ALM.
- Knowledge of data warehousing and data lake architectures.
- Experience in leading and mentoring teams.
- Experience with data testing tools (dbt Test).
- Experience with Apache Airflow for workflow management.
- Familiarity with Databricks for data processing and analytics.
- Exceptional written and verbal communication skills.
- Attention to detail and a proactive approach to problem-solving.

Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure) and big data technologies.
- Knowledge of continuous integration and deployment processes.
- Certification in data engineering or software testing is a plus.
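Workflow management with Apache Airflow, listed above, ultimately means executing tasks in dependency order. A toy sketch of that idea in plain Python using Kahn's topological sort; the `extract`/`transform`/`load` task names are illustrative, and a real Airflow DAG would of course use the Airflow API rather than this hand-rolled runner:

```python
from collections import deque

def run_workflow(tasks, deps):
    """Execute tasks in dependency order (Kahn's algorithm).

    tasks: dict of name -> callable
    deps:  dict of name -> list of upstream task names
    """
    indegree = {t: len(deps.get(t, [])) for t in tasks}
    downstream = {t: [] for t in tasks}
    for t, ups in deps.items():
        for u in ups:
            downstream[u].append(t)
    ready = deque(t for t, d in indegree.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        tasks[t]()                      # run the task itself
        order.append(t)
        for d in downstream[t]:         # unlock tasks whose upstreams are done
            indegree[d] -= 1
            if indegree[d] == 0:
                ready.append(d)
    if len(order) != len(tasks):
        raise ValueError("cycle in task dependencies")
    return order

log = []
tasks = {n: (lambda n=n: log.append(n)) for n in ["extract", "transform", "load"]}
deps = {"transform": ["extract"], "load": ["transform"]}
order = run_workflow(tasks, deps)
print(order)  # ['extract', 'transform', 'load']
```

The same structure is what an Airflow scheduler enforces when you declare `extract >> transform >> load` in a DAG.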
Posted 6 days ago
6.0 - 11.0 years
15 - 30 Lacs
Gurugram, Bengaluru
Hybrid
Skills (Must have):
- Python: Hands-on experience using Python programming to automate basic tasks.
- SQL: Proficiency in SQL, with the ability to write complex queries for data validation.
- ETL Testing: Experience with ETL data validation testing and exposure to ETL processes / data lakes.

Skills (Good to have):
- Data Platform: Exposure to Databricks / the Azure platform and Databricks-specific testing tools and frameworks.
- Load Testing: Hands-on experience with load testing tools and techniques.
- CI/CD Basics: Basic understanding of and experience with CI/CD tooling (GitLab, GitHub, Azure, etc.).
Posted 6 days ago
2.0 - 5.0 years
8 - 12 Lacs
Pune
Work from Office
Job Summary: QA Specialist, Data & Analytics

We're looking for a meticulous, detail-oriented QA Specialist who is passionate about data quality. You will collaborate with our analytics team to develop and execute comprehensive QA processes, validate data pipelines, and automate recurring QA tasks. Your work will be key to ensuring our data and analytics deliverables meet the highest standards of accuracy and reliability.

Responsibilities:
- Develop and execute comprehensive QA processes for data and analytics deliverables.
- Validate the entire data pipeline, including data sources, ETL processes, extracts, calculations, visualizations, and application interfaces.
- Perform functional, regression, performance, and tolerance-range testing across reports and data systems.
- Simulate end-user journeys to ensure a seamless user experience with analytics outputs.
- Validate application tracking functionality (data collection through application usage).
- Validate calculations and metrics in Tableau, Power BI, and other BI tools.
- Conduct database validations using SQL (Oracle, BigQuery) and NoSQL (MongoDB) systems.
- Automate recurring QA processes in the analytics/BI environment where feasible.
- Identify and document data quality issues and discrepancies.
- Collaborate with cross-functional teams, including data engineers, BI developers, and product managers, to ensure analytics quality.

Experience:
- 3+ years of experience in QA, data validation, or analytics testing.
- Hands-on experience testing in BI tool environments.
- Proficiency in SQL (advantage: experience with Oracle and BigQuery databases).
- Experience with NoSQL databases (advantage: MongoDB).

Technical Skills:
- Familiarity with regression testing and simulating user interactions with BI tools.

Nice-to-Have Qualifications:
- Familiarity with scripting languages like R or Python.
- Experience in automation testing within analytics or BI environments.
- Experience in a Databricks environment.

Collaboration and Leadership:
- Excellent communication skills with the ability to collaborate effectively across departments.
- Strong ability to present complex findings to both technical and non-technical audiences.

About Aumni Techworks: Aumni Techworks, established in 2016, is a software services company that partners with product companies to build and manage their dedicated teams in India. So, while you are working for a services company, you are working within a product team and growing with them. We do not take projects, and we have long-term (open-ended) contracts with our clients. When our clients sign up with us, they are looking at a multi-year relationship; for example, some of the clients who signed up 6 or 8 years ago are still with us. We do not move people across client teams, and there is no concept of bench. At Aumni, we believe in quality work, and we truly believe that Indian talent is on par with talent in New York, London or Germany. 300+ and growing.

Benefits of Working at Aumni Techworks:
- Our award-winning culture reminds us of our engineering days.
- Medical insurance (including parents), life and disability insurance.
- 24 leaves + 10 public holidays + leaves for hospitalization, maternity, paternity and bereavement.
- On-site gym, table tennis, carrom, foosball and pool table.
- Hybrid work culture.
- Fitness groups / rewards.
- Friday socials, annual parties, treks.
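The tolerance-range testing named in this role's responsibilities compares a metric computed in the BI layer against a reference value within an allowed deviation, rather than requiring exact equality. A minimal sketch; the metric names and the 0.5% relative tolerance are illustrative assumptions:

```python
def within_tolerance(actual, expected, rel_tol):
    """Pass when actual deviates from expected by at most rel_tol (relative)."""
    if expected == 0:
        return actual == 0
    return abs(actual - expected) / abs(expected) <= rel_tol

# Reference values from the source system vs. values read off a dashboard.
reference = {"total_revenue": 125_000.0, "order_count": 842}
dashboard = {"total_revenue": 124_600.0, "order_count": 842}

results = {metric: within_tolerance(dashboard[metric], reference[metric], rel_tol=0.005)
           for metric in reference}
print(results)  # {'total_revenue': True, 'order_count': True}
```

Exact-match checks are still appropriate for counts and keys; the tolerance form is for aggregates where rounding, currency conversion, or timing windows introduce small legitimate deviations.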
Posted 6 days ago
12.0 - 22.0 years
20 - 35 Lacs
Pune, Chennai, Bengaluru
Hybrid
Role & responsibilities:
- Lead (hands-on) the team of Data Engineers.
- Good communication and strong technical design decision-making.
- Strong experience in ETL / data warehousing.
- Strong experience in Databricks, Unity Catalog, Medallion Architecture, Data Lineage, PySpark and CTEs.
- Good experience in data analysis.
- Strong experience in SQL queries, stored procedures, views, functions, and UDFs (user-defined functions).
- Experience in Azure Cloud, ADF, Storage & Containers, Azure DB, Azure Data Lake.
- Experience in SQL Server.
- Experience in data migration and production support.
Posted 6 days ago
3.0 - 5.0 years
10 - 15 Lacs
Noida
Hybrid
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines using Azure Databricks.
- Optimize and troubleshoot existing data pipelines to enhance performance and reliability.
- Ensure data quality, integrity, and consistency across various data sources.
- Implement ETL processes and manage data flows into data warehouses and data marts.
- Develop and optimize SQL queries on Snowflake for data processing and reporting.
- Utilize Python for data processing, transformation, and automation tasks.
- Monitor pipeline performance, proactively identify issues, and conduct necessary maintenance and updates.
- Maintain comprehensive documentation of data processes, architectures, and technical specifications.

Required Skills:
- Azure Databricks
- Power BI
- SSRS and MS SQL Server
- Snowflake
- Python
- ETL development
- GitHub for version control and collaboration
- JIRA for work management

Experience Range: 3 to 5 years

Interpersonal Skills:
- Strong problem-solving and analytical abilities.
- Excellent written and verbal communication skills.
- Ability to work effectively within a team and collaborate.
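The ETL work this posting describes follows the standard extract-transform-load shape. A minimal, self-contained sketch with SQLite standing in for the warehouse; the `sales` table, field names, and the drop-incomplete-rows cleaning rule are illustrative assumptions:

```python
import sqlite3

def extract(rows):
    """Extract: yield raw records from a source (a list stands in for an API or file)."""
    yield from rows

def transform(records):
    """Transform: normalize names and drop records with missing amounts."""
    for r in records:
        if r.get("amount") is None:
            continue                    # data-quality rule: skip incomplete rows
        yield {"name": r["name"].strip().title(), "amount": float(r["amount"])}

def load(conn, records):
    """Load: write the cleaned records into a warehouse table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (:name, :amount)", list(records))

raw = [{"name": " alice ", "amount": "10.5"},
       {"name": "bob", "amount": None}]       # second record fails the quality rule
conn = sqlite3.connect(":memory:")
load(conn, transform(extract(raw)))
print(conn.execute("SELECT name, amount FROM sales").fetchall())  # [('Alice', 10.5)]
```

Keeping the three stages as separate generators makes each one independently testable, which is the same property pipeline frameworks like Databricks jobs or dbt models encourage at larger scale.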
Posted 6 days ago
3.0 - 6.0 years
10 - 17 Lacs
Pune
Hybrid
Software Engineer
Location: Baner, Pune, Maharashtra
Department: Software & Automation
Employee Type: Permanent
Experience Range: 3 - 6 years
Qualification: Bachelor's or master's degree in computer science, IT, or a related field.

Roles & Responsibilities:
- Facilitate Agile ceremonies and lead Scrum practices.
- Support the Product Owner in backlog management and team organization.
- Promote Agile best practices (Scrum, SAFe) and continuous delivery improvements.
- Develop and maintain scalable data pipelines using AWS and Databricks (secondary focus).
- Collaborate with architects and contribute to solution design (support role).
- Occasionally travel for global team collaboration.

Required:
- Scrum Master or Agile team facilitation experience.
- Familiarity with Python and Databricks (PySpark, SQL).
- Good AWS cloud exposure (S3, EC2 basics).

Good to Have:
- Certified Scrum Master (CSM) or equivalent.
- Experience with ETL pipelines or data engineering concepts.
- Multi-cultural team collaboration experience.

Software Skills: JIRA, Confluence, Python (basic to intermediate), Databricks (basic)
Posted 6 days ago
5.0 - 10.0 years
15 - 30 Lacs
Chennai
Remote
Who We Are: For 20 years, we have been working with organizations large and small to help solve business challenges through technology. We bring a unique combination of engineering and strategy to Make Data Work for organizations. Our clients range from the travel and leisure industry to publishing, retail and banking; the common thread between them is their commitment to making data work, as seen through their investment in those efforts. In our quest to solve data challenges for our clients, we work with large enterprise, cloud-based and marketing technology suites. We have a deep understanding of these solutions, so we can help our clients make the most of their investment in an efficient way and run a data-driven business. Softcrylic now joins forces with Hexaware to Make Data Work in bigger ways!

Why Work at Softcrylic? Softcrylic provides an engaging, team-focused, and rewarding work environment where people are excited about the work they do and passionate about delivering creative solutions to our clients.

Work Timing: 12:30 pm to 9:30 pm (flexible).

How the interview works: All technical interview rounds will be conducted virtually. The final round will be a face-to-face interview with HR in Chennai, which includes a 15-minute in-person technical assessment/discussion. Make sure to prepare for both the virtual and in-person components.

Job Description:
- 5+ years of experience working as a Data Engineer.
- Experience migrating existing datasets from BigQuery to Databricks using Python scripts.
- Conduct thorough data validation and QA to ensure accuracy, completeness, parity, and consistency in reporting.
- Monitor the stability and status of migrated data pipelines, applying fixes as needed.
- Migrate data pipelines from Airflow to Airbyte/Dagster based on provided frameworks.
- Develop Python scripts to facilitate data migration and pipeline transformation.
- Perform rigorous testing on migrated data and pipelines to ensure quality and reliability.

Required Skills:
- Strong experience working with Python for scripting.
- Good experience working with Databricks and BigQuery.
- Familiarity with data pipeline tools such as Airflow, Airbyte, and Dagster.
- Strong understanding of data quality principles and validation techniques.
- Ability to work collaboratively with cross-functional teams.

Contact: Dinesh M, dinesh.m@softcrylic.com, +91 89255 18191
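The migration QA described here (accuracy, completeness, parity) is commonly implemented as row-count and content-checksum comparisons between source and target tables. A hedged sketch in plain Python, with lists of row tuples standing in for BigQuery and Databricks query results; the XOR-of-hashes fingerprint is one simple order-independent choice, not a prescribed tool:

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint: row count plus an XOR of per-row hashes."""
    count, acc = 0, 0
    for row in rows:
        h = hashlib.sha256(repr(tuple(row)).encode()).digest()
        acc ^= int.from_bytes(h[:8], "big")   # XOR makes the result order-independent
        count += 1
    return count, acc

def parity_check(source_rows, target_rows):
    """True when row counts and content fingerprints both match."""
    return table_fingerprint(source_rows) == table_fingerprint(target_rows)

source = [(1, "a"), (2, "b"), (3, "c")]
migrated = [(3, "c"), (1, "a"), (2, "b")]          # same data, different order
print(parity_check(source, migrated))               # True
print(parity_check(source, [(1, "a"), (2, "b")]))   # False (missing row)
```

In practice both fingerprints would be computed inside each engine with SQL aggregates (so full tables never leave the warehouse), and only the two small results compared in the migration script.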
Posted 1 week ago
8.0 - 13.0 years
30 - 45 Lacs
Hyderabad
Work from Office
Role: We're looking for a skilled Databricks Solution Architect to lead the design and implementation of data migration strategies and cloud-based data and analytics transformation on the Databricks platform. This role involves collaborating with stakeholders, analyzing data, defining architecture, building data pipelines, ensuring security and performance, and implementing Databricks solutions for machine learning and business intelligence.

Key Responsibilities:
- Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
- Design, implement, and optimize scalable, high-performance data architectures using Databricks.
- Build and manage data pipelines and workflows within Databricks.
- Ensure that best practices for security, scalability, and performance are followed.
- Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
- Oversee the technical aspects of the migration process, from planning through to execution.
- Create documentation of the architecture, migration processes, and solutions.
- Provide training and support to teams post-migration to ensure they can leverage Databricks.

Preferred candidate profile:

Experience:
- 7+ years of experience in data engineering, cloud architecture, or related fields.
- 3+ years of hands-on experience with Databricks, including the implementation of data engineering solutions, migration projects, and optimizing workloads.
- Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks.
- Experience in end-to-end data migration projects involving large-scale data infrastructure.
- Familiarity with ETL tools, data lakes, and data warehousing solutions.

Skills:
- Expertise in Databricks architecture and best practices for data processing.
- Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other recent Databricks components.
- Proficiency in Databricks Asset Bundles.
- Expertise in the design and development of migration frameworks using Databricks.
- Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
- Familiarity with data governance, security, and compliance in cloud environments.
- Solid understanding of cloud-native data solutions and services.
Posted 1 week ago
8.0 - 12.0 years
25 - 40 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design and develop the migration strategies and processes.
- Collaborate with stakeholders to understand business requirements and technical challenges.
- Analyze current data and scope for optimization during the migration process.
- Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
- Design, implement, and optimize scalable, high-performance data architectures using Databricks.
- Build and manage data pipelines and workflows within Databricks.
- Ensure that best practices for security, scalability, and performance are followed.
- Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
- Oversee the technical aspects of the migration process, from planning through to execution.
- Work closely with engineering and data teams to ensure proper migration of ETL processes, data models, and analytics workloads.
- Troubleshoot and resolve issues related to migration, data quality, and performance.
- Create documentation of the architecture, migration processes, and solutions.
- Provide training and support to teams post-migration to ensure they can leverage Databricks.

Experience:
- 7+ years of experience in data engineering, cloud architecture, or related fields.
- 3+ years of hands-on experience with Databricks, including the implementation of data engineering solutions, migration projects, and optimizing workloads.
- Strong experience with cloud platforms (e.g., AWS, Azure, GCP) and their integration with Databricks.
- Experience in end-to-end data migration projects involving large-scale data infrastructure.
- Familiarity with ETL tools, data lakes, and data warehousing solutions.

Skills:
- Expertise in Databricks architecture and best practices for data processing.
- Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other recent Databricks components.
- Proficiency in Databricks Asset Bundles.
- Expertise in the design and development of migration frameworks using Databricks.
- Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
- Familiarity with data governance, security, and compliance in cloud environments.
- Solid understanding of cloud-native data solutions and services.
Posted 1 week ago
5.0 - 10.0 years
11 - 20 Lacs
Hyderabad
Work from Office
Mandatory skills: AWS, Python, PySpark, SQL, Databricks

Role & responsibilities:
- Design, develop, and maintain robust and scalable data pipelines using AWS services and Databricks.
- Implement data processing solutions using PySpark and SQL to handle large volumes of data efficiently.
- Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
- Ensure data quality and integrity through rigorous testing and validation processes.
- Optimize data workflows for performance and cost-efficiency.
- Document data processes and provide support for data-related issues.

Preferred candidate profile:
- AWS services: Proficiency in AWS services such as S3, EC2, Lambda, and Redshift.
- Programming: Strong experience in Python for data manipulation and scripting.
- Big data processing: Hands-on experience with PySpark for distributed data processing.
- SQL: Expertise in writing complex SQL queries for data extraction and transformation.
- Databricks: Experience developing and managing workflows in a Databricks environment.
Posted 1 week ago
6.0 - 9.0 years
25 - 30 Lacs
Pune, Mumbai (All Areas)
Work from Office
Role & responsibilities:
- Develop and maintain scalable data pipelines using Databricks and PySpark.
- Collaborate with cross-functional teams to deliver effective data solutions.
- Optimize ETL processes for enhanced performance and reliability.
- Ensure adherence to data quality and governance best practices.
- Deploy and manage data solutions in cloud environments (Azure/AWS).

Preferred candidate profile:
- Proven experience as a Data Engineer, with a focus on Databricks and PySpark.
- Strong proficiency in Python and SQL.
- Experience with cloud platforms such as Azure (mainly) or AWS.
- Familiarity with data warehousing and integration technologies.
Posted 1 week ago