5.0 - 10.0 years
20 - 35 Lacs
Kochi, Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment.
Key Responsibilities:
• Design and implement machine learning models and pipelines using AWS SageMaker and related services.
• Develop and maintain robust data pipelines for training and inference workflows.
• Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions.
• Implement MLOps best practices, including CI/CD for ML, model versioning, monitoring, and retraining strategies.
• Optimize model performance and ensure scalability and reliability in production environments.
• Monitor deployed models for drift, performance degradation, and anomalies.
• Document processes, architectures, and workflows for reproducibility and compliance.
Required Skills & Qualifications:
• Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
• Solid understanding of machine learning algorithms, model evaluation, and tuning.
• Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch.
• Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration.
• Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes).
• Familiarity with monitoring tools and logging frameworks for ML systems.
• Excellent problem-solving and communication skills.
Preferred Qualifications:
• AWS certification (e.g., AWS Certified Machine Learning - Specialty).
• Experience with real-time inference and streaming data.
• Knowledge of data governance, security, and compliance in ML systems.
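To make the SageMaker-centric workflow above concrete, here is a minimal, illustrative sketch of training and deploying a scikit-learn model with the SageMaker Python SDK. The entry-point script, role ARN, S3 paths, and instance choices are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch: train and deploy a scikit-learn model on AWS SageMaker.
# Assumes the SageMaker Python SDK (v2) is installed and an execution role
# exists; "train.py" and the S3 paths below are hypothetical placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical role ARN

estimator = SKLearn(
    entry_point="train.py",       # hypothetical training script
    role=role,
    instance_type="ml.m5.large",
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Launch a training job against data already staged in S3 (hypothetical path).
estimator.fit({"train": "s3://example-bucket/training-data/"})

# Deploy the trained model behind a real-time HTTPS endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

In a production setup of the kind this role describes, the same estimator definition would typically sit inside a CI/CD pipeline, with the endpoint watched through CloudWatch metrics and drift alarms.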
Posted 3 weeks ago
6.0 - 10.0 years
15 - 30 Lacs
Gurugram
Work from Office
We are specifically looking for candidates with strong SQL skills, along with experience in Snowflake or Looker.
Posted 3 weeks ago
5.0 - 10.0 years
10 - 16 Lacs
Navi Mumbai, Mumbai (All Areas)
Work from Office
Designation: Senior Data Engineer
Experience: 5+ years
Location: Navi Mumbai (Juinagar) - WFO
Immediate joiners preferred.
Interview: Face-to-Face (only a one-day process)
Job Description: We are looking for an experienced and results-driven Senior Data Engineer to join our Data Engineering team. In this role, you will design, develop, and maintain robust data pipelines and infrastructure that enable efficient data flow across our systems. As a senior contributor, you will also help define best practices, mentor junior team members, and contribute to the long-term vision of our data platform. You will work closely with cross-functional teams to deliver reliable, scalable, and high-performance data systems that support critical business intelligence and analytics initiatives.
Required Qualifications:
Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree is a plus.
5+ years of experience in data warehousing, ETL development, and data modeling.
Strong hands-on experience with one or more databases: Snowflake, Redshift, SQL Server, Oracle, Postgres, Teradata, BigQuery.
Proficiency in SQL and scripting languages (e.g., Python, Shell).
Deep knowledge of data modeling techniques and ETL frameworks.
Excellent communication, analytical thinking, and troubleshooting skills.
Preferred Qualifications:
Experience with modern data stack tools like dbt, Fivetran, Stitch, Looker, Tableau, or Power BI.
Knowledge of data lakes, lakehouses, and real-time data streaming (e.g., Kafka).
Agile/Scrum project experience and version control using Git.
Sincerely, Sonia TS
Posted 3 weeks ago
7.0 - 12.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Skill: Data Engineer
Experience: 7+ Years
Location: Warangal, Bangalore, Chennai, Hyderabad, Mumbai, Pune, Delhi, Noida, Gurgaon, Kolkata, Jaipur, Jodhpur
Notice Period: Immediate - 15 Days
Job Description:
Design & Build Data Pipelines: Develop scalable ETL/ELT workflows to ingest, transform, and load data into Snowflake using SQL, Python, or data integration tools.
Data Modeling: Create and optimize Snowflake schemas, tables, views, and materialized views to support business analytics and reporting needs.
Performance Optimization: Tune Snowflake compute resources (warehouses), optimize query performance, and manage clustering and partitioning strategies.
Additional responsibility areas: Data Quality & Validation, Security & Access Control, Automation & CI/CD, Monitoring & Troubleshooting, and Documentation.
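As an illustration of the clustering and materialized-view tuning this posting mentions, here is a minimal sketch using snowflake-connector-python. The account credentials, table, and column names are hypothetical, and materialized views assume a Snowflake Enterprise Edition account.

```python
# Minimal sketch: two Snowflake optimizations named in the posting, applied
# via snowflake-connector-python. All names and credentials are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # hypothetical account identifier
    user="etl_user",             # hypothetical user
    password="***",              # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Cluster a large fact table on common filter columns to prune micro-partitions.
    cur.execute("ALTER TABLE fact_sales CLUSTER BY (sale_date, region_id)")
    # Precompute a frequently queried aggregate (requires Enterprise Edition).
    cur.execute("""
        CREATE OR REPLACE MATERIALIZED VIEW mv_daily_sales AS
        SELECT sale_date, region_id, SUM(amount) AS total_amount
        FROM fact_sales
        GROUP BY sale_date, region_id
    """)
finally:
    cur.close()
    conn.close()
```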
Posted 3 weeks ago
6.0 - 10.0 years
20 - 25 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Urgently hiring for a Data Engineer - AEP for our esteemed client.
Location: PAN India
Experienced data modelers with SQL, ETL, and some development background, to define new data schemas and data ingestion for Adobe Experience Platform customers. Interface directly with enterprise customers and collaborate with internal teams.
Must Have:
6-9 years of strong experience with data transformation & ETL on large data sets
Experience with designing customer-centric datasets (i.e., CRM, Call Center, Marketing, Offline, Point of Sale, etc.)
4+ years of Data Modeling experience (i.e., Relational, Dimensional, Columnar, Big Data)
5+ years of complex SQL experience
Experience in advanced Data Warehouse concepts
Experience in industry ETL tools (i.e., Informatica, Unifi)
Experience with Business Requirements definition and management, structured analysis, process design, and use case documentation
Exceptional organizational skills and the ability to multi-task across simultaneous customer projects
Strong verbal & written communication skills to interface with the Sales team and lead customers to a successful outcome
Must be self-managed, proactive, and customer-focused
Degree in Computer Science, Information Systems, Data Science, or a related field
Special consideration given for:
Experience & knowledge with Adobe Experience Cloud solutions
Experience & knowledge with Digital Analytics or Digital Marketing
Experience in programming languages (Python, Java, or Bash scripting)
Experience with Big Data technologies (i.e., Hadoop, Spark, Redshift, Snowflake, Hive, Pig, etc.)
Experience as an enterprise technical or engineering consultant
100% matching profiles can send their resume to neha.sahu@sonyocareers.com
Posted 3 weeks ago
7.0 - 12.0 years
15 - 27 Lacs
Pune
Hybrid
Notice Period: Immediate joiner
Responsibilities:
Lead, develop, and support analytical pipelines to acquire, ingest, and process data from multiple sources
Debug, profile, and optimize integrations and ETL/ELT processes
Design and build data models that conform to our data architecture
Collaborate with various teams to deliver effective, high-value reporting solutions by leveraging an established DataOps delivery methodology
Continually recommend and implement process improvements and tools for data collection, analysis, and visualization
Address production support issues promptly, keeping stakeholders informed of status and resolutions
Partner closely with on- and offshore technical resources
Provide on-call support outside normal business hours as needed
Provide status updates to stakeholders; identify obstacles and seek assistance with enough lead time to ensure on-time delivery
Demonstrate technical ability, thoroughness, and accuracy in all assignments
Document and communicate proper operations, standards, policies, and procedures
Keep abreast of new tools and technologies related to our enterprise data architecture
Foster a positive work environment by promoting teamwork and open communication
Skills/Qualifications:
Bachelor's degree in computer science with a focus on data engineering preferred
6+ years of experience in data warehouse development, building and managing data pipelines in cloud computing environments
Strong proficiency in SQL and Python
Experience with Azure cloud services, including Azure Data Lake Storage, Data Factory, and Databricks
Expertise in Snowflake or similar cloud warehousing technologies
Experience with GitHub, including GitHub Actions
Familiarity with data visualization tools, such as Power BI or Spotfire
Excellent written and verbal communication skills
Strong team player with the interpersonal skills to interact at all levels
Ability to translate technical information for both technical and non-technical audiences
Proactive mindset with a sense of urgency and initiative
Adaptability to changing priorities and needs
If you are interested, share your updated resume at recruit5@focusonit.com. Please also spread this message across your network and contacts.
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a highly motivated and experienced Data Engineer, you will be responsible for designing, developing, and implementing solutions that enable seamless data integration across multiple cloud platforms. Your expertise in data lake architecture, Iceberg tables, and cloud compute engines like Snowflake, BigQuery, and Athena will ensure efficient and reliable data access for various downstream applications.
Your key responsibilities will include collaborating with stakeholders to understand data needs and define schemas, and designing and implementing data pipelines for ingesting, transforming, and storing data. You will also develop data transformation logic to make Iceberg tables compatible with the data access requirements of Snowflake, BigQuery, and Athena, and design and implement solutions for seamless data transfer and synchronization across different cloud platforms. Ensuring data consistency and quality across the data lake and target cloud environments will be crucial in your role. Additionally, you will analyze data patterns and identify performance bottlenecks in data pipelines, implement data optimization techniques to improve query performance and reduce data storage costs, and monitor data lake health to proactively address potential issues. Collaboration and communication with architects, leads, and other stakeholders to ensure data quality meets specific requirements will also be an essential part of your role.
To be successful in this position, you should have a minimum of 4+ years of experience as a Data Engineer, strong hands-on experience with data lake architectures and technologies, proficiency in SQL and scripting languages, and experience with data governance and security best practices. Excellent problem-solving and analytical skills, strong communication and collaboration skills, and familiarity with cloud-native data tools and services are also required. Additionally, certifications in relevant cloud technologies will be beneficial.
In return, GlobalLogic offers exciting projects in industries like High-Tech, communication, media, healthcare, retail, and telecom. You will have the opportunity to collaborate with a diverse team of highly talented individuals in an open, laid-back environment. Work-life balance is prioritized with flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional development opportunities include communication skills training, stress management programs, professional certifications, and technical and soft-skill trainings. GlobalLogic provides competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), extended maternity leave, annual performance bonuses, and referral bonuses. Fun perks such as sports events, cultural activities, food at subsidized rates, corporate parties, dedicated GL Zones, rooftop decks, and discounts at popular stores and restaurants are also part of the vibrant office culture at GlobalLogic.
About GlobalLogic: GlobalLogic is a leader in digital engineering, helping brands design and build innovative products, platforms, and digital experiences for the modern world. By integrating experience design, complex engineering, and data expertise, GlobalLogic helps clients accelerate their transition into tomorrow's digital businesses. Operating under Hitachi, Ltd., GlobalLogic contributes to driving innovation through data and technology for a sustainable society with a higher quality of life.
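One of the cross-engine access paths this role describes, querying an Iceberg table through Athena, might look like the following boto3 sketch. The database, table, region, and S3 output location are hypothetical placeholders.

```python
# Minimal sketch: run an Athena query against an Iceberg table with boto3.
# Database, table, and S3 output location are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events_iceberg GROUP BY event_date",
    QueryExecutionContext={"Database": "lakehouse_db"},               # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query finishes (simplified; real code should back off and bound retries).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows[:5])  # header row plus first few result rows
```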
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
Skill: Data Engineer (Power BI)
Band in Infosys: 5
Role: Technology Lead
Qualification: B.E/B.Tech
Job Description: 6 to 10 years of relevant experience, able to fulfill a role of managing delivery, coaching team members, and leading best practices and procedures. Power BI, SSRS, Power BI Report Builder, AAS (SSAS Tabular Model), and MSBI, especially as a SQL Server developer, SSIS developer, or ETL developer. Well experienced in creating data models, building Power BI reports on top of those models, and publishing them over Power BI services. Experience in creating workspaces.
Work Location: Pune, Hyderabad, Bhubaneswar
Posted 3 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad
Work from Office
• Looking for 4-15 years of experience as a Data Engineer
• Strong experience in SQL, T-SQL, Azure Data Factory (ADF), and Databricks
• Good to have: experience in SSIS & Python
Notice Period: Immediate
Email: sachin@assertivebs.com
Posted 3 weeks ago
4.0 - 8.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Hands-on expertise with ETL tools and logic, with a strong preference for IDMC.
Application Development/Support: Demonstrated success in either application development or support roles.
Python Proficiency: Strong understanding of Python, with practical coding experience.
AWS: Comprehensive knowledge of AWS services and their applications.
Airflow: Creating and managing Airflow DAG scheduling.
Unix & SQL: Solid command of Unix commands, shell scripting, and writing efficient SQL scripts.
Analytical & Troubleshooting Skills: Exceptional ability to analyze data and resolve complex issues.
Development Tasks: Proven capability to execute a variety of development activities with efficiency.
Insurance Domain Knowledge: Familiarity with the Insurance sector is highly advantageous.
Production Data Management: Significant experience in managing and processing production data.
Work Schedule Flexibility: Open to working in any shift, including 24/7 support and production support, as required.
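Since the posting calls out creating and managing Airflow DAG scheduling, here is a minimal sketch of a daily DAG with two dependent tasks. The dag_id and task bodies are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Minimal sketch of an Airflow DAG: a daily extract task feeding a load task.
# dag_id and task logic are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source")    # placeholder for real extract logic

def load():
    print("write data to warehouse")  # placeholder for real load logic

with DAG(
    dag_id="daily_policy_feed",        # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # assumes Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task          # run extract before load
```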
Posted 3 weeks ago
4.0 - 6.0 years
5 - 12 Lacs
Pune
Work from Office
Expert in Python coding
Review the existing code and optimize it
Text parsers and scrapers
Review the pipelines coming in
Experience in ADF pipelines
Ability to guide the team with tasks on a regular basis and support them technically
Support other leads and coordinate for faster deliverables
Identify how AI can be brought in; use or create LLMs to improve GTM
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad, Chennai, Coimbatore
Hybrid
Our client is a global IT service & consulting organization.
Role: Data Software Engineer
Experience: 5-12 years
Skills: Python, Spark, Azure Databricks/GCP/AWS
Location: Hyderabad, Chennai, Coimbatore
Notice period: Immediate to 60 days
F2F interview on 12th July, Saturday
Posted 3 weeks ago
6.0 - 10.0 years
10 - 17 Lacs
Bengaluru
Remote
Job Summary: We are looking for a highly skilled Data Engineer with 6+ years of experience to join our team on a contract basis. The ideal candidate will have a strong background in data engineering with deep expertise in DBT (Data Build Tool) and Apache Airflow for building robust data pipelines. You will be responsible for designing, developing, and optimizing data workflows to support analytics and business intelligence initiatives.
Key Responsibilities:
Design, build, and maintain scalable and reliable data pipelines using DBT and Airflow.
Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
Optimize data transformation workflows to improve efficiency, quality, and maintainability.
Implement and maintain data quality checks and validation logic.
Monitor pipeline performance, troubleshoot failures, and ensure timely data delivery.
Develop and maintain documentation for data processes, models, and flows.
Work with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift) for storage and transformation.
Support ETL/ELT jobs and integrate data from multiple sources (APIs, databases, flat files).
Ensure best practices in version control, CI/CD, and automation of data workflows.
Required Skills and Experience:
6+ years of hands-on experience in data engineering.
Strong proficiency with DBT for data modeling and transformation.
Experience with Apache Airflow for orchestration and workflow scheduling.
Solid understanding of SQL and relational data modeling principles.
Experience working with modern cloud data platforms (e.g., Snowflake, BigQuery, Databricks, or Redshift).
Proficiency in Python or a similar scripting language for data manipulation and automation.
Familiarity with version control systems like Git and collaborative development workflows.
Experience with CI/CD tools and automated testing frameworks for data pipelines.
Excellent problem-solving skills and ability to work independently.
Strong communication and documentation skills.
Nice to Have:
Experience with streaming data platforms (e.g., Kafka, Spark Streaming).
Knowledge of data governance, security, and compliance best practices.
Experience with dashboarding tools (e.g., Looker, Tableau, Power BI) for data validation.
Exposure to agile development methodologies.
Contract Terms:
Commitment: Full-time, 8 hours/day
Duration: 6 months, with possible extension
Location: Remote
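As a small illustration of the DBT-plus-Airflow workflow this role centres on, the sketch below drives dbt's build-and-test cycle from Python. The project path is hypothetical, and in an Airflow deployment these calls would usually live in individual DAG tasks (for example, BashOperator tasks).

```python
# Minimal sketch: run dbt's build-and-test cycle from Python.
# The project directory is a hypothetical placeholder.
import subprocess

def run_dbt(command: str) -> None:
    """Run a dbt CLI command and fail loudly on a non-zero exit code."""
    subprocess.run(
        ["dbt", command, "--project-dir", "/opt/dbt/analytics"],  # hypothetical path
        check=True,
    )

run_dbt("run")   # materialize the models
run_dbt("test")  # run schema and data tests against the fresh models
```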
Posted 3 weeks ago
2.0 - 5.0 years
0 - 3 Lacs
Jaipur
Work from Office
Job Role: Data Engineer
Job Location: Jaipur
Job Type: Permanent
Experience Required: 2-5 years
As a Data Engineer, you will play a critical role in designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with our data scientists, analysts, and other stakeholders to ensure data is accurate, timely, and accessible. Your contributions will directly impact our data-driven decision-making and support our growth.
Key Responsibilities:
Data Pipeline Development: Design, develop, and implement data pipelines using Azure Data Factory and Databricks to support the ingestion, transformation, and movement of data.
ETL Processes: Develop and optimize ETL (Extract, Transform, Load) processes to ensure efficient data flow and transformation.
Data Lake Management: Develop and maintain Azure Data Lake solutions, ensuring efficient storage and retrieval of large datasets.
Data Warehousing: Work with Azure Synapse Analytics to build and manage scalable data warehousing solutions that enable advanced analytics and reporting.
Data Integration: Integrate various data sources into MS Fabric, ensuring data consistency, quality, and accessibility across different platforms.
Performance Optimization: Optimize data processing workflows and storage solutions to improve performance and reduce costs.
Database Management: Manage and optimize databases (SQL and NoSQL) to support high-performance queries and data storage requirements.
Data Quality: Implement data quality checks and monitoring to ensure accuracy and consistency of data.
Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver actionable insights.
Documentation: Create and maintain comprehensive documentation for data processes, pipelines, infrastructure, architecture, and best practices.
Troubleshooting and Support: Identify and resolve issues in data pipelines, data lakes, and warehousing solutions, providing timely support and maintenance.
Qualifications:
Experience: 2-4 years of experience in data engineering or a related field.
Technical Skills:
Proficiency with Azure Data Factory, Azure Synapse Analytics, Databricks, and Azure Data Lake
Experience with Microsoft Fabric is a plus
Strong SQL skills and experience with data warehousing (DWH) concepts
Knowledge of data modeling, ETL processes, and data integration
Experience with relational databases (e.g., MS SQL, PostgreSQL, MySQL)
Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, Talend)
Knowledge of big data technologies (e.g., Hadoop, Spark) is a plus
Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and associated data services (e.g., S3, Redshift, BigQuery)
Familiarity with data visualization tools (e.g., Power BI) and experience with programming languages such as Python, Java, or Scala
Experience with schema design and dimensional data modeling
Analytical Skills: Strong problem-solving abilities and attention to detail.
Communication: Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.
Education: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field. Advanced degrees or certifications are a plus.
Thanks & Regards
Sulabh Tailang
HR-Talent Acquisition Manager | Celebal Technologies | +91-9448844746
Sulabh.tailang@celebaltech.com | LinkedIn: sulabhtailang | Twitter: Ersulabh
Website: www.celebaltech.com
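The data quality responsibilities above can be as simple as assertion-style checks run before a dataset is published. Below is a hypothetical pandas sketch; the column names and thresholds are illustrative, not from the posting.

```python
# Minimal sketch: lightweight data quality checks on a pandas DataFrame,
# of the kind run between staging and publication. Columns and thresholds
# are hypothetical.
import pandas as pd

def check_quality(df: pd.DataFrame) -> list:
    """Return a list of human-readable data quality violations."""
    problems = []
    if df.empty:
        problems.append("staging extract is empty")
    if df["customer_id"].isna().mean() > 0.01:            # hypothetical column
        problems.append("more than 1% of customer_id values are null")
    if df.duplicated(subset=["customer_id"]).any():
        problems.append("duplicate customer_id values found")
    return problems

# Stand-in for a real extract from a staging table.
df = pd.DataFrame({"customer_id": [1, 2, 2, None]})
for issue in check_quality(df):
    print("DQ violation:", issue)
```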
Posted 3 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & responsibilities
Minimum 5+ years of experience in developing, designing, and implementing Data Engineering solutions.
Collaborate with data engineers and architects to design and optimize data models for Snowflake Data Warehouse.
Optimize query performance and data storage in Snowflake by utilizing clustering, partitioning, and other optimization techniques.
Experience working on projects hosted within an Amazon Web Services (AWS) cloud environment.
Experience working on projects built with Tableau and DBT.
Work closely with business stakeholders to understand requirements and translate them into technical solutions.
Excellent presentation and communication skills, both written and verbal, with the ability to problem-solve and design in an environment with unclear requirements.
Posted 3 weeks ago
3.0 - 8.0 years
13 - 23 Lacs
Bengaluru
Hybrid
Job Title: Data Engineer
Reporting Line: Data Engineering and Solutions
Role Type: Permanent
Experience: 6+ years
Summary: As a Data Engineer you will work on implementing complex data projects with a focus on collecting, parsing, managing, analysing, and visualising large sets of data to turn information into value using multiple platforms. You will work with business analysts and data scientists to understand customer business problems and needs, secure the data supply chain, implement analysis solutions, and visualise outcomes that support improved decision making for a customer. You will understand how to apply technologies to solve big data problems and to develop innovative big data solutions.
Key Accountabilities:
Working with colleagues to understand and implement requirements
Securing the data supply chain, understanding how data is ingested from different sources and combined/transformed into a single data set
Understanding how to analyse, cleanse, join, and transform data
Implementing designed/specified solutions on the chosen platform
Working with colleagues to ensure that the on-prem/cloud infrastructure available is capable of meeting the solution requirements
Planning, designing, and conducting tests of the implementations, correcting errors and re-testing to achieve an acceptable result
Appreciating how to manage the data, including security, archiving, structure, and storage
Key Skills and Technical Competencies:
Degree-level education in a Mathematics, Scientific, Computing, or Engineering discipline, or equivalent experience
6+ years of experience at various levels of Data Engineering roles, including 3 years in a technical lead role managing an end-to-end solution
Hands-on experience with Azure Databricks using PySpark, with the ability to do cluster capacity planning and workload optimization
Experience in designing solutions using databases and data storage technology using an RDBMS (MS SQL Server)
Experience building and optimizing Big Data pipelines, architectures, and data sets using MS Azure data management and processing components through IaaS/PaaS/SaaS implementation models
Experience in Azure Data Factory
Proficiency in Python scripting; experienced in using different Python modules for data munging
Up to date with data processing technology/platforms such as Spark (Databricks)
Experience in data modeling for optimizing solution performance
Experienced in Azure DevOps with exposure to CI/CD
Good understanding of infrastructure components and their fit in different types of data solutions
Experience of designing solutions deployed on Microsoft and Linux operating systems
Experience of working in an agile environment, within a self-organising team
Behavioural Competencies: We are adopting a winning mindset, aligning around our strategy, and being guided by a clear set of behaviours:
PUT SAFETY FIRST: Prioritising the safety of our people and products and supporting each other to speak up.
DO THE RIGHT THING: Supporting a culture of caring and belonging where we listen first, embrace feedback, and act with integrity.
KEEP IT SIMPLE: Working together to share and execute ideas and staying adaptable to new ideas and solutions.
MAKE A DIFFERENCE: Thinking about the business impact of our choices and the business outcomes of our decisions, and challenging ourselves to deliver excellence and efficiency every day on the things that matter.
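As a flavour of the Azure Databricks PySpark work described above, here is a minimal sketch that reads raw JSON from ADLS, cleanses it, and writes a curated Delta table. The storage paths and column names are hypothetical, and Delta support is assumed to be available, as it is by default on Databricks.

```python
# Minimal sketch of a PySpark curation job on Azure Databricks:
# read raw JSON, deduplicate and filter, and write a curated Delta table.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curate-orders").getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")  # hypothetical

curated = (
    raw.dropDuplicates(["order_id"])            # remove duplicate orders
       .filter(F.col("amount") > 0)             # drop invalid amounts
       .withColumn("ingest_date", F.current_date())
)

curated.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplelake.dfs.core.windows.net/orders/"  # hypothetical
)
```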
Posted 4 weeks ago
8.0 - 13.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Data Engineer: MDM
8+ years' experience as a Software Engineer (SE), Data Engineer, or Data Analyst
5+ years of experience in data management, with at least 3 years of hands-on experience with Informatica MDM
Please share your CV at rakhi.ankush@talentcorner.in
Posted 4 weeks ago
8.0 - 12.0 years
25 - 32 Lacs
Hyderabad
Hybrid
Key Skills: Data Engineer, Cloud, Snowflake DB, Data Modeling, SQL, DevOps
Roles and Responsibilities:
Data Solution Design & Modeling
Develop conceptual, logical, and physical data models based on business requirements and data architecture principles.
Create data mappings and transformation rules, and maintain comprehensive metadata artifacts such as data dictionaries and business glossaries.
Collaborate with business SMEs and data stewards to align models with business processes and terminologies.
Define and enforce data modeling standards, naming conventions, and design patterns.
Support the end-to-end software development lifecycle (SDLC), including testing, deployment, and post-production issue resolution.
Technical Troubleshooting & Optimization
Perform data profiling, root cause analysis, and long-term resolutions for recurring data issues.
Conduct impact assessments for upstream and downstream changes in data pipelines or models.
Integrate data from various sources (APIs, flat files, databases) into Snowflake, ensuring performance and scalability.
DevOps & Operations
Work closely with Data Engineers and Business Analysts to ensure version control, CI/CD pipelines, and automated deployments using Git, Bitbucket, and related DevOps tools.
Ensure all data solutions are compliant with governance, security, and regulatory requirements.
Skills Required:
Minimum 8+ years of experience in data engineering and data modeling roles.
Proven track record with cloud-based data platforms, particularly Snowflake.
Hands-on expertise with SQL, data integration, and warehouse performance optimization.
Experience in Life Insurance, Banking, or other regulated industries is a strong advantage.
Skills & Competencies:
Strong knowledge of data modeling concepts and frameworks (e.g., Kimball, Inmon).
Deep expertise in Snowflake performance tuning, security, and architecture.
Strong command of SQL for data analysis, transformation, and pipeline development.
Familiarity with DevOps, CI/CD, and source control systems (Git, Bitbucket).
Solid understanding of data governance, metadata, data lineage, and data quality frameworks.
Ability to conduct stakeholder workshops, capture business requirements, and translate them into technical designs.
Excellent problem-solving, documentation, and communication skills.
Preferred Knowledge:
Experience in regulated industries such as insurance or banking.
Understanding of data risk, regulatory expectations, and compliance frameworks in financial services.
Education: Bachelor's degree in Computer Science, Information Systems, or a related field. SnowPro Core Certification is highly preferred.
Posted 4 weeks ago
3.0 - 8.0 years
9 - 19 Lacs
Hyderabad
Work from Office
We, Advantum Health Pvt. Ltd, a US Healthcare MNC, are looking for a Senior AI/ML Engineer. Advantum Health Private Limited is a leading RCM and Medical Coding company, operating since 2013. Our Head Office is located in Hyderabad, with branch operations in Chennai and Noida. We are proud to be a Great Place to Work certified organization and a recipient of the Telangana Best Employer Award. Our office spans 35,000 sq. ft. in Cyber Gateway, Hitech City, Hyderabad.
Job Title: Senior AI/ML Engineer
Location: Hitech City, Hyderabad, India (work from office)
Ph: 9177078628, 7382307530, 9059683624
Address: Advantum Health Private Limited, Cyber Gateway, Block C, 4th Floor, Hitech City, Hyderabad.
Location map: https://www.google.com/maps/place/Advantum+Health+India/@17.4469674,78.3747158,289m/data=!3m2!1e3!5s0x3bcb93e01f1bbe71:0x694a7f60f2062a1!4m6!3m5!1s0x3bcb930059ea66d1:0x5f2dcd85862cf8be!8m2!3d17.4467126!4d78.3767566!16s%2Fg%2F11whflplxg?entry=ttu&g_ep=EgoyMDI1MDMxNi4wIKXMDSoASAFQAw%3D%3D
Job Summary: We are seeking a highly skilled and motivated Data Engineer to join our growing data team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, machine learning, and business intelligence initiatives. You will work closely with data analysts, scientists, and engineers to ensure data availability, reliability, and quality across the organization.
Key Responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines for ingesting and transforming large volumes of structured and unstructured data
Build and optimize data infrastructure for scalability, performance, and reliability
Collaborate with cross-functional teams to understand data needs and translate them into technical solutions
Implement data quality checks, monitoring, and alerting mechanisms
Manage and optimize data storage solutions (data warehouses, data lakes, databases)
Ensure data security, compliance, and governance across all platforms
Automate data workflows and optimize data delivery for real-time and batch processing
Participate in code reviews and contribute to best practices for data engineering
Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field
3+ years of experience in data engineering or related roles
Strong programming skills in Python, Java, or Scala
Proficiency with SQL and working with relational databases (e.g., PostgreSQL, MySQL)
Experience with data pipeline and workflow orchestration tools (e.g., Airflow, Prefect, Luigi)
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and cloud data services (e.g., Redshift, BigQuery, Snowflake)
Familiarity with distributed data processing tools (e.g., Spark, Kafka, Hadoop)
Solid understanding of data modeling, warehousing concepts, and data governance
Preferred Qualifications:
Experience with CI/CD and DevOps practices for data engineering
Knowledge of data privacy regulations such as GDPR, HIPAA, etc.
Experience with version control systems like Git
Familiarity with containerization (Docker, Kubernetes)
Follow us on LinkedIn, Facebook, Instagram, YouTube, and Threads for all updates:
Advantum Health LinkedIn page: https://www.linkedin.com/showcase/advantum-health-india/
Advantum Health Facebook page: https://www.facebook.com/profile.php?id=61564435551477
Advantum Health Instagram page: https://www.instagram.com/reel/DCXISlIO2os/?igsh=dHd3czVtc3Fyb2hk
Advantum Health India YouTube link: https://youtube.com/@advantumhealthindia-rcmandcodi?si=265M1T2IF0gF-oF1
Advantum Health Threads link: https://www.threads.net/@advantum.health.india
HR Dept, Advantum Health Pvt Ltd
Cyber Gateway, Block C, Hitech City, Hyderabad
Ph: 9177078628, 7382307530, 9059683624
Posted 4 weeks ago
5.0 - 8.0 years
22 - 32 Lacs
Bengaluru
Work from Office
Work with the team to define high-level technical requirements and architecture for the back-end services, data components, and data monetization components
Develop new application features and enhance existing ones
Develop relevant documentation and diagrams
Required candidate profile:
Minimum 5+ years of experience in Python development, with a focus on data-intensive applications
Experience with Apache Spark & PySpark for large-scale data processing
Understanding of SQL and experience working with relational databases
Posted 4 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & responsibilities
Minimum 5+ years of experience in developing, designing, and implementing Data Engineering solutions.
Collaborate with data engineers and architects to design and optimize data models for Snowflake Data Warehouse.
Optimize query performance and data storage in Snowflake by utilizing clustering, partitioning, and other optimization techniques.
Experience working on projects hosted within an Amazon Web Services (AWS) cloud environment.
Experience working on projects built with Tableau and DBT.
Work closely with business stakeholders to understand requirements and translate them into technical solutions.
Excellent presentation and communication skills, both written and verbal, with the ability to problem-solve and design in an environment with unclear requirements.
Posted 4 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & responsibilities
At least 5 years of experience in data engineering, with a strong background in Azure Databricks, Scala/Python, and Streamlit
Experience in handling unstructured data processing and transformation, with programming knowledge
Hands-on experience in building data pipelines using Scala/Python
Big data technologies such as Apache Spark, Structured Streaming, SQL, and Databricks Delta Lake
Strong analytical and problem-solving skills, with the ability to troubleshoot Spark applications and resolve data pipeline issues
Familiarity with version control systems like Git and CI/CD pipelines using Jenkins
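Matching the Structured Streaming and Delta Lake stack named above, a minimal streaming cleanse job might look like the following sketch; the table paths and checkpoint location are hypothetical placeholders, and Delta support is assumed to be available as on Databricks.

```python
# Minimal sketch: a Structured Streaming job over Delta Lake that streams
# records from a raw table into a cleansed one. Paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-cleanse").getOrCreate()

# Read the raw Delta table as a stream (hypothetical path).
stream = spark.readStream.format("delta").load("/mnt/raw/events")

# Drop records with no event type before publishing downstream.
cleansed = stream.filter(F.col("event_type").isNotNull())

query = (
    cleansed.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/stream-cleanse")  # hypothetical
    .outputMode("append")
    .start("/mnt/silver/events")                                       # hypothetical
)
query.awaitTermination()
```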
Posted 4 weeks ago
7.0 - 12.0 years
6 - 16 Lacs
Bengaluru
Remote
5+ years' experience with strong SQL query/development skills
Hands-on experience with ETL tools
Experience working in the healthcare industry with PHI/PII
Posted 4 weeks ago
4.0 - 7.0 years
5 - 13 Lacs
Hyderabad
Hybrid
Summary:
Design, develop, and implement scalable batch/real-time data pipelines (ETLs) to integrate data from a variety of sources into the Data Warehouse and Data Lake.
Design and implement data model changes that align with warehouse dimensional modeling standards.
Proficient in Data Lake and Data Warehouse concepts and dimensional data models.
Responsible for maintenance and support of all database environments; design and develop data pipelines, workflows, and ETL solutions in both on-prem and cloud-based environments.
Design and develop SQL stored procedures, functions, views, and triggers.
Design, code, test, document, and troubleshoot deliverables.
Collaborate with others to test and resolve issues with deliverables.
Maintain awareness of and ensure adherence to Zelis standards regarding privacy.
Create and maintain design documents, source-to-target mappings, unit test cases, and data seeding.
Ability to perform data analysis and data quality tests and to create audits for the ETLs.
Perform continuous integration and deployment using Azure DevOps and Git.
Requirements:
3+ years Microsoft BI Stack (SSIS, SSRS, SSAS)
3+ years data engineering experience, including data analysis
3+ years programming SQL objects (procedures, triggers, views, functions) in SQL Server
Experience optimizing SQL queries
Advanced understanding of T-SQL, indexes, stored procedures, triggers, functions, views, etc.
Experience designing and implementing Data Warehouses
Working knowledge of Azure/AWS architecture and Data Lakes
Must be detail-oriented
Must work under limited supervision
Must demonstrate good analytical skills as they relate to data identification and mapping, and excellent oral communication skills
Must be flexible, able to multi-task, and able to work within deadlines; must be team-oriented but also able to work independently
Preferred Skills:
Experience working with an ETL tool (DBT preferred)
Working experience designing and developing Azure/AWS Data Factory pipelines
Working understanding of columnar MPP cloud data warehouses, using Snowflake
Working knowledge of managing data in the Data Lake
Business analysis experience to analyze data, write code, and drive solutions
Working knowledge of Git, Azure DevOps, Agile, Jira, and Confluence
Healthcare and/or payment processing experience
Independence/Accountability:
Requires minimal daily supervision.
Receives detailed instruction on new assignments and determines next steps with guidance.
Regularly reviews goals and objectives with supervisor.
Demonstrates competence in relevant job responsibilities, which allows for an increasing level of independence.
Ability to manage and prioritize multiple tasks.
Ability to work under pressure and meet deadlines.
Problem Solving:
Makes logical suggestions of likely causes of problems and independently suggests solutions.
Excellent organizational skills are required to prioritize responsibilities and complete work in a timely fashion.
Outstanding ability to multitask as required.
Excellent project management and/or business analysis skills.
Attention to detail and concern for impact are essential.
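For the stored-procedure work this role lists, here is a hypothetical sketch that creates and runs a T-SQL procedure from Python with pyodbc. The connection string, schema, and procedure body are illustrative only, and CREATE OR ALTER assumes SQL Server 2016 SP1 or later.

```python
# Minimal sketch: create and execute a SQL Server stored procedure via pyodbc.
# Connection details, schema, and procedure body are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=example-host;"
    "DATABASE=warehouse;UID=etl_user;PWD=***"  # hypothetical; use a secrets store
)
cur = conn.cursor()

# CREATE OR ALTER requires SQL Server 2016 SP1+; the body is illustrative.
cur.execute("""
CREATE OR ALTER PROCEDURE dbo.usp_refresh_claim_summary AS
BEGIN
    SET NOCOUNT ON;
    TRUNCATE TABLE dbo.claim_summary;
    INSERT INTO dbo.claim_summary (claim_date, total_claims)
    SELECT claim_date, COUNT(*) FROM dbo.claims GROUP BY claim_date;
END
""")
conn.commit()

cur.execute("EXEC dbo.usp_refresh_claim_summary")  # run the refresh
conn.commit()
conn.close()
```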
Posted 4 weeks ago
7.0 - 12.0 years
25 - 40 Lacs
Gurugram
Remote
Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
YoE: 7 to 10 years of relevant experience
Shift: 6.30pm to 2.30am IST
Job Purpose: The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, compliant with HIPAA, and optimized.
Duties & Responsibilities:
Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions.
Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS).
Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning.
Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures.
Enforce data quality and governance standards, including HIPAA compliance for PHI handling.
Conduct code reviews, share best practices, and mentor junior data engineers.
Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms.
Document data workflows, schemas, and operational runbooks to support team knowledge transfer.
Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
5+ years of hands-on experience building and operating production-grade data pipelines.
Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or similar tools such as Microsoft SSIS, AWS Glue, or Airflow.
Strong SQL skills on PostgreSQL for data modeling and query optimization, or other similar technologies (Microsoft SQL Server, Oracle, AWS RDS).
Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch.
Basic proficiency in Python and Python data frameworks (Pandas, PySpark).
Experience with version control (GitHub) and CI/CD for data projects.
Familiarity with healthcare data standards and HIPAA compliance.
Excellent problem-solving skills, attention to detail, and ability to work independently.
Strong communication skills, with experience mentoring or leading small technical efforts.
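Given the Dagster and DBT stack this posting names, a minimal Dagster asset graph could look like the sketch below. The asset names and placeholder data are hypothetical; in practice the extract would read from the AWS services listed above and SQL transforms would typically live in DBT models.

```python
# Minimal sketch: a two-asset Dagster graph, with the downstream asset
# depending on the upstream one by parameter name. Names and data are
# hypothetical placeholders.
import pandas as pd
from dagster import Definitions, asset

@asset
def raw_denials() -> pd.DataFrame:
    # Placeholder for a real extract (e.g., from Athena or PostgreSQL).
    return pd.DataFrame({"claim_id": [1, 2], "status": ["denied", "denied"]})

@asset
def scored_denials(raw_denials: pd.DataFrame) -> pd.DataFrame:
    # Placeholder transform; in this stack, SQL transforms would usually be DBT models.
    return raw_denials.assign(priority="high")

defs = Definitions(assets=[raw_denials, scored_denials])
```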
Posted 1 month ago