5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & responsibilities:
- Minimum 5+ years of experience developing, designing, and implementing data engineering solutions.
- Collaborate with data engineers and architects to design and optimize data models for the Snowflake Data Warehouse.
- Optimize query performance and data storage in Snowflake using clustering, partitioning, and other optimization techniques (a minimal sketch follows below).
- Experience working on projects housed within an Amazon Web Services (AWS) cloud environment.
- Experience working with Tableau and dbt.
- Work closely with business stakeholders to understand requirements and translate them into technical solutions.
- Excellent presentation and communication skills, both written and verbal, and the ability to problem-solve and design in an environment with unclear requirements.
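A minimal sketch of the Snowflake clustering work mentioned above, using the snowflake-connector-python package; the connection details and the SALES_FACT table with its SALE_DATE and REGION columns are hypothetical, not taken from the posting.

```python
# Hedged sketch: set a clustering key on a large fact table and inspect
# clustering health. Credentials, table, and columns are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS_DB",
    schema="PUBLIC",
)
cur = conn.cursor()

# Cluster on the columns most queries filter by, so Snowflake can prune
# micro-partitions instead of scanning the whole table.
cur.execute("ALTER TABLE SALES_FACT CLUSTER BY (SALE_DATE, REGION)")

# A high average clustering depth here suggests the key is not helping.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('SALES_FACT')")
print(cur.fetchone()[0])

cur.close()
conn.close()
```

Snowflake reclusters automatically in the background, so the usual workflow is to choose the key once for large, frequently filtered tables and then monitor clustering depth, rather than recluster by hand.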
Posted 2 months ago
3.0 - 8.0 years
13 - 23 Lacs
Bengaluru
Hybrid
Job Title: Data Engineer
Reporting Line: Data Engineering and Solutions
Role Type: Permanent
Experience: 6+ years

Summary: As a Data Engineer you will work on implementing complex data projects, focusing on collecting, parsing, managing, analysing, and visualising large sets of data to turn information into value using multiple platforms. You will work with business analysts and data scientists to understand customer business problems and needs, secure the data supply chain, implement analysis solutions, and visualise outcomes that support improved decision making for a customer. You will understand how to apply technologies to solve big data problems and develop innovative big data solutions.

Key Accountabilities:
- Working with colleagues to understand and implement requirements.
- Securing the data supply chain: understanding how data is ingested from different sources and combined/transformed into a single data set.
- Understanding how to analyse, cleanse, join, and transform data.
- Implementing designed/specified solutions on the chosen platform.
- Working with colleagues to ensure that the available on-prem/cloud infrastructure can meet the solution requirements.
- Planning, designing, and conducting tests of the implementations, correcting errors and re-testing to achieve an acceptable result.
- Managing the data, including security, archiving, structure, and storage.

Key Skills and Technical Competencies:
- Degree-level education in a Mathematics, Scientific, Computing, or Engineering discipline, or equivalent experience.
- 6+ years of experience across Data Engineering roles, including 3 years in a technical lead role managing end-to-end solutions.
- Hands-on experience with Azure Databricks using PySpark, with the ability to do cluster capacity planning and workload optimization (see the PySpark sketch after this posting).
- Experience designing solutions using databases and data storage technology with RDBMS (MS SQL Server).
- Experience building and optimizing big data pipelines, architectures, and data sets using MS Azure data management and processing components through IaaS/PaaS/SaaS implementation models, including custom development.
- Experience in Azure Data Factory.
- Proficiency in Python scripting; experienced with the Python modules used for data munging.
- Up to date with data processing technology/platforms such as Spark (Databricks).
- Experience in data modeling for optimizing solution performance.
- Experienced in Azure DevOps with exposure to CI/CD.
- Good understanding of infrastructure components and their fit in different types of data solutions.
- Experience designing solutions deployed on Microsoft and Linux operating systems.
- Experience working in an agile environment, within a self-organising team.

Behavioural Competencies: We are adopting a winning mindset, aligning around our strategy, and being guided by a clear set of behaviours:
- PUT SAFETY FIRST: Prioritising the safety of our people and products and supporting each other to speak up.
- DO THE RIGHT THING: Supporting a culture of caring and belonging where we listen first, embrace feedback, and act with integrity.
- KEEP IT SIMPLE: Working together to share and execute ideas and staying adaptable to new ideas and solutions.
- MAKE A DIFFERENCE: Thinking about the business impact of our choices and the outcomes of our decisions, and challenging ourselves to deliver excellence and efficiency every day on the things that matter.
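The Azure Databricks/PySpark requirement above reduces to pipeline steps like this minimal sketch, as it might run in a Databricks notebook; the storage paths, container names, and columns are assumptions for illustration only.

```python
# Hedged sketch of an ingest-and-transform step on Azure Databricks.
# Paths and columns are placeholders, not details from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

# Ingest raw CSVs landed by an upstream Azure Data Factory copy activity.
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("abfss://landing@yourlake.dfs.core.windows.net/orders/"))

# Cleanse and transform: de-duplicate and normalise timestamps.
cleansed = (raw.dropDuplicates(["order_id"])
            .withColumn("order_ts", F.to_timestamp("order_ts"))
            .withColumn("order_date", F.to_date("order_ts")))

# Persist as a Delta table for downstream reporting and analysis.
(cleansed.write.format("delta")
 .mode("overwrite")
 .save("abfss://curated@yourlake.dfs.core.windows.net/orders/"))
```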
Posted 2 months ago
8.0 - 13.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Data Engineer: MDM
- 8+ years' experience as an SE, Data Engineer, or Data Analyst.
- 5+ years of experience in data management, with at least 3 years of hands-on experience with Informatica MDM.
Please share your CV at rakhi.ankush@talentcorner.in
Posted 2 months ago
8.0 - 12.0 years
25 - 32 Lacs
Hyderabad
Hybrid
Key Skills: Data Engineer, Cloud, Snowflake DB, Data Modeling, SQL, DevOps

Roles and Responsibilities:

Data Solution Design & Modeling:
- Develop conceptual, logical, and physical data models based on business requirements and data architecture principles.
- Create data mappings and transformation rules, and maintain comprehensive metadata artifacts such as data dictionaries and business glossaries.
- Collaborate with business SMEs and data stewards to align models with business processes and terminologies.
- Define and enforce data modeling standards, naming conventions, and design patterns.
- Support the end-to-end software development lifecycle (SDLC), including testing, deployment, and post-production issue resolution.

Technical Troubleshooting & Optimization:
- Perform data profiling, root cause analysis, and long-term resolution of recurring data issues.
- Conduct impact assessments for upstream and downstream changes in data pipelines or models.
- Integrate data from various sources (APIs, flat files, databases) into Snowflake, ensuring performance and scalability.

DevOps & Operations:
- Work closely with data engineers and business analysts to ensure version control, CI/CD pipelines, and automated deployments using Git, Bitbucket, and related DevOps tools.
- Ensure all data solutions comply with governance, security, and regulatory requirements.

Skills Required:
- Minimum 8+ years of experience in data engineering and data modeling roles.
- Proven track record with cloud-based data platforms, particularly Snowflake.
- Hands-on expertise with SQL, data integration, and warehouse performance optimization.
- Experience in life insurance, banking, or other regulated industries is a strong advantage.

Skills & Competencies:
- Strong knowledge of data modeling concepts and frameworks (e.g., Kimball, Inmon).
- Deep expertise in Snowflake performance tuning, security, and architecture.
- Strong command of SQL for data analysis, transformation, and pipeline development.
- Familiarity with DevOps, CI/CD, and source control systems (Git, Bitbucket).
- Solid understanding of data governance, metadata, data lineage, and data quality frameworks.
- Ability to conduct stakeholder workshops, capture business requirements, and translate them into technical designs.
- Excellent problem-solving, documentation, and communication skills.

Preferred Knowledge:
- Experience in regulated industries such as insurance or banking.
- Understanding of data risk, regulatory expectations, and compliance frameworks in financial services.

Education:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- SnowPro Core Certification is highly preferred.
Posted 2 months ago
3.0 - 8.0 years
9 - 19 Lacs
Hyderabad
Work from Office
We, Advantum Health Pvt. Ltd., a US healthcare MNC, are looking for a Senior AI/ML Engineer. Advantum Health Private Limited is a leading RCM and medical coding company, operating since 2013. Our head office is located in Hyderabad, with branch operations in Chennai and Noida. We are proud to be a Great Place to Work certified organization and a recipient of the Telangana Best Employer Award. Our office spans 35,000 sq. ft. in Cyber Gateway, Hitech City, Hyderabad.

Job Title: Senior AI/ML Engineer
Location: Hitech City, Hyderabad, India (work from office)
Ph: 9177078628, 7382307530, 9059683624
Address: Advantum Health Private Limited, Cyber Gateway, Block C, 4th Floor, Hitech City, Hyderabad.
Map: https://www.google.com/maps/place/Advantum+Health+India/@17.4469674,78.3747158,289m/data=!3m2!1e3!5s0x3bcb93e01f1bbe71:0x694a7f60f2062a1!4m6!3m5!1s0x3bcb930059ea66d1:0x5f2dcd85862cf8be!8m2!3d17.4467126!4d78.3767566!16s%2Fg%2F11whflplxg?entry=ttu&g_ep=EgoyMDI1MDMxNi4wIKXMDSoASAFQAw%3D%3D

Job Summary: We are seeking a highly skilled and motivated Data Engineer to join our growing data team. In this role, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support analytics, machine learning, and business intelligence initiatives. You will work closely with data analysts, scientists, and engineers to ensure data availability, reliability, and quality across the organization.

Key Responsibilities:
- Design, develop, and maintain robust ETL/ELT pipelines for ingesting and transforming large volumes of structured and unstructured data.
- Build and optimize data infrastructure for scalability, performance, and reliability.
- Collaborate with cross-functional teams to understand data needs and translate them into technical solutions.
- Implement data quality checks, monitoring, and alerting mechanisms.
- Manage and optimize data storage solutions (data warehouses, data lakes, databases).
- Ensure data security, compliance, and governance across all platforms.
- Automate data workflows and optimize data delivery for real-time and batch processing.
- Participate in code reviews and contribute to best practices for data engineering.

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
- 3+ years of experience in data engineering or related roles.
- Strong programming skills in Python, Java, or Scala.
- Proficiency with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Experience with data pipeline and workflow orchestration tools (e.g., Airflow, Prefect, Luigi; see the DAG sketch at the end of this posting).
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and cloud data services (e.g., Redshift, BigQuery, Snowflake).
- Familiarity with distributed data processing tools (e.g., Spark, Kafka, Hadoop).
- Solid understanding of data modeling, warehousing concepts, and data governance.

Preferred Qualifications:
- Experience with CI/CD and DevOps practices for data engineering.
- Knowledge of data privacy regulations such as GDPR, HIPAA, etc.
- Experience with version control systems like Git.
- Familiarity with containerization (Docker, Kubernetes).

Follow us on LinkedIn, Facebook, Instagram, YouTube, and Threads for all updates:
- LinkedIn: https://www.linkedin.com/showcase/advantum-health-india/
- Facebook: https://www.facebook.com/profile.php?id=61564435551477
- Instagram: https://www.instagram.com/reel/DCXISlIO2os/?igsh=dHd3czVtc3Fyb2hk
- YouTube: https://youtube.com/@advantumhealthindia-rcmandcodi?si=265M1T2IF0gF-oF1
- Threads: https://www.threads.net/@advantum.health.india

HR Dept, Advantum Health Pvt Ltd, Cyber Gateway, Block C, Hitech City, Hyderabad. Ph: 9177078628, 7382307530, 9059683624
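Airflow is one of the orchestrators this posting names; here is a hedged sketch of a three-step daily DAG with a simple data-quality gate (assuming Airflow 2.4+). The DAG id, task bodies, and check are placeholders, not part of the posting.

```python
# Illustrative daily ETL DAG with a data-quality gate (Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull source rows (API, database, flat file) to staging.
    ...

def transform(**context):
    # Placeholder: apply business rules and load the warehouse table.
    ...

def quality_check(**context):
    # Placeholder check: replace with a COUNT(*) on the loaded table.
    row_count = 1
    if row_count == 0:
        raise ValueError("quality check failed: no rows loaded")

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    check_t = PythonOperator(task_id="quality_check", python_callable=quality_check)
    extract_t >> transform_t >> check_t
```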
Posted 2 months ago
5.0 - 8.0 years
22 - 32 Lacs
Bengaluru
Work from Office
Work with the team to define high-level technical requirements and architecture for the back-end services, data components, and data monetization components. Develop new application features and enhance existing ones. Develop relevant documentation and diagrams.

Required candidate profile: Minimum 5+ years of experience in Python development with a focus on data-intensive applications; experience with Apache Spark and PySpark for large-scale data processing; understanding of SQL and experience working with relational databases.
Posted 2 months ago
5.0 - 10.0 years
15 - 25 Lacs
Pune
Remote
Role & responsibilities:
• At least 5 years of experience in data engineering, with a strong background in Azure Databricks, Scala/Python, and Streamlit.
• Experience in unstructured data processing and transformation, with programming knowledge.
• Hands-on experience building data pipelines using Scala/Python.
• Big data technologies such as Apache Spark, Structured Streaming, SQL, and Databricks Delta Lake (see the streaming sketch after this list).
• Strong analytical and problem-solving skills, with the ability to troubleshoot Spark applications and resolve data pipeline issues.
• Familiarity with version control systems like Git and CI/CD pipelines using Jenkins.
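A minimal Structured Streaming sketch of the pipeline style described above, continuously writing incoming JSON into a Delta table; the paths and schema are assumptions, and a Delta-enabled runtime (e.g., Databricks) is presumed.

```python
# Hedged sketch: incremental file stream into Delta with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("amount", DoubleType()),
])

# Pick up new JSON files as they land in the raw zone.
events = (spark.readStream
          .schema(schema)
          .json("/mnt/raw/events/"))

# Append to a Delta table; the checkpoint gives restartable, exactly-once
# sink semantics if the stream fails and is resumed.
query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/events/")
         .outputMode("append")
         .start("/mnt/delta/events/"))

query.awaitTermination()
```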
Posted 2 months ago
7.0 - 12.0 years
6 - 16 Lacs
Bengaluru
Remote
5+ years' experience with strong SQL query/development skills. Hands-on experience with ETL tools. Experience working in the healthcare industry with PHI/PII.
Posted 2 months ago
4.0 - 7.0 years
5 - 13 Lacs
Hyderabad
Hybrid
Summary:
- Design, develop, and implement scalable batch/real-time data pipelines (ETLs) to integrate data from a variety of sources into the Data Warehouse and Data Lake.
- Design and implement data model changes that align with warehouse dimensional modeling standards.
- Proficient in Data Lake and Data Warehouse concepts and dimensional data models.
- Responsible for maintenance and support of all database environments; design and develop data pipelines, workflows, and ETL solutions in both on-prem and cloud-based environments.
- Design and develop SQL stored procedures, functions, views, and triggers (see the sketch after this posting).
- Design, code, test, document, and troubleshoot deliverables; collaborate with others to test and resolve issues with deliverables.
- Maintain awareness of and ensure adherence to Zelis standards regarding privacy.
- Create and maintain design documents, source-to-target mappings, unit test cases, and data seeding.
- Perform data analysis and data quality tests and create audits for the ETLs.
- Perform continuous integration and deployment using Azure DevOps and Git.

Requirements:
- 3+ years with the Microsoft BI stack (SSIS, SSRS, SSAS).
- 3+ years of data engineering experience, including data analysis.
- 3+ years programming SQL objects (procedures, triggers, views, functions) in SQL Server, with experience optimizing SQL queries.
- Advanced understanding of T-SQL, indexes, stored procedures, triggers, functions, views, etc.
- Experience designing and implementing a data warehouse.
- Working knowledge of Azure/AWS architecture and Data Lake.
- Must be detail oriented and able to work under limited supervision.
- Must demonstrate good analytical skills for data identification and mapping, and excellent oral communication skills.
- Must be flexible, able to multi-task, and able to work within deadlines; team-oriented, but also able to work independently.

Preferred Skills:
- Experience working with an ETL tool (DBT preferred).
- Experience designing and developing Azure/AWS Data Factory pipelines.
- Working understanding of columnar MPP cloud data warehouses such as Snowflake.
- Working knowledge of managing data in the Data Lake.
- Business analysis experience to analyze data, write code, and drive solutions.
- Working knowledge of Git, Azure DevOps, Agile, Jira, and Confluence.
- Healthcare and/or payment processing experience.

Independence/Accountability: Requires minimal daily supervision. Receives detailed instruction on new assignments and determines next steps with guidance. Regularly reviews goals and objectives with the supervisor. Demonstrates competence in relevant job responsibilities, which allows for an increasing level of independence. Ability to manage and prioritize multiple tasks, work under pressure, and meet deadlines.

Problem Solving: Makes logical suggestions of likely causes of problems and independently suggests solutions. Excellent organizational skills are required to prioritize responsibilities and complete work in a timely fashion. Outstanding ability to multiplex tasks as required. Excellent project management and/or business analysis skills. Attention to detail and concern for impact are essential.
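To make the "SQL objects" requirement concrete, here is a hedged sketch that creates and calls a simple SQL Server stored procedure from Python via pyodbc; the connection string, table, and procedure names are illustrative, not an actual Zelis schema.

```python
# Illustrative only: create and invoke a stored procedure on SQL Server.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=your_server;DATABASE=your_dw;Trusted_Connection=yes;"
)
cur = conn.cursor()

# A minimal procedure returning rows loaded on a given date
# (SQL Server 2016+ supports CREATE OR ALTER).
cur.execute("""
CREATE OR ALTER PROCEDURE dbo.usp_ClaimsByLoadDate
    @LoadDate DATE
AS
BEGIN
    SET NOCOUNT ON;
    SELECT ClaimId, Amount, LoadDate
    FROM dbo.FactClaims
    WHERE LoadDate = @LoadDate;
END
""")
conn.commit()

# Parameterised call, as an ETL audit step might issue it.
cur.execute("EXEC dbo.usp_ClaimsByLoadDate @LoadDate = ?", "2024-01-01")
for row in cur.fetchall():
    print(row)
```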
Posted 2 months ago
7.0 - 12.0 years
25 - 40 Lacs
Gurugram
Remote
Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
Experience: 7 to 10 years of relevant experience
Shift: 6:30 pm to 2:30 am IST

Job Purpose: The Senior Data Engineer designs, builds, and maintains scalable data pipelines and architectures to support the Denials AI workflow under the guidance of the Team Lead, Data Management. This role ensures data is reliable, compliant with HIPAA, and optimized.

Duties & Responsibilities:
- Collaborate with the Team Lead and cross-functional teams to gather and refine data requirements for Denials AI solutions.
- Design, implement, and optimize ETL/ELT pipelines using Python, Dagster, DBT, and AWS data services (Athena, Glue, SQS); see the Dagster sketch after this posting.
- Develop and maintain data models in PostgreSQL; write efficient SQL for querying and performance tuning.
- Monitor pipeline health and performance; troubleshoot data incidents and implement preventive measures.
- Enforce data quality and governance standards, including HIPAA compliance for PHI handling.
- Conduct code reviews, share best practices, and mentor junior data engineers.
- Automate deployment and monitoring tasks using infrastructure-as-code and AWS CloudWatch metrics and alarms.
- Document data workflows, schemas, and operational runbooks to support team knowledge transfer.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of hands-on experience building and operating production-grade data pipelines.
- Solid experience with workflow orchestration tools (Dagster) and transformation frameworks (DBT), or similar tools such as Microsoft SSIS, AWS Glue, or Airflow.
- Strong SQL skills on PostgreSQL for data modeling and query optimization, or on similar technologies (Microsoft SQL Server, Oracle, AWS RDS).
- Working knowledge of AWS data services: Athena, Glue, SQS, SNS, IAM, and CloudWatch.
- Basic proficiency in Python and Python data frameworks (Pandas, PySpark).
- Experience with version control (GitHub) and CI/CD for data projects.
- Familiarity with healthcare data standards and HIPAA compliance.
- Excellent problem-solving skills, attention to detail, and ability to work independently.
- Strong communication skills, with experience mentoring or leading small technical efforts.
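Since this posting names Dagster and DBT specifically, here is a minimal Dagster sketch of two dependent software-defined assets; the asset names and toy records are hypothetical, and a real deployment would hand the transformation off to DBT models over PostgreSQL/Athena.

```python
# Hedged sketch: two Dagster assets, one feeding the other.
from dagster import asset, materialize

@asset
def raw_denials() -> list[dict]:
    # Placeholder extract: in practice, pulled from the source system
    # (e.g., via AWS Glue/Athena) into staging.
    return [{"claim_id": "C1", "status": "denied"},
            {"claim_id": "C2", "status": "paid"}]

@asset
def denied_claims(raw_denials: list[dict]) -> list[dict]:
    # Placeholder transform: keep only denials for the downstream AI flow.
    # Dagster wires the dependency from the parameter name.
    return [r for r in raw_denials if r["status"] == "denied"]

if __name__ == "__main__":
    # Local materialisation; production would run under a Dagster
    # deployment with schedules, sensors, and CloudWatch alarms.
    result = materialize([raw_denials, denied_claims])
    assert result.success
```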
Posted 2 months ago
10.0 - 17.0 years
25 - 30 Lacs
Mumbai, Thane
Work from Office
Manage end-to-end deliveries for the data engineering, EDW, and Data Lake platforms. Data modelling. 3+ years' experience writing complex SQL queries, procedures, views, functions, and database objects. Minimum 3 years' experience required in cloud computing.
Posted 2 months ago
6.0 - 10.0 years
12 - 20 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Role & responsibilities (6+ years of experience required)

Job Description: Enterprise Business Technology is on a mission to support and create enterprise software for our organization. We're a highly collaborative team that interlocks with corporate functions such as Finance and Product teams to deliver value with innovative technology solutions. Each day, thousands of people rely on Enlyte's technology and services to help their customers during challenging life events. We're looking for a remote Senior Data Analytics Engineer for our Corporate Analytics team.

Opportunity: Technical lead for our corporate analytics practice using dbt, Dagster, Snowflake, Power BI, SQL, and Python.

Responsibilities:
- Build our data pipelines for our data warehouse in Python, working with APIs to source data.
- Build Power BI reports and dashboards associated with this process.
- Contribute to our strategy for new data pipelines and data engineering approaches.
- Maintain a medallion-based architecture for data analysis with Kimball.
- Participate in daily scrum calls and follow the agile SDLC.
- Create meaningful documentation of your work.
- Follow organizational best practices for dbt and write maintainable code.

Qualifications:
- 5+ years of professional experience as a Data Engineer.
- Strong dbt experience (3+ years) and knowledge of the modern data stack.
- Strong experience with Snowflake (3+ years).
- Experience using Dagster and running complex pipelines (1+ year).
- Some Python experience; experience with Git and Azure DevOps.
- Experience with data modeling in Kimball and medallion-based structures.
Posted 2 months ago
4.0 - 8.0 years
7 - 17 Lacs
Chennai
Work from Office
Dear Candidate,
We have a walk-in drive happening for the Big Data Developer position this Saturday.
Skill: Big Data Developer
Primary Skills: Python/PySpark OR Python/Scala
Experience: 4-8 yrs
Location: Chennai
Notice period: Immediate to 15 days only
Mode of Discussion: F2F
Date of Interview: 5-Jul-25 (Saturday)
Timing: 9:30 AM
Venue: Aspire Systems office, Siruseri
If interested, kindly share your resume to saranya.raghu@aspiresys.com
Regards,
Saranya Raghu
Posted 2 months ago
6.0 - 11.0 years
8 - 16 Lacs
Hyderabad, Pune, Chennai
Hybrid
Data Engineer with good experience in Azure Databricks and Python.
Must have: Databricks, Python, Azure.
Good to have: ADF.
The candidate must be proficient in Databricks.
Posted 2 months ago
10.0 - 18.0 years
20 - 35 Lacs
Pune, Chennai/Gurgaon, Hyderabad/Bengaluru
Work from Office
Looking for a Data Engineer.
Skills: Data Engineer, AWS, GCP
Notice Period: 0-30 days
Location: Hyderabad, Bangalore, Chennai, Pune, Gurgaon
Posted 2 months ago
6.0 - 9.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Cloud, Artificial Intelligence, Data Engineering
Skills to Evaluate: Cloud, Artificial Intelligence, Data Engineering
Experience: 6 to 10 Years
Location: Bengaluru

Job Summary: We are looking for an experienced Cloud AI and Data Engineer with a strong background in cloud-native data solutions, AI/ML engineering, and emerging Generative AI (GenAI) technologies. The ideal candidate will have 6-8 years of hands-on experience in building robust data platforms, deploying scalable ML models, and integrating GenAI solutions across cloud environments.

Key Responsibilities:
- Build and maintain scalable data pipelines and infrastructure for AI and analytics using cloud-native tools (e.g., AWS Glue, Azure Data Factory, GCP Dataflow).
- Design and implement production-ready GenAI applications using services like Amazon Bedrock, Azure OpenAI, or Google Vertex AI.
- Develop and deploy AI/ML models, including transformer-based and LLM (Large Language Model) solutions.
- Integrate GenAI with enterprise workflows using APIs, orchestration layers, and retrieval-augmented generation (RAG) patterns (a schematic sketch follows this posting).
- Collaborate with data scientists, product managers, and platform teams to operationalize AI-driven insights and GenAI capabilities.
- Build prompt engineering frameworks, evaluate output quality, and optimize token usage and latency for GenAI deployments.
- Set up monitoring, drift detection, and governance mechanisms for both traditional and GenAI models.
- Implement CI/CD pipelines for data and AI solutions with automated testing and rollback strategies.
- Ensure cloud solutions adhere to data privacy, security, and regulatory compliance standards.

Required Skills & Qualifications:
- 6-8 years of experience in data engineering or machine learning engineering in cloud environments (AWS, Azure, or GCP).
- Proficiency in Python and SQL; familiarity with PySpark, Java, or Scala is a plus.
- Experience working with GenAI models such as GPT, Claude, or custom LLMs via cloud services (e.g., Bedrock, Azure OpenAI, Hugging Face).
- Hands-on with prompt design, fine-tuning, vector stores (e.g., FAISS, Pinecone), and knowledge base integrations.
- Experience with MLOps and LLMOps tools (e.g., MLflow, LangChain, SageMaker Pipelines, Weights & Biases).
- Solid understanding of containerization (Docker), orchestration (Kubernetes), and microservices.
- Knowledge of data lake/warehouse platforms such as S3, Snowflake, BigQuery, or Redshift.
- Familiarity with governance frameworks, access control, and responsible AI practices.

Preferred Qualifications:
- Certifications in cloud AI/ML platforms (e.g., AWS Certified Machine Learning, Azure AI Engineer).
- Experience building RAG systems, vector database search, and multi-turn conversational agents.
- Exposure to real-world GenAI use cases like code generation, chatbots, document summarization, or knowledge extraction.
- Knowledge of OpenAPI, JSON schema validation, and API lifecycle tools.
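To ground the retrieval-augmented generation (RAG) pattern this role centres on, here is a deliberately schematic sketch: a toy hash-based embedding and brute-force cosine retrieval stand in for a real embedding model and vector store (FAISS, Pinecone), so the example runs anywhere without credentials. The documents and question are illustrative.

```python
# Schematic RAG core: embed, retrieve top-k, assemble a grounded prompt.
import math
from collections import Counter

def embed(text: str, dims: int = 64) -> list[float]:
    # Toy bag-of-words hash embedding; a real system would call a cloud
    # embedding model (Bedrock, Azure OpenAI, Vertex AI).
    vec = [0.0] * dims
    for token, count in Counter(text.lower().split()).items():
        vec[hash(token) % dims] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

documents = [
    "Snowflake stores table data in compressed micro-partitions.",
    "Kubernetes orchestrates containerised workloads.",
    "Airflow schedules and monitors batch data pipelines.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(question: str, k: int = 1) -> list[str]:
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "What schedules batch data pipelines?"
context = "\n".join(retrieve(question))
# The assembled prompt would be sent to the LLM; printed here instead.
print(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```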
Posted 2 months ago
3.0 - 6.0 years
15 - 20 Lacs
Bengaluru
Hybrid
Description:
Role: Data Engineer/ETL Developer - Talend/Power BI

Job Description:
1. Study, analyze, and understand business requirements in the context of business intelligence, and provide end-to-end solutions.
2. Design and implement ETL pipelines with data quality and integrity across platforms like Talend Enterprise and Informatica.
3. Load data from heterogeneous sources like Oracle, MS SQL, file systems, FTP services, REST APIs, etc.
4. Design and map data models to shift raw data into meaningful insights and build a data catalog.
5. Develop strong data documentation about algorithms, parameters, and models.
6. Analyze previous and present data for better decision making.
7. Make essential technical changes to improve present business intelligence systems.
8. Optimize ETL processes for improved performance; monitor ETL jobs and troubleshoot issues.
9. Lead and oversee team deliverables, ensuring best practices are followed for development.
10. Participate in/lead requirements gathering and analysis.

Required Skillset and Experience:
1. Overall up to 3 years of working experience, preferably in SQL and ETL (Talend).
2. Must have 1+ years of experience in Talend Enterprise/Open Studio and related tools like Talend API, Talend Data Catalog, TMC, TAC, etc.
3. Must have an understanding of database design and data modeling.
4. Hands-on experience in a coding language (Java, Python, etc.).

Secondary Skillset/Good to Have:
1. Experience in a BI tool like MS Power BI.
2. Ability to use Power BI to build interactive and visually appealing dashboards and reports.

Required Personal & Interpersonal Skills:
• Strong analytical skills.
• Good communication skills, both written and verbal.
• Highly motivated and result-oriented.
• Self-driven, independent work ethic that drives internal and external accountability.
• Ability to interpret instructions to executives and technical resources.
• Advanced problem-solving skills for complex distributed applications.
• Experience working in a multicultural environment.
Posted 2 months ago
5.0 - 10.0 years
8 - 17 Lacs
Coimbatore
Work from Office
Position: Data Engineer
Experience: 5-10 years
Location: Coimbatore (WFO)
Notice period: Immediate
Job Type: Full time
Skills: Data Engineer, Spark, Scala, Python, Big Data

Job Description:
- Experience in Big Data technologies (Hadoop, Spark, Nifi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
- Solid understanding of batch and streaming data processing techniques.
- Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion.
- Expert-level ability to write complex, optimized SQL queries across extensive data volumes.
- Experience with HDFS, Nifi, and Kafka.
- Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB.
- Familiarity with Agile methodologies.
- Obsession with service observability, instrumentation, monitoring, and alerting.
- Knowledge of or experience in architectural best practices for building data lakes.
Posted 2 months ago
8.0 - 12.0 years
12 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
Role & responsibilities:
- 8-12 years of professional work experience in a relevant field.
- Proficient in Azure Databricks, ADF, Delta Lake, SQL Data Warehouse, Unity Catalog, MongoDB, and Python.
- Experience with or prior knowledge of semi-structured data, Structured Streaming, Azure Synapse Analytics, data lakes, and data warehouses.
- Proficient in creating Azure Data Factory pipelines for ETL/ELT processing: copy activity, custom Azure development, etc.
- Lead a technical team of 4-6 resources.
- Prior knowledge of Azure DevOps and CI/CD processes, including GitHub.
- Good knowledge of SQL and Python for data manipulation, transformation, and analysis; knowledge of Power BI would be beneficial.
- Understand business requirements to set functional specifications for reporting applications.
Posted 2 months ago
5.0 - 10.0 years
20 - 30 Lacs
Pune
Work from Office
Role & responsibilities: Ideally, we are looking for a 60:40 mix, with stronger capabilities on the data engineering side, along with working knowledge of machine learning and data science concepts, especially candidates who can pick up tasks in Agentic AI, OpenAI, and related areas as required in the future.
Posted 2 months ago
6.0 - 11.0 years
10 - 20 Lacs
Hyderabad, Bangalore Rural, Bengaluru
Work from Office
We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 8+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Must have exposure to working with Airflow.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
Posted 2 months ago
4.0 - 6.0 years
5 - 15 Lacs
Bengaluru
Work from Office
Dear Candidate,
We are hiring Data Engineers for one of our prestigious clients, a product-based company.

Job Details:
Position: Data Engineer
Work Location: Bangalore
Experience: 4 to 7 years
Shift: Day shift
Qualification: B.E / B.Tech / M.Tech
Notice Period: Immediate to 20 days
Interview Mode: Only face-to-face (F2F) interviews in Bangalore for technical discussions

Job Description:
- 4 to 6 years of experience in data engineering and integration projects.
- Hands-on experience with cloud-based data integration platforms (IICS CDI on the IDMC platform) - must have.
- Exposure to various source systems such as SFDC, Marketo, Azure, AWS, and relational databases like Oracle and SQL Server - good to have.
- Intermediate skills in Unix shell scripting and Python.
- Working knowledge of Jupyter Notebooks, Databricks, ADLS Gen2, and SQL Data Warehouse.
- Strong understanding of data modeling, including conceptual, logical, and physical models.
- Experience in Change Data Capture (CDC) and Slowly Changing Dimensions (SCD Type 1 & 2); see the sketch after this posting.
- Advanced SQL/PLSQL skills; must be able to work with complex queries.
- Proven track record of delivering high-quality technical solutions.
- Exposure to Agile/Scrum methodologies - nice to have.
- Knowledge of Star and Snowflake schemas, and experience with modeling tools like Erwin and Visio.
- Strong communication skills and ability to work in a global onshore-offshore model.
- Self-driven, organized, and capable of handling multiple priorities in a dynamic environment.

To Apply: Please share your updated resume to mary@jyopa.com with the following details:
- Current Company
- Current Location
- CTC
- Expected CTC
- Notice Period

Note: Only candidates who are available for face-to-face interviews in Bangalore will be considered.

Regards,
Reshma Mary L.
6361518594 (WhatsApp)
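The SCD Type 2 item above has a small, checkable core: expire the current row and insert a new version when a tracked attribute changes. This plain-Python sketch over a hypothetical customer dimension shows the logic; in a warehouse the same close-and-insert step is usually a single MERGE statement.

```python
# Minimal SCD Type 2: close the current version, append the new one.
from datetime import date

dim_customer = [
    {"customer_id": 1, "city": "Pune", "valid_from": date(2023, 1, 1),
     "valid_to": None, "is_current": True},
]

def apply_scd2(dim: list[dict], incoming: dict, as_of: date) -> None:
    current = next((r for r in dim
                    if r["customer_id"] == incoming["customer_id"]
                    and r["is_current"]), None)
    if current and current["city"] == incoming["city"]:
        return  # tracked attribute unchanged; nothing to do
    if current:
        current["valid_to"] = as_of   # expire the old version
        current["is_current"] = False
    dim.append({**incoming, "valid_from": as_of,
                "valid_to": None, "is_current": True})

apply_scd2(dim_customer, {"customer_id": 1, "city": "Bengaluru"},
           date(2024, 6, 1))
for row in dim_customer:
    print(row)  # one expired Pune row, one current Bengaluru row
```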
Posted 2 months ago
5.0 - 10.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
Posted 2 months ago
5.0 - 7.0 years
15 - 25 Lacs
Pune, Bengaluru
Hybrid
Job Role & responsibilities:
- Responsible for architecting, designing, building, and deploying data systems, pipelines, etc.
- Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
- Responsible for design, implementation, development, and migration.
- Migrate data from traditional database systems to the cloud environment.
- Architect and implement ETL and data movement solutions.

Technical skills, qualifications & experience required:
- 5-7 years of experience in data engineering: Azure cloud data engineering, Azure Databricks, Data Factory, PySpark, SQL, Python.
- Hands-on experience with Azure Databricks, Data Factory, PySpark, and SQL.
- Proficient in cloud services (Azure).
- Strong hands-on experience working with streaming datasets.
- Hands-on expertise in data refinement using PySpark and Spark SQL.
- Familiarity with building datasets using Scala.
- Familiarity with tools such as Jira and GitHub.
- Experience leading agile scrum, sprint planning, and review sessions.
- Good communication and interpersonal skills.
- Comfortable working in a multidisciplinary team within a fast-paced environment.
* Immediate joiners will be preferred.
Posted 2 months ago