3.0 - 5.0 years
5 - 9 Lacs
bengaluru
Work from Office
Technical / Behavioral - You must have a proven track record of working in collaborative teams to deliver high-quality data solutions in a multi-developer agile environment, following design and coding best practices. You must be an expert in SQL, with UNIX shell and Python scripting skills. You should have experience using AWS services such as EC2, S3, and EMR to move data onto a cloud platform; experience with an enterprise data lake strategy and with Snowflake on AWS is a big plus. Knowledge of developing ELT/ETL pipelines to move data to and from a Snowflake data store using a combination of Python and Snowflake SnowSQL is also a big plus. You should have experience in SQL performance optimization for large data volumes, proven analytical and problem-solving skills, and a strong grounding in database and data warehousing concepts. You must be able to work independently in a globally distributed environment and have superior SQL and data modeling skills, with experience performing deep data analysis on multiple database platforms such as Oracle or Snowflake. You should have experience working with DevOps tools for code migrations. Working experience with Control-M or similar scheduling tools is nice to have, as is adequate knowledge of DevOps, JIRA, and Agile practices. How Your Work Impacts the Organization: Cloud enablement and a data model ready for analytics.
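To make the Python-plus-SnowSQL pipeline work described above concrete, here is a minimal sketch using the snowflake-connector-python package; the account, stage, and table names are placeholders rather than details from the posting.

```python
# Minimal sketch: stage a local extract, load it into Snowflake, and transform
# it in-database. Assumes snowflake-connector-python and a pre-created stage.
# Account credentials and object names are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",          # placeholder
    user="etl_user",               # placeholder
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Upload a local extract to an internal stage.
    cur.execute("PUT file:///tmp/orders.csv @raw_stage AUTO_COMPRESS=TRUE")
    # Copy the staged file into the raw table.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @raw_stage/orders.csv.gz
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # Simple ELT step: transform inside Snowflake rather than in Python.
    cur.execute("""
        CREATE OR REPLACE TABLE CURATED.DAILY_ORDER_TOTALS AS
        SELECT order_date, SUM(amount) AS total_amount
        FROM RAW.ORDERS
        GROUP BY order_date
    """)
finally:
    conn.close()
```

The transformation runs inside Snowflake (ELT) rather than in Python, which is the usual reason SnowSQL statements end up paired with a thin Python driver script.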
Posted 2 weeks ago
7.0 - 10.0 years
25 - 40 Lacs
pune
Hybrid
Job Title: Senior Data Engineer. Work location: Pune. Let me tell you about the role: a Data Engineer designs and builds scalable data management systems that support application simplification efforts. They develop and maintain databases and large-scale processing systems to enable efficient data collection, analysis, and integration. Key responsibilities include ensuring data accuracy for modeling and analytics, optimizing data pipelines for scalability, and collaborating with data scientists to drive data-driven decision-making. They play a crucial role in supporting the team's goals by enabling seamless access to reliable data for simplification initiatives. What you will deliver: As part of a cross-disciplinary team, you will collaborate with data engineers, software engineers, data scientists, data managers, and business partners to architect, design, implement, and maintain reliable and scalable data infrastructure for moving, processing, and serving data. You will write, deploy, and maintain software to build, integrate, manage, and assure the quality of data. Adhering to software engineering best practices, you will ensure technical design, unit testing, monitoring, code reviews, and documentation are followed. You will also ensure the deployment of secure, well-tested software that meets privacy and compliance standards, and improve the CI/CD pipeline. Additionally, you will be responsible for service reliability, on-call rotations, SLAs, and maintaining infrastructure as code. By containerizing server deployments and mentoring others, you'll actively contribute to improving developer velocity. What you will need to be successful (experience and qualifications): Essential: Deep, hands-on expertise in designing, building, and maintaining scalable data infrastructure and products in complex environments; development experience in object-oriented programming languages (e.g., Python, Scala, Java, C#); advanced knowledge of databases and SQL; experience designing and implementing large-scale distributed data systems; strong understanding of technologies across all stages of the data lifecycle; ability to manage stakeholders effectively and lead initiatives through technical influence; a continuous learning approach; and a BS degree in computer science or a related field (or equivalent experience). Desired: No prior experience in the energy industry is required. About bp: Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner! We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
As a skilled professional in the field of cloud computing and data engineering, you will be responsible for a variety of tasks related to managing data assets, developing ETL pipelines, and exploring machine learning capabilities. Your expertise in technologies such as Git, Kubernetes, Docker, and Helm will be essential in ensuring the smooth functioning of cloud infrastructure on platforms like AWS and Azure. Your role will involve the evaluation and analysis of data using tools like AWS QuickSight, as well as the deployment of machine learning models through AWS MLOps. Proficiency in AWS services such as Glue, QuickSight, SageMaker, Lambda, and Step Functions will be crucial for success in this position. Additionally, you should have a basic understanding of machine learning concepts and MLOps practices. Experience with CI/CD pipelines, DevOps practices, and Infrastructure-as-Code tools like Terraform will be advantageous in this role. A strong grasp of cloud security principles and best practices is essential, along with knowledge of network architecture including VPNs, firewalls, and load balancers. Overall, your expertise in cloud infrastructure management, data engineering, and machine learning deployment will play a key role in driving success in this position.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
As a Lead Data Engineer with over 7 years of experience, you will be responsible for designing, developing, and maintaining data pipelines, ETL processes, and data warehouses. You will be based in Hyderabad and should be able to join immediately. Your primary skills should include proficiency in SQL, Python, PySpark, AWS, Airflow, Snowflake, and DBT. Additionally, you should have a Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field. In this role, you will need a minimum of 4+ years of hands-on experience in data engineering, ETL, and data warehouse development. You should have expertise in ETL tools such as Informatica Power Center or IDMC, as well as strong programming skills in Python and PySpark for efficient data processing. Your responsibilities will also involve working with cloud-based data platforms like AWS Glue, Snowflake, Databricks, or Redshift. Proficiency in SQL and experience with RDBMS platforms like Oracle, MySQL, and PostgreSQL are essential. Familiarity with data orchestration tools like Apache Airflow will be advantageous. From a technical perspective, you are expected to have advanced knowledge of data warehousing concepts, data modeling, schema design, and data governance. You should be capable of designing and implementing scalable ETL pipelines and have experience with cloud infrastructure for data storage and processing on platforms such as AWS, Azure, or GCP. In addition to technical skills, soft skills are equally important. You should possess excellent communication and collaboration skills, be able to lead and mentor a team of engineers, and demonstrate strong problem-solving and analytical thinking abilities. The ability to manage multiple projects and prioritize tasks effectively is crucial. Preferred qualifications for this role include experience with machine learning workflows and data science tools, certifications in AWS, Snowflake, Databricks, or relevant data engineering technologies, as well as familiarity with Agile methodologies and DevOps practices.
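As a rough illustration of how the Airflow piece of this stack typically fits together, the sketch below wires a three-step ETL into a daily DAG (Airflow 2.x); the DAG id, schedule, and task bodies are assumptions, not part of the posting.

```python
# Minimal Airflow DAG sketch: extract, transform, load as three ordered tasks.
# DAG id, schedule, and the callables are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder for a real extract step (e.g., pulling from an RDBMS or S3).
    print("extracting source data")


def transform(**context):
    # Placeholder for a PySpark or pandas transformation step.
    print("transforming data")


def load(**context):
    # Placeholder for loading curated data into the warehouse.
    print("loading into the warehouse")


with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```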
Posted 2 weeks ago
13.0 - 20.0 years
35 - 45 Lacs
pune, ahmedabad
Hybrid
Role & responsibilities: We are seeking a skilled Lead Data Engineer to join our dynamic team. The Data Engineer will be responsible for designing, developing, and maintaining our data pipelines, integrations, and data warehouse infrastructure. The successful candidate will work closely with data scientists, analysts, and business stakeholders to ensure that our data is accurate, secure, and accessible for all users. Responsibilities: Design and build scalable data pipeline architecture that can handle large volumes of data; develop ELT/ETL pipelines to extract, load, and transform data from various sources into our data warehouse; optimize and maintain the data infrastructure to ensure high availability and performance; collaborate with data scientists and analysts to identify and implement improvements to our data pipelines and models; develop and maintain data models to support business needs; ensure data security and compliance with data governance policies; identify and troubleshoot data quality issues; automate and streamline processes related to data management; stay up to date with emerging data technologies and trends to ensure the continuous improvement of our data infrastructure and architecture; analyze data products and requirements to align with the data strategy; assist in extracting or researching data for cross-functional business partners across consumer insights, supply chain, and finance teams; enhance the efficiency, automation, and accuracy of existing reports; and follow best practices in data querying and manipulation to ensure data integrity. Requirements: Bachelor's or Master's degree in Computer Science, Data Science, or a related field; 13+ years of experience as a Snowflake Data Engineer or in a related role; experience with Snowflake, including strong experience building, maintaining, and documenting data pipelines; expertise in Snowflake concepts such as RBAC management, virtual warehouses, file formats, streams, zero-copy clone, and time travel, and an understanding of how to use these features; strong SQL development experience, including SQL queries and stored procedures; strong knowledge of ELT/ETL no-code/low-code tools like Informatica or SnapLogic; well-versed in data standardization, cleansing, enrichment, and modeling; proficiency in one or more programming languages such as Python, Java, or C#; experience with cloud computing platforms such as AWS, Azure, or GCP; knowledge of ELT/ETL processes, data warehousing, and data modeling; familiarity with data security and governance best practices; excellent hands-on problem-solving and analytical skills, with experience improving the performance of processes; and strong communication and collaboration skills.
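A rough sketch of the Snowflake features named in the requirements (zero-copy clone, time travel, and streams), issued through the Python connector; every object name and credential here is a placeholder.

```python
# Sketch of Snowflake zero-copy clone, time travel, and streams,
# issued through snowflake-connector-python. Names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="de_user", password="***",      # placeholders
    warehouse="DEV_WH", database="ANALYTICS", schema="CURATED",
)

try:
    cur = conn.cursor()

    # Zero-copy clone: an instant, storage-free copy for dev/testing.
    cur.execute("CREATE OR REPLACE TABLE ORDERS_DEV CLONE ORDERS")

    # Time travel: query the table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM ORDERS AT(OFFSET => -3600)")
    print("row count one hour ago:", cur.fetchone()[0])

    # Stream: capture inserts/updates/deletes for incremental loads.
    cur.execute("CREATE STREAM IF NOT EXISTS ORDERS_STREAM ON TABLE ORDERS")
    cur.execute("SELECT * FROM ORDERS_STREAM LIMIT 10")
finally:
    conn.close()
```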
Posted 2 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
bengaluru
Hybrid
We're seeking a Senior Data Engineer to join our client, a world-class education consulting firm, in building robust data solutions that drive innovation and scalability. In this role, you'll design and optimize high-performance data pipelines, ensuring seamless data flow to support advanced analytics and business intelligence. Collaborating with cross-functional teams, you'll enhance data models, improve accessibility, and empower data-driven decision-making across a global organization. Location: Bengaluru (Remote Work Available for the right candidate) Company: A Globally Renowned Education Consulting Leader Work Flexibility: While based in Bengaluru, this role offers the flexibility to work remotely, blending autonomy with impactful collaboration. Key Responsibilities: Architect, develop, and maintain high-performance data pipelines and ETL processes on AWS to drive analytics and ML solutions. Leverage AWS services (RDS, Glue, Lambda, S3, SageMaker) to efficiently manage and process large-scale datasets. Integrate DevOps best practices into data workflows, including CI/CD, Infrastructure as Code (IaC), and automation for scalable deployments. Develop and manage IaC templates (Terraform, CloudFormation) to streamline AWS resource provisioning for data and ML workloads. Implement robust monitoring, logging, and alerting systems to ensure pipeline reliability, performance, and data integrity. Qualifications & Skills required for the role: 5+ years of hands-on experience in ETL/ELT processes, data pipeline development, and cloud-based data solutions. AWS Proficiency: Deep expertise in S3, Glue, Lambda, SageMaker, RDS, and related services. Programming: Advanced Python skills, including Pandas, PySpark, and scripting for automation. DevOps & IaC: Experience with CI/CD pipelines, Terraform, CloudFormation, and monitoring tools (CloudWatch). Data Architecture: Strong knowledge of data warehousing, ETL/ELT frameworks, and scalable data modeling. Collaboration & Agile: Excellent communication skills with a track record of working in cross-functional Agile teams. Problem-Solving: Sharp analytical abilities to troubleshoot complex data challenges. Execution & Adaptability: Strong organizational skills with the ability to deliver under pressure and meet tight deadlines. If you are keen and available, please email your CV to suresh@techinduct.com. Otherwise, kindly refer someone for the role and win a referral fee.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
thane, maharashtra
On-site
As a Data Engineer with 4 to 7 years of experience and a budget range of 25 to 30 LPA, you will be responsible for handling the flow and storage of data efficiently to ensure that the model has access to the right datasets. Your role will involve tasks such as Database Design, where you will create efficient database systems using SQL, NoSQL, Hadoop, and Spark to store large volumes of transactional or behavioral data. You will also be involved in ETL Pipelines, building pipelines to collect, transform, and load data from various sources into a format suitable for analysis. Data Integration will be another key aspect of your responsibilities, where you will integrate different data sources, including transactional data, customer behavior, and external data sources. Proficiency in tools and technologies such as Apache Spark, Kafka, SQL, NoSQL databases (e.g., MongoDB), and cloud data services (AWS, Azure) will be essential for this role. Additionally, candidates with knowledge of UX/UI tools like Sketch, Adobe XD, Tableau, and Power BI will be considered advantageous as they can help in building interfaces to visualize fraud prediction and alerts, along with creating user-friendly dashboards.
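For the real-time side described above (Kafka feeding Spark), a minimal Structured Streaming sketch might look like the following; the broker, topic, schema, and S3 paths are assumptions, and the job needs the Spark-Kafka connector package on the classpath.

```python
# Minimal Spark Structured Streaming sketch: read transaction events from
# Kafka, parse JSON, and append them to a data lake path. Names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("txn-stream").getOrCreate()

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("customer_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")    # placeholder broker
    .option("subscribe", "transactions")                 # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://data-lake/transactions/")              # placeholder
    .option("checkpointLocation", "s3a://data-lake/_chk/transactions/")
    .start()
)
query.awaitTermination()
```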
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As a Senior Azure Data Engineer based in Chennai, India, you will play a crucial role in designing and implementing data pipelines using Azure Synapse to integrate data from various sources and file formats into SQL Server databases. Your responsibilities will include developing batch and real-time data pipelines to facilitate the transfer of data to data warehouses and data lakes. Collaborating with the Data Architect, you will work on new data projects by constructing data pipelines and managing master data. Your expertise in data analysis, extraction, cleansing, column-mapping, data transformations, and data modeling will be essential in meeting business requirements. You will be tasked with ensuring data availability on Azure SQL Data Warehouse by monitoring and troubleshooting data pipelines effectively. To excel in this role, you must have a minimum of 3 years of experience in designing and developing ETL pipelines using Azure Synapse or Azure Data Factory. Proficiency in Azure services such as ADLS2, Databricks, Azure SQL, and Logic Apps is required. Your strong implementation skills in PySpark and advanced SQL will be instrumental in achieving efficient data transformations. Experience in handling structured, semi-structured, and unstructured data formats is a must, along with a clear understanding of data warehouse and data lake modeling and ETL performance optimization. Additional skills that would be beneficial include working knowledge of consuming APIs in ETL pipelines, familiarity with Power BI, and experience in Manufacturing Data Analytics & Reporting. A degree in information technology, Computer Science, or related disciplines is preferred. Join our global, inclusive, and diverse team dedicated to enhancing the quality of life through innovative motion systems. At our company, we value diversity, knowledge, skills, creativity, and talents that each employee brings. We are committed to fostering an inclusive, diverse, and equitable workplace where employees feel respected and valued, irrespective of their background. Our goal is to inspire our employees to grow, take ownership, and derive fulfillment and meaning from their work.
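As an illustration of the PySpark transformation work this role describes, here is a small batch job that reads raw files from ADLS Gen2 and writes a curated Delta table; the storage account, container, and column names are placeholders, and Delta output assumes a Databricks or Synapse Spark runtime.

```python
# Sketch of a PySpark batch transformation on Azure: read raw CSVs from
# ADLS Gen2, clean and aggregate, and write a curated Delta table.
# Storage account, container, and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("sales-curation").getOrCreate()

raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/"          # placeholder
curated_path = "abfss://curated@mystorageacct.dfs.core.windows.net/sales_daily/"

# Read raw files, cast types, and drop rows missing key fields.
sales = (
    spark.read.option("header", True).csv(raw_path)
    .withColumn("amount", col("amount").cast("double"))
    .withColumn("sale_date", to_date(col("sale_date")))
    .dropna(subset=["sale_date", "amount"])
)

# Aggregate to a daily, per-region total for downstream reporting.
daily = (
    sales.groupBy("sale_date", "region")
    .sum("amount")
    .withColumnRenamed("sum(amount)", "total_amount")
)

daily.write.format("delta").mode("overwrite").save(curated_path)
```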
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
india
On-site
DESCRIPTION Are you excited about the digital media revolution and passionate about designing and delivering advanced analytics that directly influence the product decisions of Amazon's digital businesses? Do you see yourself as a champion of innovating on behalf of the customer by turning data insights into action? The Amazon Digital Acceleration (DA) org is looking for an analytical and technically skilled data engineer to join our team. In this role, you will play a critical part in developing foundational analytical datasets spanning orders, subscriptions, discovery, promotions, pricing and royalties. Our mission is to enable digital clients to easily innovate with data on behalf of customers and make product and customer decisions faster. An ideal individual is someone who has deep data engineering skills around ETL, data modeling, database architecture and big data solutions. This individual should have strong business judgement and excellent written and verbal communication skills. Key job responsibilities 1. Develop data products, infrastructure and data pipelines leveraging AWS services (such as Redshift, Kinesis, EMR, Lambda etc.) and internal BDT tools (Datanet, Cradle, QuickSight, etc.). 2. Improve existing solutions/build solutions to improve scale, quality, IMR efficiency, data availability, consistency & compliance. 3. Partner with Software Developers, Business Intelligence Engineers, MLEs, Scientists, and Product Managers to develop scalable and maintainable data pipelines on both structured and unstructured (text based) data. 4. Drive operational excellence strongly within the team and build automation and mechanisms to reduce operations. About the team The MIDAS team operates within Amazon's Digital Analytics (DA) engineering organization, building analytics and data engineering solutions that support cross-digital teams. Our platform delivers a wide range of capabilities, including metadata discovery, data lineage, customer segmentation, compliance automation, AI-driven data access through generative AI and LLMs, and advanced data quality monitoring. Today, more than 100 Amazon business and technology teams rely on MIDAS, with over 20,000 monthly active users leveraging our mission-critical tools to drive data-driven decisions at Amazon scale. BASIC QUALIFICATIONS - 1+ years of data engineering experience - Experience with SQL - Experience with data modeling, warehousing and building ETL pipelines - Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) - Experience with one or more scripting languages (e.g., Python, KornShell) PREFERRED QUALIFICATIONS - Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Microsoft Purview Data Governance Consultant / Sr. Consultant with 5-8 years of experience, you will be responsible for developing and deploying solutions with Microsoft Purview or similar data governance platforms. You must have a deep understanding of data governance principles, including metadata management, data cataloging, lineage tracking, and compliance frameworks. Your proficiency in Microsoft Azure, especially with services like Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, and Azure Blob Storage, will be crucial for this role. It would be nice to have experience in data integration, ETL pipelines, and data modeling, as well as knowledge of security and compliance standards. Your problem-solving skills and ability to collaborate in cross-functional team environments will be essential. Strong communication and documentation skills are required for effective collaboration with technical and non-technical stakeholders. Your responsibilities will include leading the program as a Program Leader using Agile Methodology, providing guidance on Data Governance, Microsoft Purview, and Azure Data Management. You will be responsible for designing and deploying Microsoft Purview solutions for data governance and compliance, aligning implementations with business and regulatory requirements. Overseeing data integration efforts, ensuring smooth lineage and metadata management, and configuring data classification and sensitivity policies to meet compliance standards will also be part of your role. Collaborating with data teams to define and enforce governance practices, ensuring Purview services meet business needs, security, and compliance, leading data discovery efforts, monitoring and troubleshooting Purview services, and documenting best practices and governance workflows will all fall under your responsibilities. You will also be responsible for training users and mentoring Consultants in documentation and training efforts.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
You should have a minimum of 5 years of experience in SQL development with strong query optimization skills. Your responsibilities will include designing, developing, and maintaining SQL queries, stored procedures, and database structures. You must demonstrate expertise in building and managing ETL pipelines using Azure Data Factory (ADF) and integrating data from on-premise and cloud sources using AWS services. A good understanding of data warehousing concepts, data modeling, and transformation logic is required. You should be able to ensure data accuracy, consistency, and performance across multiple systems and possess strong troubleshooting skills to resolve data and performance-related issues. Familiarity with cloud-based data architecture and modern data engineering best practices is preferred. Effective communication and collaboration with cross-functional teams is essential for this role. You will be expected to document technical processes and data workflows clearly. This position is located in Gurugram and is a hybrid full-time job that requires in-person work.
Posted 2 weeks ago
2.0 - 6.0 years
0 Lacs
maharashtra
On-site
As a Data Product Manager at our company based in Mumbai, you will play a crucial role in driving the development of data-driven products that align with our business objectives. Your responsibilities will involve collaborating with data engineers, analysts, and stakeholders to build and scale data-powered solutions. By defining and executing the data product roadmap, you will ensure that our data strategy meets both company goals and customer needs. Additionally, you will work on developing product features that enhance data accessibility, analytics, and AI/ML adoption. It will be your responsibility to integrate data-driven insights into products and decision-making processes while ensuring compliance with data governance, privacy, and security regulations such as GDPR and CCPA. Optimizing data pipelines, analytics platforms, and real-time data processing capabilities will also fall under your purview. Defining key performance indicators (KPIs) and success metrics to measure product performance and impact will be vital, as well as effectively communicating data strategies, insights, and product updates to leadership and stakeholders. To excel in this role, you should have 3-7 years of experience in product management, data analytics, or related fields. A strong understanding of data infrastructure, ETL pipelines, and data modeling is essential. Proficiency in SQL, Python, and data visualization tools like Tableau and Power BI is required. Familiarity with cloud platforms such as AWS, GCP, or Azure for data management is a plus. Your ability to translate business needs into data product features and roadmaps, along with excellent communication skills and experience in working with cross-functional teams, will be key to your success. Preferred qualifications include prior experience in data-driven SaaS, AI/ML products, or business intelligence platforms. Knowledge of Big Data technologies like Snowflake, Apache Spark, and Databricks, as well as experience with A/B testing, experimentation frameworks, and customer analytics, will be advantageous. An understanding of data monetization strategies and data-as-a-service (DaaS) models is also desirable. In return, we offer a competitive salary with performance-based incentives, the opportunity to work on cutting-edge data and AI-driven products, flexible working hours with remote/hybrid options, and learning and development opportunities with mentorship from senior leadership. If you are passionate about driving data product innovation and are eager to make a significant impact, we invite you to join our team as a Data Product Manager in Mumbai.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a candidate for the position, your main responsibility will involve designing and implementing intelligent Agentic AI systems using LLMs, vector databases, and orchestration frameworks. You will also be tasked with building, deploying, and maintaining ML workflows utilizing AWS SageMaker, Lambda, and EC2 with Docker. Another key aspect of the role will be developing and managing ETL pipelines through AWS Glue and integrating them with structured/unstructured data sources. Additionally, you will be expected to implement APIs and full-stack components to support ML agents, including visualization tools using Streamlit, as well as reverse-engineer existing codebases and APIs to integrate AI features into legacy or proprietary systems. To excel in this role, you must have hands-on experience with AWS services like Lambda, EC2, Glue, and SageMaker. Strong proficiency in Python and full-stack development is essential, along with a solid grasp of LLMs and vector search engines. Demonstrated ability to reverse-engineer systems and build integrations is also required, as well as experience with cloud infrastructure, RESTful APIs, CI/CD pipelines, and containerization. Preferred qualifications for this position include a background in Retrieval-Augmented Generation (RAG), multi-agent systems, or knowledge graphs, experience with open-source LLM frameworks like Hugging Face, knowledge of autonomous task planning, symbolic reasoning, or reinforcement learning, and exposure to secure, regulated, or enterprise-scale AI systems. Overall, this role demands a strong blend of technical skills encompassing AWS, Python, and full-stack development, in addition to expertise with LLMs, vector search, and agentic AI systems.
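One of the building blocks behind the LLM and vector-database systems mentioned above is similarity search over embeddings. The sketch below shows that retrieval step in isolation; the hash-based embed function is purely a stand-in so the example runs without an external model, and it is not how a production system would embed text.

```python
# Minimal vector-retrieval sketch: embed documents, then find the closest
# match to a query by cosine similarity. The embed() function is a placeholder
# for a real embedding model (e.g., one hosted behind an endpoint).
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Placeholder embedding: a hash-seeded unit vector, used only so the
    # sketch runs without any external model or service.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)


docs = [
    "Reset a user password in the admin console",
    "Monthly revenue report by region",
    "Steps to retrain the churn model",
]
doc_vectors = np.stack([embed(d) for d in docs])

query = embed("how do I reset my password")
scores = doc_vectors @ query          # cosine similarity, since vectors are unit length
best = int(np.argmax(scores))
print("best match:", docs[best], "score:", round(float(scores[best]), 3))
```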
Posted 2 weeks ago
5.0 - 8.0 years
10 - 18 Lacs
pune
Work from Office
Summary: We are seeking an experienced Data Engineer (5 years of relevant experience) with a strong technical background to design, develop, and support cutting-edge data-driven solutions. The ideal candidate will bring expertise in data engineering, cloud technologies, and enterprise-scale solution development, along with exceptional problem-solving, communication, and leadership skills. Key Responsibilities: Design and implement data engineering pipelines using PySpark, Palantir Foundry and Python. Leverage cloud technologies (AWS) to build scalable and secure solutions. Ensure compliance with architecture best practices and develop enterprise-scale solutions. Develop data-driven applications and ensure robust performance across all platforms. Collaborate with technology and business stakeholders to deliver innovative solutions aligned with organizational goals. Provide operational support for applications, systems, and infrastructure. Qualifications: 5 years of experience in data engineering and ETL ecosystems with PySpark, Palantir Foundry and Python. Hands-on experience with cloud platforms (AWS) and related technologies. Strong understanding of enterprise architecture and scalable solution development. Background in enterprise system solutions with a focus on versatility and reliability. Exceptional analytical, problem-solving, and communication skills, with a collaborative work ethic. Preferred Skills: Experience in the utility domain is a strong plus. Experience with data-driven applications in high-scale environments. In-depth knowledge of industry trends and emerging technologies in the data engineering landscape. Ability to adapt to evolving challenges in dynamic work environments.
Posted 2 weeks ago
3.0 - 5.0 years
5 - 15 Lacs
pune
Work from Office
Role & responsibilities: We are seeking a highly skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have a passion for building and optimizing robust data pipelines, managing real-time data flows, and developing solutions that drive data-driven decision-making. If you thrive in a fast-paced environment and are eager to tackle complex data challenges, we want to hear from you! Role and Responsibilities: Design, develop, and maintain scalable and high-performance ETL/ELT pipelines for batch and real-time data processing from diverse sources. Develop and implement APIs and webhooks for seamless and secure data ingestion and consumption by various internal and external systems. Champion real-time data management strategies, including stream processing, ensuring low latency and high availability of data for critical applications. Utilize advanced Python programming skills (including asynchronous programming and custom library development) to build efficient data transformation, validation, and enrichment logic. Work extensively with cloud platforms (preferably AWS) to architect and manage data infrastructure, including services like AWS Kinesis/Kafka, Lambda, Glue, S3, Redshift/Snowflake, and API Gateway. Implement and manage data warehousing solutions, ensuring optimal performance and accessibility for analytics and reporting. Develop and maintain robust data quality frameworks and monitoring systems to ensure data accuracy, completeness, and consistency across all pipelines. Optimize existing data workflows and database queries for enhanced performance and efficiency, aiming for significant improvements in data processing times and resource utilization. Collaborate with data scientists, analysts, software engineers & business stakeholders to understand data requirements and deliver effective data solutions. Implement data governance and security best practices to ensure data is handled responsibly and in compliance with relevant regulations. Contribute to the design and implementation of data models for both transactional (OLTP) and analytical (OLAP) systems. Explore and integrate new data technologies and tools to enhance our data capabilities. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. 2-5 years of hands-on experience as a Data Engineer or in a similar role. Expert proficiency in Python for data engineering tasks (e.g., Pandas, PySpark, data manipulation libraries) and experience with software development best practices (version control, testing, CI/CD). Proven experience in designing, building, and deploying real-time data pipelines using technologies like Kafka, AWS Kinesis, Apache Flink, or similar. Strong experience in creating, deploying, and managing RESTful APIs and webhooks for data exchange, with a focus on security and scalability. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB, DynamoDB). Hands-on experience with cloud data services (AWS, Azure, or GCP). Specific AWS experience with Glue, Lambda, S3, EC2, RDS, and API Gateway is highly desirable. Solid understanding of data warehousing concepts, ETL/ELT processes, and data modelling techniques. Familiarity with containerization technologies (Docker) and orchestration tools (Kubernetes) is a plus. Excellent problem-solving skills and the ability to work independently as well as in a collaborative team environment. Strong communication skills, with the ability to articulate complex technical concepts to non-technical stakeholders. Preferred Qualifications: Experience with data visualization tools (e.g., Tableau, Power BI, Looker). Knowledge of machine learning concepts and MLOps. Contributions to open-source data engineering projects. Relevant certifications (e.g., AWS Certified Data Analytics Specialty, AWS Certified Solutions Architect).
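For the API/webhook ingestion responsibilities listed above, a minimal sketch could pair a FastAPI endpoint with a Kinesis stream; the stream name, region, and payload shape are assumptions rather than details from the posting.

```python
# Minimal webhook sketch: receive a JSON event over HTTP and forward it to a
# Kinesis stream for downstream processing. Stream name and region are placeholders.
import json

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
kinesis = boto3.client("kinesis", region_name="ap-south-1")  # placeholder region


class Event(BaseModel):
    source: str
    payload: dict


@app.post("/webhooks/events")
def ingest(event: Event):
    # Partition by source so records from one upstream system stay ordered.
    kinesis.put_record(
        StreamName="ingest-events",                       # placeholder stream
        Data=json.dumps({"source": event.source, "payload": event.payload}),
        PartitionKey=event.source,
    )
    return {"status": "accepted"}
```

Run locally with a standard ASGI server (for example, `uvicorn app:app`); in production this kind of endpoint typically sits behind an API gateway with authentication in front of it.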
Posted 2 weeks ago
7.0 - 9.0 years
14 - 15 Lacs
bengaluru
Hybrid
Senior Azure Databricks Developer. Weekend virtual drive on 30 Aug, between 11 am and 3 pm. Salary up to 15 LPA. Exp: 7+ Years. POSITION GENERAL DUTIES AND TASKS: Advanced experience with Databricks, including Databricks SQL, DataFrames, and Spark (PySpark or Scala). Deep understanding of Spark architecture and optimization strategies (e.g., tuning Spark SQL configurations, managing data partitions, handling data skew, and leveraging broadcast joins). Proficiency in building and optimizing large-scale data pipelines for ETL/ELT processes using Databricks. Familiarity with Delta Lake and data lake architectures. Strong programming skills in Python, SQL, or Scala. Experience with version control (e.g., Git), CI/CD pipelines, and automation tools. Understanding of Databricks cluster setup, resource management, and cost optimization. Experience with query optimization, performance monitoring, and troubleshooting complex data workflows. Familiarity with Databricks Photon engine and its enablement for accelerating workloads. If you are interested in the opportunity, please share the following details along with your most updated resume to geeta.negi@compunnel.com: total experience, relevant experience, current CTC, expected CTC, notice period (last working day if you are serving the notice period), current location, and a rating out of 5 for each of your top three skills (mention each skill).
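To make two of the optimization strategies above concrete (broadcast joins and explicit repartitioning to spread out skew), here is a small PySpark sketch; the table and column names are placeholders.

```python
# Sketch of two common Spark optimizations: broadcasting a small dimension
# table to avoid a shuffle, and repartitioning a skewed fact table before an
# expensive aggregation. Table and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("spark-tuning").getOrCreate()

facts = spark.table("raw.transactions")   # large fact table (placeholder)
dims = spark.table("raw.merchants")       # small dimension table (placeholder)

# Broadcast join: ship the small table to every executor instead of shuffling
# both sides of the join.
enriched = facts.join(broadcast(dims), on="merchant_id", how="left")

# Repartition on a higher-cardinality key to spread skewed partitions
# before grouping.
result = (
    enriched.repartition(200, "customer_id")
    .groupBy("customer_id")
    .sum("amount")
)

result.write.mode("overwrite").saveAsTable("curated.customer_totals")
```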
Posted 2 weeks ago
3.0 - 8.0 years
6 - 16 Lacs
chennai, bengaluru
Hybrid
Role & responsibilities: Design, develop, and maintain ETL pipelines to extract, transform, and load data from various sources into Oracle, Snowflake, or cloud data warehouses. Write and optimize PL/SQL scripts, stored procedures, and functions for data processing and performance tuning. Build and configure Talend jobs (or other ETL tools) for batch and real-time data integration. Develop and support data models, schemas, and warehouse structures aligned with business reporting needs. Collaborate with BI teams to enable dashboards and reports in Cognos and Power BI. Implement data quality checks, validations, and reconciliation mechanisms. Work with cloud data platforms (AWS, Azure, or GCP) for scalable ETL and storage solutions. Apply best practices in ETL job scheduling, error handling, logging, and performance optimization. Support end-to-end data lifecycle from ingestion to consumption, ensuring governance and compliance. Document ETL designs, data flows, and technical specifications for knowledge sharing. Preferred candidate profile: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 3-8 years of hands-on ETL development experience. Strong expertise in Oracle SQL, PL/SQL, and database optimization. Hands-on experience with ETL tools such as Talend (preferred), Informatica, or equivalent. Proficiency in Snowflake (warehousing, pipelines, SnowSQL, performance tuning). Good understanding of BI tools: Cognos, Power BI (data modeling, DAX, visualization best practices). Solid experience with cloud services (AWS Redshift/Glue, Azure Data Factory/Synapse, GCP BigQuery/Dataflow). Knowledge of data modeling concepts (star/snowflake schema, normalization, dimensional modeling). Experience with job orchestration & scheduling tools (Control-M, Autosys, Airflow, or similar). Strong understanding of data quality frameworks, metadata management, and error handling. Exposure to CI/CD pipelines, Git, Jenkins, and version control practices for ETL jobs. Knowledge of Python or Shell scripting for automation and custom transformations. Familiarity with real-time streaming (Kafka, Spark Streaming, Kinesis, or Pub/Sub). Basic knowledge of API-based integrations (REST, SOAP).
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
You will be joining Tietoevry Create as a Snowflake Developer in Bengaluru, India. Your primary responsibility will be to design, implement, and maintain data solutions using Snowflake's cloud data platform. Working closely with cross-functional teams, you will deliver high-quality, scalable data solutions that drive business value. With over 7 years of experience, you should excel in designing and developing data warehouse and data integration projects at the SSE / TL level. It is essential to have experience working in an Azure environment and be proficient in developing ETL pipelines using Python and Snowflake SnowSQL. Your expertise in writing SQL queries against Snowflake and understanding database design concepts such as transactional, data mart, and data warehouse designs will be crucial. As a Snowflake data engineer, you will architect and implement large-scale data intelligence solutions around Snowflake Data Warehouse. Your role will involve loading data from diverse data sets, translating complex requirements into detailed designs, and analyzing vast data stores to uncover insights. A strong background in architecting, designing, and operationalizing data & analytics solutions on Snowflake Cloud Data Warehouse is a must. Articulation skills are key, along with a willingness to adapt and learn new skills. Tietoevry values diversity, equity, and inclusion, encouraging applicants from all backgrounds to join the team. The company believes that diversity fosters innovation and creates an inspiring workplace. Openness, trust, and diversity are core values driving Tietoevry's mission to create digital futures that benefit businesses, societies, and humanity.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
You will be responsible for designing, implementing, and continuously expanding data pipelines through extraction, transformation, and loading activities. Your role will involve gathering requirements and business process knowledge to ensure data transformation aligns with the needs of end users. Additionally, you will maintain and enhance existing processes, focusing on scalability and maintainability of the data architecture. Collaborating with the business, you will design and deliver high-quality data solutions. As part of our commitment to diversity, equity, and inclusion (DEI), you will be joining a workplace that values these principles. At Indium, we prioritize DEI through dedicated councils, expert sessions, and tailored training programs. Our initiatives, such as the WE@IN women empowerment program and DEI calendar, cultivate a culture of respect and belonging. Recognized with the Human Capital Award, we strive to create an environment where every individual can thrive. Come be a part of our team and contribute to building an inclusive workplace that fosters diversity and drives innovation.
Posted 2 weeks ago
2.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
We are looking for a highly skilled AI Technical Delivery Lead (Associate Director) with expertise in machine learning, generative AI, and hands-on architecture. Your role will involve leading our AI initiatives, providing technical guidance, fostering innovation, and ensuring successful implementation of advanced AI solutions. You will be responsible for designing, developing, and deploying cutting-edge machine learning and generative AI solutions to address business challenges. As the AI Technical Delivery Lead, you will be required to architect robust and scalable AI systems, ensuring adherence to best practices in design, coding, testing, and maintenance. Collaborating with cross-functional teams, you will be translating requirements into scalable AI systems while upholding best practices in design, coding, and maintenance. It will also be essential to stay updated on AI advancements, drive continuous improvement, and mentor team members. Working closely with data scientists, engineers, platform architects/engineering teams, and business stakeholders, you will be tasked with developing scalable, compliant, and high-performance AI applications. These applications will leverage large language models (LLMs), multimodal AI, and AI-driven automation to cater to various business needs. Key Responsibilities: - Define and implement a generative AI architecture and roadmap aligned with business goals in pharma and life sciences. - Architect scalable GenAI solutions for drug discovery, medical writing automation, clinical trials, regulatory submissions, and real-world evidence generation. - Work with data scientists and ML engineers to develop, fine-tune, and optimize large language models (LLMs) for life sciences applications. - Design GenAI solutions leveraging cloud platforms (AWS, Azure) or on-premise infrastructure while ensuring data security and regulatory compliance. - Implement best practices for GenAI model deployment, monitoring, and lifecycle management within GxP-compliant environments. - Ensure GenAI solutions comply with regulatory standards and adhere to responsible AI principles. - Drive efficiency in generative AI models, ensuring cost optimization and scalability. - Work with cross-functional teams to align GenAI initiatives with enterprise and industry-specific requirements. - Stay updated with the latest advancements in GenAI and incorporate emerging technologies into the company's AI strategy. Essential Requirements: - Bachelor's or Master's degree in Computer Science, AI, Data Science, Bioinformatics, or a related field. - 10+ years of experience in Big data, AI/ML development with at least 2 years in an AI Architect or GenAI Architect role in pharma, biotech, or life sciences. - Proficiency in Generative AI, LLMs, multimodal AI, and deep learning for pharma applications. - Hands-on experience with AI/ML frameworks (TensorFlow, PyTorch, Hugging Face, LangChain, Scikit-learn, etc.). - Knowledge of data engineering, ETL pipelines, big data technologies, cloud AI services, and MLOps & DevOps. Desirable Requirements: - Experience in GenAI applications for medical writing, clinical trials, drug discovery, and regulatory intelligence. - Knowledge of AI explainability, retrieval-augmented generation, knowledge graphs, and synthetic data generation in life sciences. - AI/ML certifications and exposure to biomedical ontologies, semantic AI models, and federated learning. 
Novartis is committed to diversity and inclusion, creating an inclusive work environment that reflects the patients and communities served. If you require accommodation due to a medical condition or disability, please reach out to us to discuss your needs. Join the Novartis Network to stay connected and explore future career opportunities: [Novartis Talent Network](https://talentnetwork.novartis.com/network)
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
You are an experienced and technically proficient SQL Server Warehouse Admin / Tech Lead responsible for managing, optimizing, and troubleshooting large-scale SQL Server-based data warehouse environments. Your role involves building and maintaining SQL Server high availability (HA) clusters, implementing disaster recovery (DR) solutions, leading development teams, and ensuring performance, stability, and scalability across systems. As a Technical Leader & Solution Delivery expert, you will develop, implement, and promote robust technical solutions using SQL Server technologies. You will design and maintain SQL Server HA clusters, lead and mentor team members, and ensure adherence to coding standards and engineering best practices. Your Database Administration & Optimization responsibilities include administering SQL Server-based data warehouses, troubleshooting complex issues, and monitoring SQL Server configuration, backups, and performance tuning to ensure optimal performance, availability, and scalability. In Project & Stakeholder Management, you will collaborate with Project Managers to execute project modules, ensure compliance with SLAs, delivery timelines, and quality standards, and align technology solutions with business needs by engaging with stakeholders. You will be involved in Testing & Quality Assurance by developing unit test cases, conducting technical assessments, ensuring defect-free code, and managing root cause analysis for production incidents. As part of Documentation & Compliance, you will create and maintain documentation for processes, configurations, design, and best practices, sign off on key project documents, and ensure alignment with organizational policies and regulatory compliance requirements. In Team Engagement & Development, you will set and review FAST goals for team members, provide regular feedback, identify upskilling needs, mentor junior developers, and foster a high-performance, collaborative, and innovative team culture. Required Skills & Qualifications: - 8+ years of hands-on experience with SQL Server-based data warehouses/databases. - Expertise in building and maintaining SQL Server HA clusters, disaster recovery design, and implementation. - Proficiency in T-SQL, indexing, query optimization, and SQL Server Agent job management. - Understanding of data warehousing concepts and ETL pipelines. - Experience leading a technical team and managing stakeholder expectations. Desirable Skills & Certifications: - Microsoft Certified: Azure Database Administrator Associate - Microsoft Certified: Data Analyst Associate - Experience working in Azure cloud environments. - Familiarity with NoSQL, data lakes, and big data platforms. - Knowledge of Agile/Scrum/Kanban delivery models. Soft Skills: - Strong leadership and mentoring abilities. - Excellent communication, documentation, and presentation skills. - Proactive management of technical issues and resolution of bottlenecks.
Posted 2 weeks ago
0.0 years
0 Lacs
pune, maharashtra, india
On-site
Critical Skills To Possess: Advanced working knowledge and experience with relational and non-relational databases. Advanced working knowledge and experience with API data providers. Experience building and optimizing Big Data pipelines, architectures, and datasets. Strong analytic skills related to working with structured and unstructured datasets. Hands-on experience in Azure Databricks utilizing Spark to develop ETL pipelines. Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages. Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, Azure Synapse. Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, API Azure, Azure Function, Power BI, Azure Cognitive Services. Azure DevOps experience to deploy the data pipelines through CI/CD. Preferred Qualifications: BS degree in Computer Science or Engineering or equivalent experience.
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
india
On-site
DESCRIPTION We are seeking a highly skilled Data Engineer to join our FinTech ADA team, responsible for building and optimizing scalable data pipelines and platforms that power analytics, automation, and decision-making across Finance and Accounting domains. The ideal candidate will have strong expertise in AWS cloud technologies including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM, along with hands-on experience designing secure, efficient, and resilient data architectures. You will work with large-scale structured and unstructured datasets, leveraging both relational and non-relational data stores (object storage, key-value/document databases, graph, and column-family stores) to deliver reliable, high-performance data solutions. This role requires strong problem-solving skills, attention to detail, and the ability to collaborate with cross-functional teams to translate business needs into technical data solutions. Key job responsibilities Scope - Fintech is seeking a Data Engineer to be part of the Accounting and Data Analytics team. Our team builds and maintains a data platform for sourcing, merging, and transforming financial datasets to extract business insights, improve controllership, and support financial month-end close periods. As a contributor to a crucial project, you will focus on building scalable data pipelines, optimizing existing pipelines, and operational excellence. Qualifications: 5+ years of experience as a Data Engineer or in a similar role. Experience with data modeling, data warehousing, and building ETL pipelines. Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field. Extensive experience working with AWS with a strong understanding of Redshift, EMR, Athena, Aurora, DynamoDB, Kinesis, Lambda, S3, EC2, etc. Experience with coding languages like Python/Java/Scala. Experience in maintaining data warehouse systems and working on large-scale data transformation using EMR, Hadoop, Hive, or other Big Data technologies. Experience mentoring and managing other Data Engineers, ensuring data engineering best practices are being followed. Experience with hardware provisioning, forecasting hardware usage, and managing to a budget. Exposure to large databases, BI applications, data quality, and performance tuning. BASIC QUALIFICATIONS - 3+ years of data engineering experience - Experience with data modeling, warehousing and building ETL pipelines - Experience with SQL PREFERRED QUALIFICATIONS - 5+ years of data engineering experience - Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases) Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 2 weeks ago
3.0 - 5.0 years
0 Lacs
india
On-site
DESCRIPTION Are you interested in diving deep into the data and engineering the metrics model that drives continuous improvement for Amazon's eCommerce systems? Do you have solid problem-solving abilities and metrics-driven decision making, and want to solve problems with solutions that will meet the growing worldwide need for Amazon's products? Then eCommerce Services is the team for you. We are part of building the world's best, most reliable, and most feature-rich eCommerce platform that provides customers the best experience possible and are looking for a top-notch Senior Business Intelligence Engineer. In eCommerce Services (eCS), we build systems that span the full range of eCommerce functionality, from Privacy, Identity, Purchase Experience and Ordering to Shipping, Tax and Financial integration. eCommerce Services manages several aspects of the customer life cycle, starting from account creation and sign in, to placing items in the shopping cart, proceeding through checkout, order processing, managing order history and post-fulfillment actions such as refunds and tax invoices. eCS services determine sales tax and shipping charges, and we ensure the privacy of our customers. Our mission is to provide a commerce foundation that accelerates business innovation and delivers a secure, available, performant, and reliable shopping experience to Amazon's customers. We are looking for an experienced and technically skilled individual to join our Data Engineering and Analytics team as a Business Intelligence Engineer (BIE). The ideal candidate will be an analytical, results-oriented, self-motivated, and customer-focused Business Intelligence Engineer who will play a key role in continuous improvement for the critical systems that enable our entire retail business. The role will also include surfacing key product metrics as well as building critical data pipelines and analytics tools to improve efficiency and quality of strategic business decisions. Key job responsibilities . Defining, developing and maintaining critical business and operational metrics reviewed on a weekly, monthly, quarterly, and annual basis. . Analysis of historical data to identify trends and support decision making, including written and verbal presentation of results and recommendations. . Collaborating with software development teams to implement analytics systems and data structures to support large-scale data analysis and delivery of reports. . Identifying data needs and driving data quality improvement projects. . Thought leadership on data mining and analysis. Understanding the broad range of Amazon's data resources, which to use, how, and when. Mining and manipulating data from database tables, simulation results, and log files. BASIC QUALIFICATIONS - 3+ years of analyzing and interpreting data with Redshift, Oracle, NoSQL etc. experience - Experience with data visualization using Tableau, Quicksight, or similar tools - Experience with data modeling, warehousing and building ETL pipelines - Experience in Statistical Analysis packages such as R, SAS and Matlab - Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling - Bachelor's degree in BI, finance, engineering, statistics, computer science, mathematics, finance or equivalent quantitative field PREFERRED QUALIFICATIONS - Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift - Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 2 weeks ago
6.0 - 10.0 years
10 - 20 Lacs
hyderabad, chennai, bengaluru
Hybrid
Perficient is the global AI-first consultancy. Our team of strategists, designers, technologists, and engineers partners with the world's most innovative enterprises and admired brands to deliver real business results through the transformative power of AI. As part of our AI-First strategy, we empower every employee to build AI fluency and actively engage with AI tools to drive innovation and efficiency. We break boundaries, obsess over outcomes, and shape the future for our clients. Join a company where bold ideas and brilliant minds converge to redefine what's possible, while building a career filled with growth, balance, and purpose. AWS Glue Data Engineer. Location: Chennai | Bangalore | Hyderabad. Experience: 5–10 Years. Employment Type: Full-Time. Are you passionate about building scalable, efficient data pipelines on AWS? We're looking for an experienced AWS Glue Data Engineer to join our growing data team! In this role, you'll work with modern data architecture, build complex ETL pipelines, and collaborate with cross-functional teams to deliver reliable data solutions at scale. What You'll Do: Design and develop robust ETL pipelines using AWS Glue (PySpark, Glue Studio) Work with various data sources (RDBMS, S3, APIs) to ingest, transform, and load data Optimize Glue jobs for performance and cost-efficiency Collaborate with Data Scientists, Analysts, and other Engineers Implement data governance and security best practices Use AWS services like S3, Redshift, Athena, Lambda, and Step Functions effectively What We're Looking For: 5–10 years of overall data engineering experience Minimum 3+ years hands-on with AWS Glue and PySpark Proficiency in SQL, Python, and AWS data tools Strong understanding of data lake architecture Good communication and problem-solving skills Bonus Points For: Experience with AWS Lake Formation Familiarity with CI/CD, Terraform, or CloudFormation AWS Certifications (Data Analytics, Solutions Architect)
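A bare-bones shape of the Glue PySpark jobs described in this role is sketched below; the catalog database, table, output bucket, and partition column are placeholders rather than details from the posting.

```python
# Skeleton of an AWS Glue PySpark job: read a table from the Glue Data Catalog,
# apply a simple transformation, and write Parquet to S3.
# Database, table, and bucket names are illustrative placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the source table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"               # placeholders
)

# Drop an unused field and convert to a Spark DataFrame for filtering.
df = orders.drop_fields(["internal_notes"]).toDF().filter("amount > 0")

# Write curated output as Parquet, partitioned by order date.
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://curated-bucket/orders/"                         # placeholder bucket
)

job.commit()
```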
Posted 2 weeks ago