12.0 - 15.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Skill: Java, Spark, Kafka
Experience: 10 to 16 years
Location: Hyderabad

As a Data Engineer, you will:
- Support the design and rollout of the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Identify data sources, design and implement data schemas/models, and integrate data to meet the requirements of business stakeholders
- Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization
- Work with business, IT, and data stakeholders to support data-related technical issues and data infrastructure needs, and to build a flexible, scalable data platform
- With a strong focus on DataOps, design, develop, and deploy scalable batch and/or real-time data pipelines (a hedged sketch follows below)
- Design, document, test, and deploy ETL/ELT processes
- Find the right trade-offs between the performance, reliability, scalability, and cost of the data pipelines you implement
- Monitor data processing efficiency and propose solutions for improvement
- Create and maintain comprehensive project documentation
- Build and share knowledge with colleagues and coach junior profiles
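For readers unfamiliar with the batch/real-time pipeline work this posting describes, a minimal PySpark Structured Streaming sketch is shown below: it reads events from a Kafka topic and lands them in a data lake. The broker address, topic name, schema, and paths are hypothetical placeholders, not details from the posting, and the job assumes the Spark-Kafka connector package is available on the cluster.

```python
# Minimal sketch: ingest a Kafka topic with Spark Structured Streaming and land it as parquet.
# Requires the spark-sql-kafka connector package on the cluster; all names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
       .option("subscribe", "events")                       # hypothetical topic
       .load())

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
parsed = (raw
          .select(from_json(col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/lake/events")               # hypothetical sink path
         .option("checkpointLocation", "/data/checkpoints/events")
         .outputMode("append")
         .start())

query.awaitTermination()
```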
Posted 1 month ago
2.0 - 3.0 years
6 - 7 Lacs
Pune
Work from Office
Data Engineer

Job Description:
Jash Data Sciences: Letting Data Speak! Do you love solving real-world data problems with the latest and best techniques? And having fun while solving them in a team? Then come and join our high-energy team of passionate data people. Jash Data Sciences is the right place for you. We are a cutting-edge Data Sciences and Data Engineering startup based in Pune, India. We believe in continuous learning and evolving together. And we let the data speak!

What will you be doing?
- Discover trends in data sets and develop algorithms to transform raw data for further analytics
- Create data pipelines to bring in data from various sources and formats, transform it, and load it into the target database
- Implement ETL/ELT processes in the cloud using tools like Airflow, Glue, Stitch, Cloud Data Fusion, and Dataflow (a sketch follows below)
- Design and implement Data Lakes, Data Warehouses, and Data Marts in AWS, GCP, or Azure using Redshift, BigQuery, PostgreSQL, etc.
- Create efficient SQL queries and understand query execution plans for tuning queries on engines like PostgreSQL
- Performance-tune OLAP/OLTP databases by creating indices, tables, and views
- Write Python scripts for orchestration of data pipelines
- Hold thoughtful discussions with customers to understand their data engineering requirements
- Break complex requirements into smaller tasks for execution

What do we need from you?
- Strong Python coding skills with basic knowledge of algorithms/data structures and their application
- Strong understanding of Data Engineering concepts including ETL, ELT, Data Lakes, Data Warehousing, and Data Pipelines
- Experience designing and implementing Data Lakes, Data Warehouses, and Data Marts that support terabytes of data
- A track record of implementing data pipelines on public cloud environments (AWS/GCP/Azure) is highly desirable
- A clear understanding of database concepts like indexing, query performance optimization, views, and various types of schemas
- Hands-on SQL programming experience with knowledge of windowing functions, subqueries, and various types of joins
- Experience working with Big Data technologies like PySpark/Hadoop
- A good team player with the ability to communicate with clarity
- Show us your git repo/blog!

Qualification
- 1-2 years of experience working on Data Engineering projects for Data Engineer I
- 2-5 years of experience working on Data Engineering projects for Data Engineer II
- 1-5 years of hands-on Python programming experience
- Bachelor's/Master's degree in Computer Science is good to have
- Courses or certifications in the area of Data Engineering will be given a higher preference
- Candidates who have demonstrated a drive for learning and keeping up to date with technology through courses/self-learning will be given high preference
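As illustration of the Airflow-based ETL/ELT work mentioned above, here is a minimal sketch of an extract-transform-load DAG, assuming Apache Airflow 2.4+. The DAG id, schedule, source data, and load logic are hypothetical placeholders rather than anything specified in the posting.

```python
# Minimal sketch of an extract -> transform -> load DAG, assuming Apache Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from a source system (placeholder data).
    return [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]


def transform(ti, **context):
    # Read the upstream result from XCom and apply a trivial transformation.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**r, "amount_doubled": r["amount"] * 2} for r in rows]


def load(ti, **context):
    rows = ti.xcom_pull(task_ids="transform")
    # Placeholder load step; a real DAG would write to the target warehouse here.
    print(f"loading {len(rows)} rows into the target database")


with DAG(
    dag_id="daily_sales_elt",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract) \
        >> PythonOperator(task_id="transform", python_callable=transform) \
        >> PythonOperator(task_id="load", python_callable=load)
```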
Posted 1 month ago
1.0 - 4.0 years
7 - 17 Lacs
Bengaluru
Hybrid
Job Title: Data & GenAI AWS Specialist
Experience: 1-4 Years
Location: Bangalore
Mandatory Qualification: B.E./B.Tech/M.Tech/MS from IIT or IISc ONLY

Job Overview:
We are seeking a seasoned Data & GenAI Specialist with deep expertise in AWS Managed Services (PaaS) to join our innovative team. The ideal candidate will have extensive experience in designing sophisticated, scalable architectures for data pipelines and Generative AI (GenAI) solutions leveraging cloud services. Key to this role is the ability to articulate architecture solutions clearly and effectively to customers, helping them conceptualize and implement advanced GenAI-driven applications tailored precisely to their business requirements.

Responsibilities:
- Engage closely with customers to thoroughly understand their business challenges, translate their requirements into comprehensive architecture solutions, and effectively communicate intricate technical details.
- Architect and oversee the design, development, and deployment of scalable, resilient data processing pipelines utilizing AWS/Azure/GCP/Snowflake/open-source services.
- Lead the architectural design and implementation of robust GenAI systems, leveraging AWS foundational models and frameworks including Amazon Bedrock, AWS Inferentia, Amazon SageMaker, and Amazon Kendra.
- Collaborate with internal and customer teams to align architectural strategies with business objectives, ensuring adherence to AWS best practices.
- Optimize and refine data architectures to effectively handle large-scale GenAI workloads, prioritizing performance, scalability, and robust security.
- Document and promote architectural best practices in data engineering, pipeline architecture, and GenAI development within AWS environments.
- Stay abreast of emerging architectural trends, innovative technologies, and advancements in the AWS and GenAI ecosystems to ensure solutions remain cutting-edge and efficient.
- Have a keen sense of maximising and extending the AWS investments clients have already made, rather than a rip-and-replace mentality.

Qualifications:
- B.Tech/MS/M.Tech degree in Computer Science, Data Science, AI, or related technical fields
- Knowledge of architecting and building data pipelines and GenAI solutions specifically on AWS
- Expert-level proficiency in AWS architectural patterns and services, including AWS Glue, Lambda, EMR, S3, and SageMaker

Why Join Us:
- Be at the forefront of cutting-edge GenAI and AWS cloud architectural innovation, working on the AI Lab
- Thrive in a collaborative, dynamic, and supportive team environment
- Continuous learning and growth opportunities
Posted 1 month ago
5.0 - 10.0 years
14 - 24 Lacs
Bengaluru
Remote
Detailed job description - Skill Set:
- Strong knowledge of Databricks, including creating scalable ETL (Extract, Transform, Load) processes and data lakes
- Strong knowledge of Python and SQL
- Strong experience with the AWS cloud platform is a must
- Good understanding of data modeling principles and data warehousing concepts
- Strong knowledge of optimizing ETL and batch processing jobs to ensure high performance and efficiency
- Implementing data quality checks, monitoring data pipelines, and ensuring data consistency and security
- Hands-on experience with Databricks features like Unity Catalog

Mandatory Skills: Databricks, AWS
Posted 1 month ago
9.0 - 12.0 years
25 - 35 Lacs
Hyderabad
Hybrid
Experience: 12 years only
Notice period: Immediate / 15 days
Location: Hyderabad
Client: Tech Star Group
Please highlight the mandatory skills in your resume.

Client feedback: In short, the client is primarily looking for a candidate with strong expertise in data-related skills, including:
- SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake.
- ETL/ELT Tools: Experience with SnapLogic, StreamSets, or dbt for building and maintaining data pipelines; extensive experience with ETL tools and data pipelines.
- Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning.
- Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE).
- Data Warehousing: Experience managing large datasets and data marts, and optimizing databases for performance.
- Agile & CI/CD: Knowledge of Agile methodologies and CI/CD automation tools.

Important: The candidate should have a strong data engineering background with hands-on experience in handling large volumes of data, data pipelines, and cloud-based data systems.
Posted 1 month ago
7.0 - 12.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Position Summary:
We are seeking a highly skilled ETL QA Engineer with at least 6 years of experience in ETL/data pipeline testing on the AWS cloud stack, specifically with Redshift, AWS Glue, S3, and related data integration tools. The ideal candidate should be proficient in SQL, capable of reviewing and validating stored procedures, and able to automate ETL test cases using Python or suitable automation frameworks. Strong communication skills are essential, and web application testing exposure is a plus.

Technical Skills Required:
- SQL Expertise: Ability to write, debug, and optimize complex SQL queries. Validate data across source systems, staging areas, and reporting layers. Experience with stored procedure review and validation.
- ETL Testing Experience: Hands-on experience with AWS Glue, Redshift, S3, and data pipelines. Validate transformations, data flow accuracy, and pipeline integrity.
- ETL Automation: Ability to automate ETL tests using Python, PyTest, or other scripting frameworks. Nice to have exposure to TestNG, Selenium, or similar automation tools for testing UIs or APIs related to data validation.
- Cloud Technologies: Deep understanding of the AWS ecosystem, especially around ETL and data services. Familiarity with orchestration (e.g., Step Functions, Lambda), security, and logging.
- Health Check Automation: Build SQL- and Python-based health check scripts to monitor pipeline sanity and data integrity (a hedged sketch of this idea follows below).
- Reporting Tools (nice to have): Exposure to tools like Jaspersoft, Tableau, Power BI, etc. for report layout and aggregation validation.
- Root Cause Analysis: Strong debugging skills to trace data discrepancies and report logical/data errors to development teams.
- Communication: Must be able to communicate clearly with both technical and non-technical stakeholders.

Key Responsibilities:
- Design and execute test plans and test cases for validating ETL pipelines and data transformations.
- Ensure accuracy and integrity of data in transactional databases, staging zones, and data warehouses (Redshift).
- Review stored procedures and SQL scripts to validate transformation logic.
- Automate ETL test scenarios using Python or other test automation tools as applicable.
- Implement health check mechanisms for automated validation of daily pipeline jobs.
- Investigate data issues and perform root cause analysis.
- Validate reports and dashboards, ensuring correct filters, aggregations, and visualizations.
- Collaborate with developers, analysts, and business teams to understand requirements and ensure complete test coverage.
- Report testing progress and results clearly and in a timely manner.

Nice to Have:
- Web testing experience using Selenium or Appium.
- Experience in API testing and validation of data exposed via APIs.
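A hedged sketch of the health-check automation idea mentioned above: a small Python script that runs a handful of sanity queries against the warehouse and exits non-zero if any check fails, assuming a Redshift or PostgreSQL endpoint reachable via psycopg2. Connection details, table names, and thresholds are hypothetical.

```python
# Hypothetical SQL/Python health-check script for daily pipeline sanity.
# Redshift speaks the PostgreSQL wire protocol, so psycopg2 is assumed here.
import sys
import psycopg2

CHECKS = {
    # check name -> (SQL returning a single number, expectation on that number)
    "orders_loaded_today": (
        "SELECT COUNT(*) FROM staging.orders WHERE load_date = CURRENT_DATE",
        lambda n: n > 0,
    ),
    "no_null_order_ids": (
        "SELECT COUNT(*) FROM staging.orders WHERE order_id IS NULL",
        lambda n: n == 0,
    ),
    "no_duplicate_order_ids": (
        "SELECT COUNT(*) - COUNT(DISTINCT order_id) FROM staging.orders",
        lambda n: n == 0,
    ),
}


def run_checks(dsn: str) -> int:
    """Run every check, print a PASS/FAIL line, and return the number of failures."""
    failures = 0
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for name, (sql, ok) in CHECKS.items():
            cur.execute(sql)
            value = cur.fetchone()[0]
            passed = ok(value)
            failures += 0 if passed else 1
            print(f"[{'PASS' if passed else 'FAIL'}] {name}: {value}")
    return failures


if __name__ == "__main__":
    # Hypothetical connection string; a real job would read it from a secret store.
    sys.exit(1 if run_checks("dbname=analytics host=warehouse.example.com user=qa") else 0)
```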
Posted 1 month ago
8.0 - 10.0 years
12 - 16 Lacs
Noida, Pune, Chennai
Work from Office
Job Title: Lead Data Scientist

We are seeking a highly skilled and experienced Lead Data Scientist to join our dynamic team. In this role, you will be responsible for leading data-driven projects, mentoring junior data scientists, and guiding the organization in making strategic decisions based on data insights.

Key Responsibilities:
- Develop and implement advanced statistical models and algorithms to analyze complex data sets.
- Collaborate with cross-functional teams to identify business opportunities and translate them into data-driven solutions.
- Mentor and oversee a team of data scientists, providing guidance on best practices and techniques in data analysis and modeling.
- Communicate findings and insights to stakeholders through presentations, reports, and visualizations.
- Stay current with industry trends and emerging technologies in data science and analytics.
- Design and implement experiments to validate models and hypotheses.
- Ensure the quality and integrity of data throughout the analytic process.

Qualifications:
- Master's degree in Computer Science, Statistics, Mathematics, or a related field.
- Proven experience as a Data Scientist, with a strong portfolio of successful projects.
- Expertise in programming languages such as Python or R, as well as experience with machine learning frameworks.
- Strong knowledge of statistical analysis and modeling techniques.
- Excellent problem-solving skills and the ability to work with large and complex data sets.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Experience in leading and managing teams is a plus.

We offer a competitive salary, comprehensive benefits, and the opportunity to work in a collaborative and innovative environment. If you are passionate about data science and eager to make a significant impact, we would love to hear from you.

Roles and Responsibilities:
1. Leading the design and implementation of advanced machine learning models and algorithms to address complex business problems.
2. Collaborating with cross-functional teams to define data strategies and identify opportunities for data-driven decision-making.
3. Overseeing data collection, cleaning, and preprocessing to ensure high-quality datasets for analysis and modeling.
4. Mentoring and guiding junior data scientists and analysts, fostering a culture of continuous learning and innovation within the team.
5. Communicating findings and insights to stakeholders through visualizations, presentations, and reports, ensuring clarity and understanding of complex analyses.
6. Staying current with industry trends, tools, and technologies in data science, and applying best practices to enhance team capabilities and project outcomes.
7. Developing and maintaining scalable data pipelines and architectures to support large-scale data processing.
8. Evaluating the performance of statistical models and machine learning algorithms to ensure accuracy and effectiveness in real-world applications.
9. Collaborating with IT and engineering teams to deploy models into production environments and monitor their performance.
10. Contributing to the strategic direction of the data science team and advising leadership on data-driven opportunities and potential risks.
Posted 1 month ago
4.0 - 6.0 years
15 - 25 Lacs
Noida
Work from Office
We are looking for a highly experienced Senior Data Engineer with deep expertise in Snowflake to lead efforts in optimizing the performance of our data warehouse and enable faster, more reliable reporting. You will be responsible for improving query efficiency, data pipeline performance, and overall reporting speed by tuning Snowflake environments, optimizing data models, and collaborating with application development teams.

Roles and Responsibilities:
- Analyze and optimize Snowflake data warehouse performance to support high-volume, complex reporting workloads.
- Identify bottlenecks in SQL queries, ETL/ELT pipelines, and data models that impact report generation times.
- Implement performance tuning strategies including clustering keys, materialized views, result caching, micro-partitioning, and query optimization (see the hedged sketch below).
- Collaborate with BI teams and business analysts to understand reporting requirements and translate them into performant data solutions.
- Design and maintain efficient data models (star schema, snowflake schema) tailored for fast analytical querying.
- Develop and enhance ETL/ELT processes ensuring minimal latency and high throughput using Snowflake’s native features.
- Monitor system performance and proactively recommend architectural improvements and capacity planning.
- Establish best practices for data ingestion, transformation, and storage aimed at improving report delivery times.
- Experience with Unistore will be an added advantage.
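The tuning levers named above (clustering keys, materialized views, clustering-depth checks) can be illustrated with a minimal sketch, assuming the snowflake-connector-python package and a hypothetical reporting table analytics.fct_sales; none of the object names or credentials come from the posting.

```python
# Minimal Snowflake tuning sketch: clustering key, materialized view, clustering-depth check.
# All account details and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="tuning_svc", password="***",   # placeholders
    warehouse="REPORTING_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Cluster the large fact table on the columns most report filters use.
cur.execute("ALTER TABLE analytics.fct_sales CLUSTER BY (sale_date, region_id)")

# Pre-aggregate a hot query path so dashboards hit a materialized view instead of the raw fact.
cur.execute("""
    CREATE OR REPLACE MATERIALIZED VIEW analytics.mv_daily_sales AS
    SELECT sale_date, region_id, SUM(amount) AS total_amount
    FROM analytics.fct_sales
    GROUP BY sale_date, region_id
""")

# Inspect clustering depth to confirm the clustering key is actually helping.
cur.execute(
    "SELECT SYSTEM$CLUSTERING_INFORMATION('analytics.fct_sales', '(sale_date, region_id)')"
)
print(cur.fetchone()[0])

cur.close()
conn.close()
```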
Posted 1 month ago
4.0 - 7.0 years
10 - 20 Lacs
Pune
Work from Office
Experience in designing, developing, implementing, and optimizing data solutions on Microsoft Azure. Proven expertise in leveraging Azure services for ETL processes, data warehousing and analytics, ensuring optimal performance and scalability.
Posted 1 month ago
8.0 - 13.0 years
15 - 25 Lacs
Pune
Hybrid
About This Role:
We are looking for a talented and experienced Data Engineer / Tech Lead with hands-on expertise in an ETL tool and full knowledge of CI/CD practices, who has technically led a team of more than 5, is comfortable in client-facing roles, and can create Data Engineering and Data Quality frameworks. As a tech lead, you must be able to build ETL jobs, Data Quality jobs, and Big Data jobs, perform performance optimization based on the requirements, create reusable assets, and carry out production deployments; experience with DWH appliances such as Snowflake, Redshift, or Synapse is preferred.

Responsibilities
- Work with a team of engineers in designing, developing, and maintaining scalable and efficient data solutions using any data integration tool (e.g., Talend or Informatica) and Big Data technologies.
- Design, develop, and maintain end-to-end data pipelines using an ETL data integration tool (e.g., Talend or Informatica) to ingest, process, and transform large volumes of data from heterogeneous sources.
- Have good experience in designing cloud pipelines using Azure Data Factory or AWS Glue/Lambda.
- Implement data integration end to end with any ETL technology.
- Implement database solutions for storing, processing, and querying large volumes of structured, semi-structured, and unstructured data.
- Implement job migrations of ETL jobs from older versions to new versions.
- Implement and write advanced SQL scripts at a medium to expert level.
- Work with the client's technical team and provide guidance during technical challenges.
- Integrate and optimize data flows between various databases, data warehouses, and Big Data platforms.
- Collaborate with cross-functional teams to gather data requirements and translate them into scalable and efficient data solutions.
- Optimize ETL and data-load performance, scalability, and cost-effectiveness through optimization techniques.
- Interact with the client on a daily basis, report technical progress, and respond to technical questions.
- Implement best practices for data integration.
- Implement complex ETL data pipelines or similar frameworks to process and analyze massive datasets.
- Ensure data quality, reliability, and security across all stages of the data pipeline.
- Troubleshoot and debug data-related issues in production systems and provide timely resolution.
- Stay current with emerging technologies and industry trends in data engineering and CI/CD, and incorporate them into our data architecture and processes.
- Optimize data processing workflows and infrastructure for performance, scalability, and cost-effectiveness.
- Provide technical guidance and foster a culture of continuous learning and improvement.
- Implement and automate CI/CD pipelines for data engineering workflows, including testing, deployment, and monitoring.
- Perform migration to production deployment from lower environments, and test and validate.

Must-Have Skills
- Must be certified in an ETL tool, database, and cloud (Snowflake certification preferred).
- Must have implemented at least 3 end-to-end projects in Data Engineering.
- Must have worked on performance management, optimization, and tuning for data loads, data processes, and data transformations in Big Data.
- Must be flexible to write code using Java/Scala/Python etc. as required.
- Must have implemented CI/CD pipelines using tools like Jenkins, GitLab CI, or AWS CodePipeline.
- Must have technically managed a team of at least 5 members and guided the team technically.
- Must have the technical ownership capability of Data Engineering delivery.
- Strong client-facing communication skills.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in software engineering or a related role, with a strong focus on ETL tools, databases, and integration.
- Proficiency in ETL tools like Talend or Informatica for data integration and for building and orchestrating data pipelines.
- Hands-on experience with relational databases such as MySQL, PostgreSQL, or Oracle, and NoSQL databases such as MongoDB, Cassandra, or Redis.
- Solid understanding of database design principles, data modeling, and SQL query optimization.
- Experience with data warehousing, Data Lake and Delta Lake concepts and technologies, data modeling, and relational databases.
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Noida, Chennai
Hybrid
Responsibilities:
- Deployment, configuration, and maintenance of Databricks clusters and workspaces
- Security and access control
- Automate administrative tasks using tools like Python, PowerShell, and Terraform
- Integrations with Azure Data Lake and Key Vault; implement CI/CD pipelines

Required Candidate Profile:
- Azure, AWS, or GCP; Azure experience is preferred
- Strong skills in Python, PySpark, PowerShell, and SQL
- Experience with Terraform, ETL processes, data pipelines, and big data technologies
- Security and compliance
Posted 1 month ago
8.0 - 10.0 years
25 - 30 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Work from Office
Location: Mumbai

Job Description:
- Looking for candidates with 8 to 10 years of experience
- Hands-on experience implementing data pipelines using traditional DWH, Big Data, and cloud ecosystems
- Good exposure to data architecture design, cost and size estimation
- Good understanding of handling real-time/streaming pipelines
- Experience with Data Quality and Data Governance
- Experience handling and interacting with clients and managing vendors
- Knowledge of AI/ML and GenAI is a plus
- Exposure to managing and leading teams
Posted 1 month ago
7.0 - 10.0 years
11 - 20 Lacs
Hyderabad, Bengaluru
Work from Office
Looking for a Business Analyst with strong SQL, ETL/data warehouse testing, insurance domain knowledge (GuideWire), and experience in defect tracking & issue management tools.
Posted 1 month ago
3.0 - 6.0 years
7 - 11 Lacs
Pune
Work from Office
Syensqo is all about chemistry. We're not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies: in you, in your future colleagues and in all your differences, and of course, in your ideas to improve lives while preserving our planet's beauty for the generations to come.

We are looking for:
A motivated and experienced Data & Integration Senior Data Engineer who thrives on turning complex data challenges into innovative solutions. You will lead service ownership for key data technologies, guide the data engineering team, and collaborate across functions to deliver seamless end-to-end data integration, storage, and analytics solutions.

We count on you to:
- Develop and maintain service strategies aligned with business objectives and design cost-effective, scalable data services
- Oversee service delivery to meet quality standards and monitor performance for continuous improvement
- Collaborate with cross-functional teams and stakeholders to understand needs, deliver solutions, and ensure alignment
- Provide technical leadership by mentoring data engineers in designing and maintaining scalable data pipelines, storage, and data exposure
- Identify and implement opportunities for service enhancement, innovation, and improved data quality and security
- Manage project timelines, resources, and deliverables for the evolution of data solutions and their successful completion
- Support agile delivery practices in collaboration with product lines, architects, business stakeholders, and external vendors

You will bring:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 5+ years of experience as a Data Engineer or in a similar role focusing on data integration, storage, and service ownership
- Strong technical skills in data pipeline design, cloud data platforms (Azure, GCP), Python, SQL, Talend, and CI/CD pipelines
- Proven expertise in managing services and mentoring data engineers
- Experience with JIRA, Confluence, and project management practices
- Excellent communication and stakeholder management skills
- Ability to work effectively in multicultural and cross-functional teams

You can count on us to:
- Include you in a once-in-a-lifetime transformation journey shaping the future of advanced materials
- Provide a dynamic, collaborative environment where your impact is visible and valued
- Offer autonomy, trust, and growth opportunities in a purpose-driven organization
- Foster a culture of inclusion, curiosity, and innovation

You will benefit from:
- A competitive salary and comprehensive social benefits
- Access to the on-site company restaurant
- 16 weeks or more of maternity/paternity/co-parenting leave, aligned with local regulations
- A training platform accessible to all employees
- Free access to language courses (24 languages available)

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring, and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
Posted 1 month ago
2.0 - 5.0 years
8 - 12 Lacs
Mumbai, Gurugram, Bengaluru
Work from Office
BIZMETRIC INDIA PRIVATE LIMITED is looking for a DS / ML / Time-Series Engineer to join our dynamic team and embark on a rewarding career journey.

- Data Exploration and Preparation: Explore and analyze large datasets to understand patterns and trends. Prepare and clean datasets for analysis and model development.
- Feature Engineering: Engineer features from raw data to enhance the performance of machine learning models. Collaborate with data scientists to identify relevant features for model training.
- Model Development: Design and implement machine learning models to solve business problems. Work on both traditional statistical models and modern machine learning algorithms.
- Scalable Data Pipelines: Develop scalable and efficient data pipelines for processing and transforming data. Utilize technologies like Apache Spark for large-scale data processing.
- Model Deployment: Deploy machine learning models into production environments. Collaborate with DevOps teams to integrate models into existing systems.
- Performance Optimization: Optimize the performance of data pipelines and machine learning models. Fine-tune models for accuracy, efficiency, and scalability.
- Collaboration: Collaborate with cross-functional teams, including data scientists, software engineers, and business stakeholders. Communicate technical concepts and findings to non-technical audiences.
- Continuous Learning: Stay current with advancements in data science and engineering. Implement new technologies and methodologies to improve data engineering processes.

- Learning & Certification Opportunities: Enhance your professional growth.
- Comprehensive Medical Coverage and Life Insurance: For your well-being.
- Flexible Work Environment: Enjoy a 5-day work week.
- Collaborative Culture: Be part of a fun, innovative workplace.

Job Description: Python, Data Science, Azure Databricks, Machine Learning
Posted 1 month ago
6.0 - 10.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.

This is an AWS Data/API Gateway Pipeline Engineer role, responsible for designing, building, and maintaining real-time, serverless data pipelines and API services. It requires extensive hands-on experience with Java, Python, Redis, DynamoDB Streams, and PostgreSQL, along with working knowledge of AWS Lambda and AWS Glue for data processing and orchestration. The position involves collaboration with architects, backend developers, and DevOps engineers to deliver scalable, event-driven data solutions and secure API services across cloud-native systems.

Key Responsibilities

API & Backend Engineering
- Build and deploy RESTful APIs using AWS API Gateway, Lambda, Java, and Python.
- Integrate backend APIs with Redis for low-latency caching and pub/sub messaging.
- Use PostgreSQL for structured data storage and transactional processing.
- Secure APIs using IAM, OAuth2, and JWT, and implement throttling and versioning strategies.

Data Pipeline & Streaming
- Design and develop event-driven data pipelines using DynamoDB Streams to trigger downstream processing (a hedged sketch follows below).
- Use AWS Glue to orchestrate ETL jobs for batch and semi-structured data workflows.
- Build and maintain Lambda functions to process real-time events and orchestrate data flows.
- Ensure data consistency and resilience across services, queues, and databases.

Cloud Infrastructure & DevOps
- Deploy and manage cloud infrastructure using CloudFormation, Terraform, or AWS CDK.
- Monitor system health and service metrics using CloudWatch, SNS, and structured logging.
- Contribute to CI/CD pipeline development for testing and deploying Lambda/API services.

So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
- Bachelor's degree in computer science, engineering, or a related field.
- Over 6 years of experience in developing backend or data pipeline services using Java and Python.
- Strong hands-on experience with:
  - AWS API Gateway, Lambda, DynamoDB Streams
  - Redis (caching, messaging)
  - PostgreSQL (schema design, tuning, SQL)
  - AWS Glue for ETL jobs and data transformation
- Solid understanding of REST API design principles, serverless computing, and real-time architecture.

Preferred Skills and Experience
- Familiarity with Kafka, Kinesis, or other message streaming systems
- Swagger/OpenAPI for API documentation
- Docker and Kubernetes (EKS)
- Git, CI/CD tools (e.g., GitHub Actions)
- Experience with asynchronous event processing, retries, and dead-letter queues (DLQs)
- Exposure to data lake architectures (S3, Glue Data Catalog, Athena)

Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
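To illustrate the DynamoDB Streams to Lambda pattern referenced in the responsibilities above, here is a hypothetical sketch of a Python Lambda handler that reads stream records and caches the latest item state in Redis. The attribute names, Redis endpoint, and downstream behaviour are assumptions, not part of the role description; the redis client library is assumed to be packaged with the function.

```python
# Hypothetical AWS Lambda handler triggered by a DynamoDB Stream.
# Caches the latest item state in Redis; all names below are illustrative placeholders.
import json
import os

import redis  # redis-py, assumed to be bundled in the deployment package or a layer

cache = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)


def handler(event, context):
    processed = 0
    for record in event.get("Records", []):
        # Only act on new or changed items; skip deletes in this sketch.
        if record.get("eventName") not in ("INSERT", "MODIFY"):
            continue

        new_image = record["dynamodb"].get("NewImage", {})
        # DynamoDB Streams images are typed attribute maps, e.g. {"order_id": {"S": "abc-123"}}.
        order_id = new_image.get("order_id", {}).get("S")
        if not order_id:
            continue

        # Cache the raw image so API consumers can read the latest state with low latency.
        cache.set(f"order:{order_id}", json.dumps(new_image))
        processed += 1

    return {"processed": processed}
```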
Posted 1 month ago
6.0 - 11.0 years
20 - 35 Lacs
Chennai
Work from Office
Technical Lead, AI & Data Warehouse (DWH)

Pando is a global leader in supply chain technology, building the world's quickest time-to-value Fulfillment Cloud platform. Pando's Fulfillment Cloud provides manufacturers, retailers, and 3PLs with a single pane of glass to streamline end-to-end purchase order fulfillment and customer order fulfillment to improve service levels, reduce carbon footprint, and bring down costs. As a partner of choice for Fortune 500 enterprises globally, with a presence across APAC, the Middle East, and the US, Pando is recognized as a Technology Pioneer by the World Economic Forum (WEF), and as one of the fastest growing technology companies by Deloitte.

Role
As the Senior Lead for AI and Data Warehouse at Pando, you will be responsible for building and scaling the data and AI services team. You will drive the design and implementation of highly scalable, modular, and reusable data pipelines, leveraging big data technologies and low-code implementations. This is a senior leadership position where you will work closely with cross-functional teams to deliver solutions that power advanced analytics, dashboards, and AI-based insights.

Key Responsibilities
- Lead the development of scalable, high-performance data pipelines using PySpark or other Big Data ETL pipeline technologies.
- Drive data modeling efforts for analytics, dashboards, and knowledge graphs.
- Oversee the implementation of parquet-based data lakes.
- Work on OLAP databases, ensuring optimal data structures for reporting and querying.
- Architect and optimize large-scale enterprise big data implementations with a focus on modular and reusable low-code libraries.
- Collaborate with stakeholders to design and deliver AI and DWH solutions that align with business needs.
- Mentor and lead a team of engineers, building out the data and AI services organization.

Required
- 8-10 years of experience in big data and AI technologies, with expertise in PySpark or similar Big Data ETL pipeline technologies.
- Strong proficiency in SQL and OLAP database technologies.
- Firsthand experience with data modeling for analytics, dashboards, and knowledge graphs.
- Proven experience with parquet-based data lake implementations.
- Expertise in building highly scalable, high-volume data pipelines.
- Experience with modular, reusable, low-code-based implementations.
- Involvement in large-scale enterprise big data implementations.
- Initiative-taker with strong motivation and the ability to lead a growing team.

Preferred
- Experience leading a team or building out a new department.
- Experience with cloud-based data platforms and AI services.
- Familiarity with supply chain technology or fulfilment platforms is a plus.
Posted 1 month ago
4.0 - 7.0 years
11 - 17 Lacs
Pune, Bengaluru
Work from Office
We are hiring for a leading insurance consulting organisation for a Data Strategy & Governance role.
Experience: 4 to 6 yrs
Location: Bangalore / Pune

Responsibilities:
- Develop and drive Data Capability Maturity Assessments, Data & Analytics Operating Model, and Data Governance exercises for clients
- Manage Critical Data Elements (CDEs) and coordinate with Finance Data Stewards
- Oversee data quality standards and governance implementation
- Establish processes for effective data management, ensuring Data Quality & Governance standards as well as roles for Data Stewards
Posted 1 month ago
3.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Data Engineering & Pipeline Development
- Design, implement, and maintain ETL processes using ADF and ADB
- Create and manage views in ADB and SQL for efficient data access
- Optimize SQL queries for large datasets and high performance
- Conduct end-to-end testing and impact analysis on data pipelines

Optimization & Performance Tuning
- Identify and resolve bottlenecks in data processing
- Optimize SQL queries and Delta Tables for fast data processing

Data Sharing & Integration
- Implement Delta Share, SQL Endpoints, and other data sharing methods
- Use Delta Tables for efficient data sharing and processing (see the sketch below)

API Integration & Development
- Integrate external systems through Databricks Notebooks and build scalable solutions
- Experience in building APIs (good to have)

Collaboration & Documentation
- Collaborate with teams to understand requirements and design solutions
- Provide documentation for data processes and architectures
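As a rough illustration of the Delta Table and view work described above, below is a minimal Databricks-notebook-style PySpark sketch: land data in a Delta table, expose a curated view, and compact/Z-ORDER it for query performance. Source paths, table and view names, and the Z-ORDER columns are hypothetical, and OPTIMIZE/ZORDER assumes a Databricks runtime.

```python
# Minimal Databricks-style sketch: Delta table + curated view + OPTIMIZE/ZORDER.
# All paths, schemas, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

# Inside a Databricks notebook the session already exists; getOrCreate() just returns it.
spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE SCHEMA IF NOT EXISTS bronze")
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")

# Hypothetical raw source landed by ADF.
orders = spark.read.format("json").load("/mnt/raw/orders")

# Persist as a managed Delta table.
(orders.write
       .format("delta")
       .mode("overwrite")
       .saveAsTable("bronze.orders"))

# Curated view for downstream consumers (BI tools, SQL endpoints).
spark.sql("""
    CREATE OR REPLACE VIEW silver.orders_current AS
    SELECT order_id, customer_id, amount, order_date
    FROM bronze.orders
    WHERE status = 'ACTIVE'
""")

# Compact small files and co-locate data on the most-filtered columns
# (OPTIMIZE / ZORDER are Delta commands available on Databricks).
spark.sql("OPTIMIZE bronze.orders ZORDER BY (order_date, customer_id)")
```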
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda (a hedged sketch follows below)
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
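A minimal AWS Glue (PySpark) job sketch of the Glue-based ETL step referenced above: read a table from the Glue Data Catalog, apply a transformation, and write partitioned parquet to S3 for Athena. The database, table, and bucket names are hypothetical placeholders, not details from the posting.

```python
# Minimal AWS Glue (PySpark) job sketch; catalog and S3 names are hypothetical.
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql.functions import col

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source table registered in the Glue Data Catalog (placeholder names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Transform with Spark, then write partitioned parquet that Athena can query.
df = dyf.toDF().filter(col("amount") > 0)
(df.write
   .mode("overwrite")
   .partitionBy("order_date")
   .parquet("s3://example-data-lake/curated/orders/"))

job.commit()
```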
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram, Bengaluru
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
Posted 1 month ago
4.0 - 8.0 years
15 - 22 Lacs
Hyderabad
Work from Office
Senior Data Engineer - Cloud & Modern Data Architectures

Role Overview:
We are looking for a Senior Data Engineer with expertise in ETL/ELT, Data Engineering, Data Warehousing, Data Lakes, Data Mesh, and Data Fabric architectures. The ideal candidate should have hands-on experience in at least one or two cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks) and a strong foundation in building PoCs, mentoring freshers, and contributing to accelerators and IPs.

Must-Have:
- 5-8 years of experience in Data Engineering & Cloud Data Services
- Hands-on with AWS (Redshift, Glue), GCP (BigQuery, Dataflow), Azure (Synapse, Data Factory), Snowflake, Databricks
- Strong SQL, Python, or Scala skills
- Knowledge of Data Mesh & Data Fabric principles

Nice-to-Have:
- Exposure to MLOps, AI integrations, and Terraform/Kubernetes for DataOps
- Contributions to open source, accelerators, or internal data frameworks

Interested candidates, please share your CV with dikshith.nalapatla@motivitylabs.com along with the details below for a quick response:
- Total Experience:
- Relevant DE Experience:
- SQL Experience:
- SQL Rating out of 5:
- Python Experience:
- Do you have experience in any 2 clouds (yes/no):
- Mention the cloud experience you have (AWS, Azure, GCP):
- Current Role / Skillset:
- Current CTC:
- Fixed:
- Payroll Company (Name):
- Client Company (Name):
- Expected CTC:
- Official Notice Period (if negotiable, mention up to how many days):
- Serving Notice (Yes / No):
- CTC of offer in hand:
- Last Working Day (in current organization):
- Location of the offer in hand:

************* 5 DAYS WORK FROM OFFICE ****************
Posted 1 month ago
9.0 - 10.0 years
12 - 14 Lacs
Hyderabad
Work from Office
Responsibilities:
- Design, develop, and maintain data pipelines using Airflow / Data Flow / Data Lake
- Optimize the performance and scalability of ETL processes with SQL and Python
Posted 1 month ago
7.0 - 12.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Overview
We are seeking a strategic and hands-on Manager of Business Intelligence (BI) and Data Governance to lead the development and execution of our enterprise-wide data strategy. This role will oversee data governance frameworks, manage modern BI platforms, and ensure the integrity, availability, and usability of business-critical data. Reporting into senior leadership, this role plays a pivotal part in shaping data-informed decision-making across functions including Finance, Revenue Operations, Product, and more. The ideal candidate is a technically proficient and people-oriented leader with a deep understanding of data governance, cloud data architecture, and SaaS KPIs. They will drive stakeholder engagement, enablement, and adoption of data tools and insights, with a focus on building scalable, trusted, and observable data systems.

Responsibilities
- Data Governance Leadership: Establish and maintain a comprehensive data governance framework that includes data quality standards, ownership models, data stewardship processes, and compliance alignment with regulations such as GDPR and SOC 2.
- Enterprise Data Architecture: Oversee data orchestration across Salesforce (SFDC), cloud-based data warehouses (e.g., Databricks, Snowflake, or equivalent), and internal systems. Collaborate with the data engineering team on the development and optimization of ETL pipelines to ensure data reliability and performance at scale.
- Team Management & Enablement: Lead and mentor a team of BI analysts and governance specialists. Foster a culture of collaboration, continuous learning, and stakeholder enablement to increase data adoption across the organization.
- BI Strategy & Tools Management: Own the BI toolset (with a strong emphasis on Tableau), and define standards for scalable dashboard design, self-service reporting, and analytics enablement. Evaluate and incorporate additional platforms (e.g., Power BI, Looker) as needed.
- Stakeholder Engagement & Strategic Alignment: Partner with leaders in Finance, RevOps, Product, and other departments to align reporting and data strategy with business objectives. Translate business needs into scalable reporting solutions and drive enterprise-wide adoption through clear communication and training.
- Data Quality & Observability: Implement data quality monitoring, lineage tracking, and observability tools to proactively detect issues and ensure data reliability and trustworthiness.
- Documentation & Transparency: Create and maintain robust documentation for data processes, pipeline architecture, code repositories (via GitHub), and business definitions to support transparency and auditability for technical and non-technical users.
- Executive-Level Reporting & Insight: Design and maintain strategic dashboards that surface key SaaS performance indicators to senior leadership and the board. Deliver actionable insights to support company-wide strategic decisions.
- Continuous Improvement & Innovation: Stay current with trends in data governance, BI technologies, and AI. Proactively recommend and implement enhancements to tools, processes, and governance maturity.

Qualifications
- Data Governance Expertise: Proven experience implementing data governance frameworks, compliance standards, and ownership models across cross-functional teams.
- SQL Expertise: Advanced SQL skills with a strong background in ETL/data pipeline development across systems like Salesforce and enterprise data warehouses.
- BI Tools Mastery: Expertise in Tableau for developing reports and dashboards. Experience driving adoption of BI best practices across a diverse user base.
- Salesforce Data Proficiency: Deep understanding of SFDC data structure, reporting, and integration with downstream systems.
- Version Control & Documentation: Hands-on experience with GitHub and best practices in code versioning and documentation of data pipelines.
- Leadership & Stakeholder Communication: 3+ years of people management experience with a track record of team development and stakeholder engagement.
- Analytics Experience: 8+ years of experience in analytics roles, working with large datasets to derive insights and support executive-level decision-making.
- Programming Knowledge: Proficiency in Python for automation, data manipulation, and integration tasks.
- SaaS Environment Acumen: Deep understanding of SaaS metrics, business models, and executive reporting needs.
- Cross-functional Collaboration: Demonstrated success in partnering with teams like Finance, Product, and RevOps to meet enterprise reporting and insight goals.
Posted 1 month ago
2.0 - 3.0 years
1 - 4 Lacs
Pune
Hybrid
Must-have skills: SQL, Power BI, Tableau, AWS, Azure, Data Modeling, XSLT
Good-to-have skills: XML, Data Warehousing, Data Pipelines

Role: Associate Data Analyst
Location: Kharadi, Pune

We are seeking a diligent Associate Data Analyst who will play a crucial role in supporting data-driven decisions across the organization. You will be responsible for collecting, analyzing, and interpreting data to identify trends, generate insights, and help improve business outcomes. Working alongside senior analysts and systems managers, you will help develop reports, dashboards, and visualizations to communicate key findings to collaborators. This individual has a solid grasp of data analysis techniques, attention to detail, and a drive to convert raw data into actionable intelligence. This role offers significant opportunities for learning and professional development in analytics, reporting, and data management.

What will you be doing?
- Collect, clean, and validate data from internal and external sources, ensuring accuracy, consistency, and completeness.
- Design, develop, and maintain dashboards and reports using tools such as Excel, Power BI, or Tableau.
- Write, optimize, and maintain SQL queries to extract, transform, and manipulate data for analysis.
- Perform exploratory data analysis to identify patterns, trends, and anomalies; document data workflows; and uphold data integrity across systems.
- Collaborate with business partners to understand data requirements and deliver actionable insights that support decision-making.

What will you need to be successful?
- Education: Bachelor's or Master's degree in Computer Science, Software Engineering, Information Technology, or a related field, or equivalent experience.
- Licenses/Certifications: Certifications in SQL or data visualization/reporting tools are a plus.
- Experience: 2-3 years of hands-on experience in data analysis or a related field.
- Strong proficiency in SQL for querying and data manipulation.
- Experience with data visualization tools such as Power BI or Tableau.
- Understanding of data warehousing concepts and familiarity with cloud platforms like AWS or Azure.
- Knowledge of data modeling and data pipeline concepts is required.
- Knowledge of working with XSLT and XML.
- Competencies: Strong analytical and problem-solving abilities; superb communication and presentation skills; meticulous attention to detail and a passion for working with data; a proactive, self-motivated attitude with a dedication to continuous learning and growth.
Posted 1 month ago