
266 Athena Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 6.0 years

55 - 60 Lacs

Pune

Work from Office


At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets. Grade specific: this role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 22 hours ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Bengaluru

Work from Office


As a senior SAP Consultant, you will serve as a client-facing practitioner working collaboratively with clients to deliver high-quality solutions and be a trusted business advisor with deep understanding of SAP Accelerate delivery methodology or equivalent and associated work products. You will work on projects that assist clients in integrating strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries. Your primary responsibilities include: Strategic SAP Solution Focus: working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs. Comprehensive Solution Delivery: involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies. Required education: Bachelor's degree. Preferred education: Master's degree. Required technical and professional expertise: total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills; minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on AWS; exposure to streaming solutions and message brokers like Kafka; experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB; good to excellent SQL skills. Preferred technical and professional experience: certification in AWS, and Databricks or Cloudera Spark certified developers.
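For context on the Spark-on-AWS skills this role lists, here is a minimal PySpark batch sketch; the bucket paths and column names are hypothetical placeholders, not part of the posting:

```python
# Illustrative only: a small PySpark batch job reading raw CSV from S3
# and writing partitioned Parquet. All paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-batch").getOrCreate()

orders = (
    spark.read.option("header", "true")
    .csv("s3://example-raw-bucket/orders/")           # hypothetical source
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# De-duplicate on the business key (keeps one row per order_id).
deduped = orders.dropDuplicates(["order_id"])

(deduped.write.mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders/"))     # hypothetical target
```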

Posted 23 hours ago

Apply


10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office


Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.

Key Responsibilities:
Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena. Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and streaming. Optimize data pipelines for performance, scalability, and cost-efficiency. Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation). Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform. Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation. Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions. Monitor, troubleshoot, and resolve issues in production pipelines. Stay abreast of AWS advancements and recommend improvements where applicable.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
Bachelor's or master's degree in computer science, engineering, or a related field. Over 8 years of experience in data engineering. More than 3 years of experience with the AWS data ecosystem. Strong experience with PySpark, SQL, and Python. Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions. Familiarity with data modelling concepts, dimensional models, and data lake architectures. Experience with CI/CD, GitHub Actions, CloudFormation/Terraform. Understanding of data governance, privacy, and security best practices. Strong problem-solving and communication skills.

Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics. AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus. Strong problem-solving and analytical thinking. Excellent communication and collaboration abilities. Ability to work independently and in agile teams. A proactive approach to identifying and addressing challenges in data workflows.

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
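To illustrate the Glue-based pipeline work described under Key Responsibilities above, a minimal AWS Glue (PySpark) job skeleton follows; the catalog database, table, and S3 path are hypothetical placeholders:

```python
# A sketch of a Glue ETL job: read from the Data Catalog, clean, write Parquet.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales", table_name="raw_orders")

# Basic cleanup in Spark, then back to a DynamicFrame for the Glue writer.
clean = DynamicFrame.fromDF(
    orders.toDF().dropna(subset=["order_id"]), glue_context, "clean")

glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```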

Posted 1 day ago

Apply

2.0 - 7.0 years

13 - 18 Lacs

Pune

Work from Office


The Customer, Sales & Service Practice | Cloud. Job Title: Amazon Connect + Level 11 (Analyst) + Entity (S&C GN). Management Level: Level 11 - Analyst. Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai. Must have skills: AWS contact center, Amazon Connect flows, AWS Lambda and Lex bots, Amazon Connect Contact Center. Good to have skills: AWS Lambda and Lex bots, Amazon Connect. Experience: Minimum 2 year(s) of experience is required. Educational Qualification: Engineering degree or MBA from a Tier 1 or Tier 2 institute. Join our team of Customer Sales & Service consultants who solve customer-facing challenges at clients spanning sales, service and marketing to accelerate business change. Practice: Customer Sales & Service Sales I. Areas of Work: Cloud AWS Cloud Contact Center Transformation, Analysis and Implementation | Level: Analyst | Location: Delhi, Mumbai, Bangalore, Gurgaon, Pune, Hyderabad and Chennai | Years of Exp: 2-5 years. Explore an Exciting Career at Accenture: Are you passionate about scaling businesses using in-depth frameworks and techniques to solve customer-facing challenges? Do you want to design, build and implement strategies to enhance business performance? Does working in an inclusive and collaborative environment spark your interest? Then this is the right place for you! Welcome to a host of exciting global opportunities within Accenture Strategy & Consulting's Customer, Sales & Service practice. The Practice - A Brief Sketch: The Customer Sales & Service Consulting practice is aligned to the Capability Network Practice of Accenture and works with clients across their marketing, sales and services functions. As part of the team, you will work on transformation services driven by key offerings like Living Marketing, Connected Commerce and Next-Generation Customer Care. These services help our clients become living businesses by optimizing their marketing, sales and customer service strategy, thereby driving cost reduction, revenue enhancement, customer satisfaction and impacting front-end business metrics in a positive manner. You will work closely with our clients as consulting professionals who design, build and implement initiatives that can help enhance business performance. As part of these, you will drive the following: Work on creating business cases for journey to cloud, cloud strategy, and cloud contact center vendor assessment activities. Work on creating a cloud transformation approach for contact center transformations. Work along with Solution Architects on architecting cloud contact center technology with the AWS platform. Work on enabling cloud contact center technology platforms for global clients, specifically on Amazon Connect. Work on innovative assets, proofs of concept, and sales demos for the AWS cloud contact center. Support AWS offering leads in responding to RFIs and RFPs. Bring your best skills forward to excel at the role: Good understanding of the contact center technology landscape. An understanding of the AWS Cloud platform and services with solution architect skills. Deep expertise in AWS contact-center-relevant services. Sound experience in developing Amazon Connect flows, AWS Lambda and Lex bots. Deep functional and technical understanding of APIs and related integration experience. Functional and technical understanding of building API-based integrations with Salesforce, ServiceNow and bot platforms. Ability to understand customer challenges and requirements, and to address them in a differentiated manner.
Ability to help the team implement and sell cloud contact center solutions and deliver them to clients. Excellent communication skills. Ability to develop requirements based on leadership input. Ability to work effectively in a remote, virtual, global environment. Ability to take on new challenges and be a passionate learner. Read about us: Blogs. What's in it for you? An opportunity to work on transformative projects with key G2000 clients. Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies. Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional. Personalized training modules to develop your strategy and consulting acumen to grow your skills, industry knowledge and capabilities. Opportunity to thrive in a culture that is committed to accelerating equality for all. Engage in boundaryless collaboration across the entire organization. About Accenture: Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions, underpinned by the world's largest delivery network, Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With 569,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com. About Accenture Strategy & Consulting: Accenture Strategy shapes our clients' future, combining deep business insight with an understanding of how technology will impact industry and business models. Our focus on issues such as digital disruption, redefining competitiveness, operating and business models, as well as the workforce of the future, helps our clients find future value and growth in a digital world. Today, digital is changing the way organizations engage with their employees, business partners, customers and communities. This is our unique differentiator. To bring this global perspective to our clients, Accenture Strategy's services include those provided by our Capability Network, a distributed management consulting organization that provides management consulting and strategy expertise across the client lifecycle. Our Capability Network teams complement our in-country teams to deliver cutting-edge expertise and measurable value to clients all around the world. For more information visit https://www.accenture.com/us-en/Careers/capability-network. Accenture Capability Network | Accenture in One Word: come and be a part of our team. Qualification: Your experience counts! Bachelor's degree in a related field or equivalent experience; Post-Graduation in Business Management would be an added value. Minimum 2 years of experience in delivering software-as-a-service or platform-as-a-service projects related to cloud contact center service providers such as the Amazon Connect contact center cloud solution. Hands-on experience working on the design, development and deployment of contact center solutions at scale.
Hands-on development experience with cognitive services such as Amazon Connect, Amazon Lex, Lambda, Kinesis, Athena, Pinpoint, Comprehend, Transcribe. Working knowledge of one of the programming/scripting languages such as Node.js, Python, Java.
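As a rough illustration of the Amazon Connect + Lambda integration this role centers on, a minimal contact-flow Lambda sketch follows; the attribute names and lookup logic are hypothetical, and the response is kept as a flat map of string key/value pairs as Connect expects:

```python
def lambda_handler(event, context):
    # Amazon Connect invokes Lambda with contact data under event["Details"].
    contact = event.get("Details", {}).get("ContactData", {})
    caller = contact.get("CustomerEndpoint", {}).get("Address", "unknown")

    # Hypothetical lookup: a real flow might query DynamoDB or a CRM here.
    is_known_customer = caller.endswith("0000")  # placeholder logic

    # Returned key/value pairs become contact attributes usable in later
    # flow blocks (e.g., for routing or Lex bot prompts).
    return {
        "callerNumber": caller,
        "knownCustomer": "true" if is_known_customer else "false",
    }
```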

Posted 4 days ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office


Project Role: Software Development Engineer. Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients; perform maintenance, enhancements and/or development work. Must have skills: Cloud Data Architecture. Good to have skills: NA. Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years of full-time education. Summary: As a Solutions Architect - Lead, you will analyze, design, code, and test multiple components of application code. You will perform maintenance, enhancements, and/or development work, contributing to the overall success of the projects. Roles & Responsibilities: Design and develop the overall architecture of our digital data platform using AWS services. Create and maintain cloud infrastructure designs and architectural diagrams. Collaborate with stakeholders to understand business requirements and translate them into scalable AWS-based solutions. Evaluate and recommend AWS technologies, services, and tools for the platform. Ensure the scalability, performance, security, and cost-effectiveness of the AWS-based platform. Lead and mentor the technical team in implementing architectural decisions and AWS best practices. Develop and maintain architectural documentation and standards for AWS implementations. Stay current with emerging AWS technologies, services, and industry trends. Optimize existing AWS infrastructure for performance and cost. Implement and manage disaster recovery and business continuity plans. Professional & Technical Skills: Minimum 8 years of experience in IT architecture, with at least 5 years in a solutions architect role. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM). Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena). Experience in Infrastructure as Code (e.g., CloudFormation, Terraform). Exposure to Continuous Integration/Continuous Deployment (CI/CD) pipelines. Experience in containerization technologies (e.g., Docker, Kubernetes). Proficiency in multiple programming languages and frameworks. AWS Certified Solutions Architect - Professional certification required. Additional Information: The candidate should have a minimum of 5 years of experience in a solutions architect role. This position is based at our Hyderabad office. 15 years of full-time education is required (Bachelor of Engineering in Electronics/Computer Science, or any related stream).
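As a small illustration of the Infrastructure-as-Code work mentioned above, here is a hedged boto3/CloudFormation sketch; the stack name and template contents are hypothetical:

```python
import json
import boto3

cfn = boto3.client("cloudformation")

# Hypothetical minimal template: a single versioned S3 bucket.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DataLakeBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"VersioningConfiguration": {"Status": "Enabled"}},
        }
    },
}

cfn.create_stack(
    StackName="example-data-platform",  # hypothetical stack name
    TemplateBody=json.dumps(template),
)
# Block until CloudFormation reports the stack as created.
cfn.get_waiter("stack_create_complete").wait(StackName="example-data-platform")
```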

Posted 4 days ago

Apply

4.0 - 8.0 years

11 - 16 Lacs

Hyderabad

Work from Office


Job Summary: We are looking for a highly skilled AWS Data Architect to design and implement scalable, secure, and high-performing data architecture solutions on AWS. The ideal candidate will have hands-on experience in building data lakes, data warehouses, and data pipelines, along with a solid understanding of data governance and cloud security best practices. Roles and Responsibilities: Design and implement data architecture solutions on AWS using services such as S3, Redshift, Glue, Lake Formation, Athena, and Lambda. Develop scalable ETL/ELT workflows and data pipelines using AWS Glue, Apache Spark, or AWS Data Pipeline. Define and implement data governance, security, and compliance strategies, including IAM policies, encryption, and data cataloging. Create and manage data lakes and data warehouses that are scalable, cost-effective, and secure. Collaborate with data engineers, analysts, and business stakeholders to develop robust data models and reporting solutions. Evaluate and recommend tools, technologies, and best practices to optimize data architecture and ensure high-quality solutions. Ensure data quality, performance tuning, and optimization for large-scale data storage and processing. Required Skills and Qualifications: Proven experience in AWS data services such as S3, Redshift, Glue, etc. Strong knowledge of data modeling, data warehousing, and big data architecture. Hands-on experience with ETL/ELT tools and data pipeline frameworks. Good understanding of data security and compliance in cloud environments. Excellent problem-solving skills and ability to work collaboratively with cross-functional teams. Strong verbal and written communication skills. Preferred Skills: AWS Certified Data Analytics – Specialty or AWS Solutions Architect Certification. Experience in performance tuning and optimizing large datasets.
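To make the Athena side of this architecture concrete, a minimal boto3 sketch follows; the database, query, and results bucket are hypothetical placeholders:

```python
import time
import boto3

athena = boto3.client("athena")

# Start a query against a hypothetical catalog database.
qid = athena.start_query_execution(
    QueryString="SELECT order_date, count(*) AS orders "
                "FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "sales"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query finishes (production code would add backoff/timeouts).
while True:
    state = athena.get_query_execution(
        QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
```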

Posted 4 days ago

Apply

2.0 - 7.0 years

15 - 20 Lacs

Hyderabad

Work from Office


Job Area: Engineering Group, Engineering Group > Software Engineering. General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc. Preferred Qualifications: 3+ years of experience as a Data Engineer or in a similar role. Experience with data modeling, data warehousing, and building ETL pipelines. Solid working experience with Python, AWS analytical technologies and related resources (Glue, Athena, QuickSight, SageMaker, etc.). Experience with Big Data tools, platforms and architecture, with solid working experience with SQL. Experience working in a very large data warehousing environment and distributed systems. Solid understanding of various data exchange formats and complexities. Industry experience in software development, data engineering, business intelligence, data science, or a related field with a track record of manipulating, processing, and extracting value from large datasets. Strong data visualization skills. Basic understanding of Machine Learning; prior experience in ML Engineering a plus. Ability to manage on-premises data and make it interoperate with AWS-based pipelines. Ability to interface with Wireless Systems/SW engineers and understand the Wireless ML domain; prior experience in the Wireless (5G) domain a plus. Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred Qualifications: Master's in CS/ECE with a Data Science / ML specialization. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field; OR PhD in Engineering, Information Systems, Computer Science, or related field. 3+ years of experience with a programming language such as C, C++, Java, Python, etc. Develops, creates, and modifies general computer applications software or specialized utility programs. Analyzes user needs and develops software solutions. Designs software or customizes software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Modifies existing software to correct errors, allow it to adapt to new hardware, or to improve its performance.
Analyzes user needs and software requirements to determine feasibility of design within time and cost constraints. Confers with systems analysts, engineers, programmers and others to design system and to obtain information on project limitations and capabilities, performance requirements and interfaces. Stores, retrieves, and manipulates data for analysis of system capabilities and requirements. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure outcome and consequences of design. Principal Duties and Responsibilities: Completes assigned coding tasks to specifications on time without significant errors or bugs. Adapts to changes and setbacks in order to manage pressure and meet deadlines. Collaborates with others inside project team to accomplish project objectives. Communicates with project lead to provide status and information about impending obstacles. Quickly resolves complex software issues and bugs. Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions. Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently. Participates in technical conversations with tech leads/managers. Anticipates and communicates issues with project team to maintain open communication. Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks. Prioritizes project deadlines and deliverables with minimal supervision. Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues). Writes readable code for large features or significant bug fixes to support collaboration with other engineers. Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner. Unit tests own code to verify the stability and functionality of a feature.

Posted 5 days ago

Apply

4.0 - 5.0 years

6 - 10 Lacs

Chennai

Work from Office


We are currently seeking a Data Visualization Expert - QuickSight to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). What awaits you / Job Profile: Location: Bangalore and Chennai, hybrid mode; immediate to 10 days notice period. Develop reports using Amazon QuickSight. Data Visualization Development: Design and develop data visualizations using Amazon QuickSight to present complex data in a clear and understandable format. Create interactive dashboards and reports that allow end-users to explore data and draw meaningful conclusions. Data Analysis: Collaborate with data analysts and business stakeholders to understand data requirements, gather insights, and transform raw data into actionable visualizations. Dashboard User Interface (UI) and User Experience (UX): Ensure that the data visualizations are user-friendly, intuitive, and aesthetically pleasing. Optimize the user experience by incorporating best practices in UI/UX design. Data Integration: Work closely with data engineers and data architects to ensure seamless integration of data sources into QuickSight, enabling real-time and up-to-date visualizations. Performance Optimization: Identify and address performance bottlenecks in data queries and visualization rendering to ensure quick and responsive dashboards. Data Security and Governance: Ensure compliance with data security policies and governance guidelines when handling sensitive data within QuickSight. Training and Documentation: Provide training and support to end-users and stakeholders on how to interact with and interpret visualizations effectively. Create detailed documentation of the visualization development process. Stay Updated with Industry Trends: Keep up to date with the latest data visualization trends, technologies, and best practices to continuously enhance the quality and impact of visualizations. Use the Agile methodology (Scrum/Kanban), attending daily standups and using Agile tools. Collaborate with cross-functional teams and stakeholders to ensure data security, privacy, and compliance with regulations. Proficiency in software development best practices: secure coding standards, unit testing frameworks, code coverage, quality gates. Ability to lead and deliver change in a very productive way. Lead technical discussions with customers to find the best possible solutions. Work closely with the Project Manager and Solution Architect, managing client communication (as and when required). What should you bring along? Must have: Relevant work experience in analytics, reporting and business intelligence tools. 4-5 years of hands-on experience in data visualization. At least 2 years of experience developing visualizations using Amazon QuickSight. Experience working with various data sources and databases. Ability to work with large datasets and design efficient data models for visualization. Nice to have: AI project implementation and AI methods. Must-have technical skills: QuickSight, SQL, AWS. Good-to-have technical skills: Tableau, data engineering.
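For context, one routine QuickSight automation task is refreshing a SPICE dataset so dashboards pick up the latest data; a minimal boto3 sketch follows, with the account and dataset IDs as hypothetical placeholders:

```python
import uuid
import boto3

qs = boto3.client("quicksight")
account_id = "123456789012"   # hypothetical AWS account ID
dataset_id = "sales-dataset"  # hypothetical QuickSight dataset ID

# Kick off a SPICE ingestion (refresh) for the dataset.
resp = qs.create_ingestion(
    AwsAccountId=account_id,
    DataSetId=dataset_id,
    IngestionId=str(uuid.uuid4()),
)
print(resp["IngestionStatus"])  # e.g. INITIALIZED
```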

Posted 6 days ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Chennai

Work from Office


We are currently seeking a Data Visualization Expert - QuickSight to join our team in Chennai, Tamil Nadu (IN-TN), India (IN). What awaits you / Job Profile: Design and develop data visualizations using Amazon QuickSight to present complex data in clear and understandable dashboards. Create interactive dashboards and reports that allow end-users to explore data and draw meaningful conclusions. Work on data preparation and ensure good-quality data is used in visualization. Collaborate with data analysts and business stakeholders to understand data requirements, gather insights, and transform raw data into actionable visualizations. Ensure that the data visualizations are user-friendly, intuitive, and aesthetically pleasing. Optimize the user experience by incorporating best practices. Identify and address performance bottlenecks in data queries and visualization. Ensure compliance with data security policies and governance guidelines when handling sensitive data within QuickSight. Provide training and support to end-users and stakeholders on how to interact with dashboards. Self-manage and explore the latest technical developments, incorporating them in the project. Experience in analytics, reporting and business intelligence tools. Use the Agile methodology, attending daily standups and using Agile tools. Lead technical discussions with customers to find the best possible solutions. What should you bring along? Must have: Overall experience of 2-5 years in data visualization development. Minimum of 2 years in QuickSight and 1-2 years in other BI tools such as Tableau, Power BI, Qlik. Good at writing complex SQL scripts and dataset modeling. Hands-on with AWS: Athena, RDS, S3, IAM, permissions, logging and monitoring services. Experience working with various data sources and databases such as Oracle, MySQL, S3, Athena. Ability to work with large datasets and design efficient data models for visualization. Prior experience working in an Agile, Scrum/Kanban model. Nice to have: Knowledge of data ingestion and data pipelines in AWS. Knowledge of Amazon Q or AWS LLM services to enable AI integration. Must-have skills: QuickSight, Tableau, SQL, AWS. Good-to-have skills: QlikView, data engineering, AWS LLM.

Posted 6 days ago

Apply

6.0 - 10.0 years

15 - 25 Lacs

Bengaluru

Work from Office


Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. The AWS Data/API Gateway Pipeline Engineer is responsible for designing, building, and maintaining real-time, serverless data pipelines and API services. This role requires extensive hands-on experience with Java, Python, Redis, DynamoDB Streams, and PostgreSQL, along with working knowledge of AWS Lambda and AWS Glue for data processing and orchestration. This position involves collaboration with architects, backend developers, and DevOps engineers to deliver scalable, event-driven data solutions and secure API services across cloud-native systems.

Key Responsibilities
API & Backend Engineering: Build and deploy RESTful APIs using AWS API Gateway, Lambda, Java, and Python. Integrate backend APIs with Redis for low-latency caching and pub/sub messaging. Use PostgreSQL for structured data storage and transactional processing. Secure APIs using IAM, OAuth2, and JWT, and implement throttling and versioning strategies.
Data Pipeline & Streaming: Design and develop event-driven data pipelines using DynamoDB Streams to trigger downstream processing. Use AWS Glue to orchestrate ETL jobs for batch and semi-structured data workflows. Build and maintain Lambda functions to process real-time events and orchestrate data flows. Ensure data consistency and resilience across services, queues, and databases.
Cloud Infrastructure & DevOps: Deploy and manage cloud infrastructure using CloudFormation, Terraform, or AWS CDK. Monitor system health and service metrics using CloudWatch, SNS and structured logging. Contribute to CI/CD pipeline development for testing and deploying Lambda/API services.

So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
Bachelor's degree in computer science, engineering, or a related field. Over 6 years of experience in developing backend or data pipeline services using Java and Python.
Strong hands-on experience with: AWS API Gateway, Lambda, DynamoDB Streams; Redis (caching, messaging); PostgreSQL (schema design, tuning, SQL); AWS Glue for ETL jobs and data transformation. Solid understanding of REST API design principles, serverless computing, and real-time architecture.

Preferred Skills and Experience
Familiarity with Kafka, Kinesis, or other message streaming systems. Swagger/OpenAPI for API documentation. Docker and Kubernetes (EKS). Git and CI/CD tools (e.g., GitHub Actions). Experience with asynchronous event processing, retries, and dead-letter queues (DLQs). Exposure to data lake architectures (S3, Glue Data Catalog, Athena).

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
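To ground the DynamoDB Streams and Redis responsibilities described above, a minimal stream-triggered Lambda sketch follows; the orderId attribute and cache-key scheme are hypothetical, and the redis-py client is assumed to be packaged with the function:

```python
import os

import redis  # assumes redis-py is bundled in the deployment package/layer

cache = redis.Redis(host=os.environ["REDIS_HOST"], port=6379)

def lambda_handler(event, context):
    # A DynamoDB Stream delivers batches of records; INSERT/MODIFY events
    # carry the new item image when the stream is configured for it.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            image = record["dynamodb"].get("NewImage", {})
            # Attribute values arrive in DynamoDB's typed form, e.g. {"S": "..."}.
            order_id = image.get("orderId", {}).get("S")
            if order_id:
                # Invalidate the cached API response for this order.
                cache.delete(f"order:{order_id}")
    # Empty list signals no per-item failures (ReportBatchItemFailures mode).
    return {"batchItemFailures": []}
```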

Posted 1 week ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office


Roles and Responsibilities: Experience in AWS Glue. Experience with one or more of the following: Spark, Scala, Python, and/or R. Experience in API development with NodeJS. Experience with AWS (S3, EC2) or another cloud provider. Experience in data virtualization tools like Dremio and Athena is a plus. Should be technically proficient in Big Data concepts. Should be technically proficient in Hadoop and NoSQL (MongoDB). Good communication and documentation skills.

Posted 1 week ago

Apply

8.0 - 13.0 years

9 - 14 Lacs

Bengaluru

Work from Office


8+ years of combined experience in backend and data platform engineering roles. Worked on large-scale distributed systems. 5+ years of experience building data platforms with (one of) Apache Spark, Flink, or similar frameworks. 7+ years of experience programming with Java. Experience building large-scale data/event pipelines. Experience with relational SQL and NoSQL databases, including Postgres/MySQL, Cassandra, MongoDB. Demonstrated experience with EKS, EMR, S3, IAM, KDA, Athena, Lambda, networking, ElastiCache, and other AWS services.

Posted 1 week ago

Apply

4.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office


PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering. AWS cloud services (Lambda, Glue, S3, IAM): working with cloud-based data pipelines. Airflow, GitHub: essential for orchestration and version control in data workflows.
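A minimal Airflow DAG sketch (assuming Airflow 2.x) of the orchestration this stack implies; the DAG name and task body are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_transform():
    # Placeholder: a real task might trigger a Glue/PySpark job,
    # e.g. via boto3's glue.start_job_run.
    print("transform step")

with DAG(
    dag_id="daily_orders_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = PythonOperator(task_id="transform",
                               python_callable=run_transform)
```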

Posted 1 week ago

Apply

3.0 - 6.0 years

10 - 15 Lacs

Gurugram, Bengaluru

Work from Office


3+ years of experience in data science roles, working with tabular data in large-scale projects. Experience in feature engineering and working with methods such as XGBoost, LightGBM, factorization machines, and similar algorithms. Experience in adtech or fintech industries is a plus. Familiarity with clickstream data, predictive modeling for user engagement, or bidding optimization is highly advantageous. MS or PhD in mathematics, computer science, physics, statistics, electrical engineering, or a related field. Proficiency in Python (3.9+), with experience in scientific computing and machine learning tools (e.g., NumPy, Pandas, SciPy, scikit-learn, matplotlib, etc.). Familiarity with deep learning frameworks (such as TensorFlow or PyTorch) is a plus. Strong expertise in applied statistical methods, A/B testing frameworks, advanced experiment design, and interpreting complex experimental results. Experience querying and processing data using SQL and working with distributed data storage solutions (e.g., AWS Redshift, Snowflake, BigQuery, Athena, Presto, MinIO, etc.). Experience in budget allocation optimization, lookalike modeling, LTV prediction, or churn analysis is a plus. Ability to manage multiple projects, prioritize tasks effectively, and maintain a structured approach to complex problem-solving. Excellent communication and collaboration skills to work effectively with both technical and business teams.
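As a small illustration of the gradient-boosting work described above, a self-contained XGBoost sketch on synthetic tabular data follows (stand-in data, not a real pipeline):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for tabular clickstream features.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=6, learning_rate=0.1,
                      eval_metric="auc")
model.fit(X_tr, y_tr)

# Evaluate ranking quality on the held-out split.
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```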

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram

Work from Office


To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9 As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Role & responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions (see the sketch below) 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues. Preferred candidate profile: 1. Bachelor's degree in computer science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration. Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies
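To illustrate item 6 above, a hedged boto3 sketch that registers a two-step Step Functions workflow follows; the job name, role ARN, and state machine name are hypothetical:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Hypothetical workflow: run a Glue job synchronously, then succeed.
definition = {
    "StartAt": "RunGlueJob",
    "States": {
        "RunGlueJob": {
            "Type": "Task",
            # Service integration that waits for the Glue job to finish.
            "Resource": "arn:aws:states:::glue:startJobRun.sync",
            "Parameters": {"JobName": "orders-etl"},  # hypothetical job
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}

sfn.create_state_machine(
    name="orders-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/example-sfn-role",  # placeholder
)
```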

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram, Bengaluru

Work from Office


To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9 As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Role & responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues. Preferred candidate profile: 1. Bachelor's degree in computer science, Information Technology, or a related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration. Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


EPAM has a presence across 40+ countries globally, with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune and Bangalore). Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai. Work mode: Hybrid (2-3 days in office per week). Job Description: 5-14 years of experience in Big Data and related technologies. Expert-level understanding of distributed computing principles. Expert-level knowledge and experience in Apache Spark. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop. Experience with building stream-processing systems using technologies such as Apache Storm or Spark Streaming. Good understanding of Big Data querying tools such as Hive and Impala. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files. Good understanding of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, MongoDB. Knowledge of ETL techniques and frameworks. Performance tuning of Spark jobs. Experience with native AWS cloud data services. Ability to lead a team efficiently. Experience with designing and implementing Big Data solutions. Practitioner of Agile methodology. WE OFFER: Opportunity to work on technical challenges that may impact across geographies. Vast opportunities for self-development: online university, knowledge-sharing opportunities globally, learning opportunities through external certifications. Opportunity to share your ideas on international platforms. Sponsored tech talks and hackathons. Possibility to relocate to any EPAM office for short- and long-term projects. Focused individual development. Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits. Forums to explore passions beyond work (CSR, photography, painting, sports, etc.).
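As a sketch of the stream-processing experience this role asks for, a minimal Spark Structured Streaming job reading from Kafka follows; brokers, topic, and S3 paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Consume a hypothetical Kafka topic and keep the message payload as text.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
    .select(F.col("value").cast("string").alias("payload"))
)

# Continuously append the stream to Parquet with exactly-once checkpoints.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```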

Posted 1 week ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Chennai

Work from Office


Your Role: As a senior software engineer with Capgemini, you should have 4+ years of experience as a Snowflake Data Engineer with a strong project track record. In this role you will demonstrate: strong customer orientation, decision making, problem solving, communication and presentation skills; very good judgement skills and ability to shape compelling solutions and solve unstructured problems with assumptions; very good collaboration skills and ability to interact with multi-cultural and multi-functional teams spread across geographies; strong executive presence and spirit; superb leadership and team-building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority. Your Profile: 4+ years of experience in data warehousing and cloud data solutions. Minimum 2+ years of hands-on experience with end-to-end Snowflake implementation. Experience in developing data architecture and roadmap strategies, with knowledge to establish data governance and quality frameworks within Snowflake. Expertise or strong knowledge in Snowflake best practices, performance tuning, and query optimization. Experience with cloud platforms like AWS or Azure and familiarity with Snowflake's integration with these environments. Strong knowledge of at least one cloud (AWS or Azure) is mandatory. Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.
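For context on the hands-on Snowflake work described above, a minimal snowflake-connector-python sketch follows; account, credentials, and table names are hypothetical, and real deployments would pull secrets from a vault or use key-pair auth rather than inline values:

```python
import snowflake.connector

# Hypothetical connection parameters.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ETL_USER",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

cur = conn.cursor()
try:
    cur.execute("""
        SELECT order_date, COUNT(*) AS orders
        FROM orders
        GROUP BY order_date
        ORDER BY order_date DESC
        LIMIT 10
    """)
    for row in cur.fetchall():
        print(row)
finally:
    cur.close()
    conn.close()
```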

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Your Role: As a senior software engineer with Capgemini, you should have 4+ years of experience as an Azure Data Engineer with a strong project track record. In this role you will demonstrate: strong customer orientation, decision making, problem solving, communication and presentation skills; very good judgement skills and ability to shape compelling solutions and solve unstructured problems with assumptions; very good collaboration skills and ability to interact with multi-cultural and multi-functional teams spread across geographies; strong executive presence and spirit; superb leadership and team-building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority. Your Profile: Experience with Azure Databricks and Data Factory. Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, Synapse Analytics. Experience in Python/PySpark/Scala/Hive programming. Experience with Azure Databricks/ADB. Experience with building CI/CD pipelines in data environments. Primary Skills: ADF (Azure Data Factory) or ADB (Azure Databricks). Secondary Skills: Excellent verbal and written communication and interpersonal skills. Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Bigtable, GCP BigQuery, GCP Cloud Storage, GCP Dataflow, GCP Dataproc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Shell Script, Snowflake, Spark, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.
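A minimal Databricks PySpark sketch of the Delta-on-ADLS work this profile implies; the storage account, container, and table names are hypothetical, and `spark` is assumed to be provided by the Databricks runtime:

```python
from pyspark.sql import functions as F

# Hypothetical Delta table path on ADLS Gen2 (abfss scheme).
path = "abfss://curated@examplestorage.dfs.core.windows.net/sales/orders"

orders = spark.read.format("delta").load(path)

# Aggregate to a daily count and persist as a managed Delta table.
daily = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
    .count()
    .orderBy("order_date")
)
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_orders")
```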

Posted 1 week ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Hyderabad

Work from Office


Dear Candidate,

We are pleased to invite you to participate in the EY GDS face-to-face hiring event for the position of AWS Data Engineer.

Role: AWS Data Engineer
Experience Required: 5-8 Years
Location: Hyderabad
Mode of interview: Face to Face

JD - Technical Skills:
• Strong experience in AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark
• Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL
• Expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment, and go-live
• Experience with version control systems like SVN and Git
• Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion across various structured and unstructured data sources
• Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog
• Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance
• Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity)

Kindly confirm your availability by applying to this job.
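For illustration, a minimal boto3 sketch of creating and starting an AWS Glue crawler to automate data cataloging, as the JD describes; the crawler name, IAM role ARN, database name, S3 path, and region are hypothetical.

```python
# A minimal boto3 sketch (hypothetical names/ARNs): create a Glue crawler
# over an S3 prefix, then start it to populate the Glue Data Catalog.
import boto3

glue = boto3.client("glue", region_name="ap-south-1")  # region is an assumption

glue.create_crawler(
    Name="orders-crawler",                                  # hypothetical
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical
    DatabaseName="raw_orders_db",                           # hypothetical
    Targets={"S3Targets": [{"Path": "s3://my-bucket/raw/orders/"}]},
)
glue.start_crawler(Name="orders-crawler")
```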

Posted 1 week ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled, hands-on, and technically proficient Test Automation Engineering Manager with strong experience in data quality, data integration, and a specific focus on semantic layer validation. This role combines technical ownership of automated data testing solutions with team leadership responsibilities, ensuring that the data infrastructure across platforms remains accurate, reliable, and high-performing. As a leader in the QA and Data Engineering space, you will be responsible for building robust automated testing frameworks, validating GraphQL-based data layers, and driving the team's technical growth. Your work will ensure that all data flows, transformations, and API interactions meet enterprise-grade quality standards across the data lifecycle.

You will be responsible for the end-to-end design and development of test automation frameworks, working collaboratively with your team. As the delivery owner for test automation, your primary responsibilities will include building and automating comprehensive validation frameworks for semantic layer testing, GraphQL API validation, and schema compliance, ensuring alignment with data quality, performance, and integration reliability standards. You will also work closely with data engineers, product teams, and platform architects to validate data contracts and integration logic, supporting the integrity and trustworthiness of enterprise data solutions. This is a highly technical and hands-on role, with a strong emphasis on automation, data workflow validation, and the seamless integration of testing practices into CI/CD pipelines.

Roles & Responsibilities:
Design and implement robust data validation frameworks focused on the semantic layer, ensuring accurate data models, schema compliance, and contract adherence across services and platforms.
Build and automate end-to-end data pipeline validations across ingestion, transformation, and consumption layers using Databricks, Apache Spark, and AWS services such as S3, Glue, Athena, and Lake Formation.
Lead test automation initiatives by developing scalable, modular test frameworks and embedding them into CI/CD pipelines for continuous validation of semantic models, API integrations, and data workflows.
Validate GraphQL APIs by testing query/mutation structures, schema compliance, and end-to-end integration accuracy using tools like Postman, Python, and custom test suites; a sketch of this kind of validation follows this posting.
Oversee UI and visualization testing for tools like Tableau, Power BI, and custom front-end dashboards, ensuring consistency with backend data through Selenium with Python and backend validations.
Define and drive the overall QA strategy with emphasis on performance, reliability, and semantic data accuracy, while setting up alerting and reporting mechanisms for test failures, schema issues, and data contract violations.
Collaborate closely with product managers, data engineers, developers, and DevOps teams to align quality assurance initiatives with business goals and agile release cycles.
Actively contribute to architecture and design discussions, ensuring quality and testability are embedded from the earliest stages of development.
Mentor and manage QA engineers, fostering a collaborative environment focused on technical excellence, knowledge sharing, and continuous professional growth.

Must-Have Skills:
7 to 12 years of overall experience in test automation, including strong 6+ years of experience in DataOps/data testing; team leadership experience is also required.
Strong experience in designing and implementing test automation frameworks integrated with CI/CD pipelines.
Expertise in validating data pipelines at the syntactic layer, including schema checks, null/duplicate handling, and transformation validation.
Hands-on experience with Databricks, Apache Spark, and AWS services (S3, Glue, Athena, Lake Formation).
Proficiency in Python, PySpark, and SQL for writing validation scripts and automation logic.
Solid understanding of GraphQL APIs, including schema validation and query/mutation testing.
Experience with API testing tools like Postman and Python-based test frameworks.
Proficient in UI and visualization testing using Selenium with Python, especially for tools like Tableau, Power BI, or custom dashboards.
Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or GitLab CI for test orchestration.
Ability to implement alerting and reporting for test failures, anomalies, and validation issues.
Strong background in defining QA strategies and leading test automation initiatives in data-centric environments.
Excellent collaboration and communication skills, with the ability to work closely with cross-functional teams in Agile settings.

Good-to-Have Skills:
Experience with data governance tools such as Apache Atlas, Collibra, or Alation
Understanding of DataOps methodologies and practices
Contributions to internal quality dashboards or data observability systems
Awareness of metadata-driven testing approaches and lineage-based validations
Experience working with agile testing methodologies such as Scaled Agile
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest

Education and Professional Certifications:
Bachelor's/Master's degree in computer science or engineering preferred.

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills

EQUAL OPPORTUNITY STATEMENT
We provide reasonable accommodations for individuals with disabilities during the application and interview process, for job functions, and for employment benefits. Contact us to request an accommodation.
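For illustration, a minimal sketch of the GraphQL schema and data-contract validation this role centres on: a standard introspection query sent via Python requests, with assertions on an expected type and field. The endpoint, type name, and field names are hypothetical.

```python
# A minimal sketch (hypothetical endpoint and names): introspect a GraphQL
# schema and assert that an expected type and field exist.
import requests

GRAPHQL_URL = "https://api.example.com/graphql"  # hypothetical

INTROSPECTION = """
{ __type(name: "Order") { name fields { name } } }
"""

resp = requests.post(GRAPHQL_URL, json={"query": INTROSPECTION}, timeout=30)
resp.raise_for_status()
order_type = resp.json()["data"]["__type"]

assert order_type is not None, "Order type missing from schema"
field_names = {f["name"] for f in order_type["fields"]}
assert "totalAmount" in field_names, "data contract violated: totalAmount missing"
```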

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office


ABOUT THE ROLE
We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities:
Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation.
Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification.
Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines; a sketch of such checks follows this posting.
Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection.
Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations.
Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data.
Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution.
Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices.
Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption.
Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture.
Experience building or maintaining test data generators
Contributions to internal quality dashboards or data observability systems
Awareness of metadata-driven testing approaches and lineage-based validations
Experience working with agile testing methodologies such as Scaled Agile
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest

Must-Have Skills:
6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation.
Strong in SQL, Python, and optionally PySpark; comfortable writing complex queries and validation scripts.
Practical experience with manual validation of data pipelines and source-to-target testing.
Experience in validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance.
Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery.
Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation.
Experience working in Agile/Scrum environments with standard QA processes.
Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr).
Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation.
Proven experience in manual and automated testing of batch and real-time data pipelines.
Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift).
Experience with bug tracking and test management tools like JIRA, TestRail, or Zephyr.
Ability to troubleshoot data issues independently and collaborate with engineering on root cause analysis.
Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions).
Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro.
Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation.
Hands-on experience with API testing using Postman, pytest, or custom automation scripts.

Good-to-Have Skills:
Experience with data governance tools such as Apache Atlas, Collibra, or Alation
Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch

Education and Professional Certifications:
Bachelor's/Master's degree in computer science or engineering preferred.

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills
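For illustration, a minimal PySpark sketch of the automated data-quality checks listed above (null checks, duplicate checks, and a row-count threshold); the table path, column names, and threshold are hypothetical.

```python
# A minimal PySpark sketch (hypothetical path/columns) of automated data
# quality checks: nulls, duplicates, and a row-count threshold.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
df = spark.read.format("delta").load("/mnt/curated/orders")  # hypothetical path

failures = []
if df.filter(F.col("order_id").isNull()).count() > 0:
    failures.append("null order_id values found")
if df.groupBy("order_id").count().filter(F.col("count") > 1).count() > 0:
    failures.append("duplicate order_id values found")
if df.count() < 1000:  # the threshold is an assumed business rule
    failures.append("row count below expected threshold")

assert not failures, f"data quality checks failed: {failures}"
```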

Posted 1 week ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Work from Office


Role Description:
We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities:
Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation.
Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification.
Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines.
Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection.
Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations.
Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data.
Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution.
Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices.
Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption.
Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture.
Experience building or maintaining test data generators
Contributions to internal quality dashboards or data observability systems
Awareness of metadata-driven testing approaches and lineage-based validations
Experience working with agile testing methodologies such as Scaled Agile
Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest

Must-Have Skills:
6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation.
Strong in SQL, Python, and optionally PySpark; comfortable writing complex queries and validation scripts.
Practical experience with manual validation of data pipelines and source-to-target testing.
Experience in validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance.
Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery.
Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation.
Experience working in Agile/Scrum environments with standard QA processes.
Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr).
Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation.
Proven experience in manual and automated testing of batch and real-time data pipelines.
Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift).
Experience with bug tracking and test management tools like JIRA, TestRail, or Zephyr.
Ability to troubleshoot data issues independently and collaborate with engineering on root cause analysis.
Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions).
Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro.
Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation.
Hands-on experience with API testing using Postman, pytest, or custom automation scripts; a sketch of this kind of test follows this posting.

Good-to-Have Skills:
Experience with data governance tools such as Apache Atlas, Collibra, or Alation
Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch

Education and Professional Certifications:
Bachelor's/Master's degree in computer science or engineering preferred.

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills
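For illustration, a minimal pytest sketch of the API testing this posting mentions: validating response status, schema keys, and basic types against an assumed data contract; the endpoint and contract are hypothetical.

```python
# A minimal pytest sketch (hypothetical endpoint and contract) of API
# response validation: status code, expected keys, and basic types.
import requests

API_URL = "https://api.example.com/v1/orders/123"  # hypothetical

def test_order_endpoint_contract():
    resp = requests.get(API_URL, timeout=30)
    assert resp.status_code == 200
    body = resp.json()
    # Assumed contract: keys and types downstream consumers expect.
    assert {"order_id", "amount", "currency"} <= body.keys()
    assert isinstance(body["amount"], (int, float))
    assert body["currency"] in {"INR", "USD"}
```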

Posted 1 week ago

Apply

11.0 - 16.0 years

27 - 32 Lacs

Hyderabad

Work from Office


Director - Portfolio Operations Delivery

What you will do

Let's do this. Let's change the world. In this vital role, the Director, Portfolio Effectiveness and Optimization Results Delivery within the Customer Data & Analytics team is accountable for coordinating our delivery efforts across the internal and external teams located in AIN and across India. In addition, the Director must manage relationships across a complex internal set of teams and functional groups. This position reports to the Associate Vice President, Portfolio Effectiveness and Optimization, and will be responsible for the following.

Responsibilities

Key Integrator: Act as the main point of contact and representative of the Portfolio Effectiveness and Optimization team in India
Talent Development: Hire, train, develop, and manage talent to meet organizational needs
Global Collaboration: Act as the primary point of contact for PE&O senior leadership in the US and the offshore team in India, either through our contract teams or direct AIN FTEs
Operational Excellence and Delivery: Oversee end-to-end delivery of core data and analytics projects, ensuring quality, scalability, and operational efficiency, while promoting standard processes in data governance and analytics methodologies
Offshore Vendor Management: Manage offshore teams, including CWs, maintaining quality of service and timely deliverables
Innovation Leadership: Foster a culture of innovation, ensuring the India team remains at the forefront of emerging technologies and trends in analytics and AI
Business Impact & Collaborator Management: Ensure analytics solutions drive tangible business outcomes and collaborate with global key collaborators to refine requirements, measure impact, and report progress
Financial Management: Oversee the PE&O budget associated with offshore work in India, ensuring best negotiated rates and overall value

What we expect of you

We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Doctorate degree and 4 years of experience in statistics, operations research, mathematics, econometrics, business administration or another quantitative field OR
Master's degree and 14 to 16 years of experience in statistics, operations research, mathematics, econometrics, business administration or another quantitative field OR
Bachelor's degree and 16 to 18 years of experience in statistics, operations research, mathematics, econometrics, business administration or another quantitative field
Managerial experience directly handling people, and/or leadership experience leading teams, projects, or programs, or directing the allocation of resources

Preferred Qualifications:
Relevant data science certifications and bio/pharmaceutical industry experience
8+ years of innovative Data Science/Advanced Analytics leadership experience
Experience in AI, machine learning, quantitative methods, multivariate statistics, predictive modelling and other analytics frameworks/techniques, with 10+ years of experience delivering complex analytical projects
Minimum 5 years of professional experience in Amazon Web Services (RedShift, S3, Athena, etc.) and industry-standard data warehousing technologies (Snowflake, Spark, Airflow, etc.)
Advanced proficiency and hands-on coding experience in Python/R/Scala/Java or any other object-oriented programming language; ETL using SQL/shell scripting
Experience in successfully delivering AI/ML-based Next Best Action recommendation engines that optimize against desired objective function(s)
Expertise in setting up and measuring randomized controlled trials, cohort studies, and matched case-control studies; a sketch of such a measurement appears after this posting
Comprehensive understanding of the components of setting up data models and running scenario planning that match the business need
Experience in setting up processes for data ingestion, quality checks, etc.
Thorough understanding of tagging, Google Analytics, CRM, content management systems, and other components of a digital marketing ecosystem
Leadership experience in building and developing dedicated teams, delivering results, and shaping the future
Ability to foster and encourage an environment of openness and transparency in seeking diverse opinions, and to empower risk-taking in idea generation, idea incubation and/or experimentation
The ideal candidate will lead the creation of an analytics-driven culture that drives top-line growth, controls costs, and takes timely corrective action to reduce risks that derail plans
Ability to think strategically about issues impacting an entire portfolio of therapeutics across geographies and stages of development
Experience managing multiple senior key collaborators, prioritizing across a multitude of responsibilities, and allocating resources to drive maximum impact
Partners with business leaders to deliver high-quality predictions that guide strategic decision-making
Oral, written and presentation skills to explain complex concepts and controversial findings clearly to a variety of audiences, including senior management
Comfortable challenging the status quo and bringing forward innovative solutions
Ability to identify areas for process and systems innovation and implement change that will enhance the overall effectiveness of the team
Comfortable working through and leading large-scale global change management
Understanding of technology platforms and the ability to partner with IS/IT and business leaders

What you can expect of us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
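For illustration, a minimal sketch of measuring a randomized controlled trial of the kind referenced in the preferred qualifications: a Welch's two-sample t-test on synthetic test and control outcomes using scipy; all data here is synthetic.

```python
# A minimal sketch (synthetic data): compare test vs. control outcomes
# from a randomized experiment with a Welch's two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=100.0, scale=15.0, size=500)  # synthetic outcomes
test = rng.normal(loc=104.0, scale=15.0, size=500)     # synthetic uplift

t_stat, p_value = stats.ttest_ind(test, control, equal_var=False)  # Welch's test
print(f"lift = {test.mean() - control.mean():.2f}, p = {p_value:.4f}")
```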

Posted 1 week ago

Apply