6.0 - 10.0 years
15 - 25 Lacs
Chennai
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As an AWS Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.

Key Responsibilities:
1. Data Pipeline Design & Development: Design and develop scalable, resilient, and secure ETL/ELT data pipelines using AWS services. Build and optimize data workflows leveraging AWS Glue, EMR, Lambda, and Step Functions. Implement batch and real-time data ingestion using Kafka, Kinesis, or AWS Data Streams. Ensure efficient data movement across S3, Redshift, DynamoDB, RDS, and Snowflake.
2. Cloud Data Engineering & Storage: Architect and manage data lakes and data warehouses using Amazon S3, Redshift, and Athena. Optimize data storage and retrieval using Parquet, ORC, Avro, and columnar storage formats. Implement data partitioning, indexing, and query performance tuning. Work with NoSQL databases (DynamoDB, MongoDB) and relational databases (PostgreSQL, MySQL, Aurora).
3. Infrastructure as Code (IaC) & Automation: Deploy and manage AWS data infrastructure using Terraform, AWS CloudFormation, or AWS CDK. Implement CI/CD pipelines for automated data pipeline deployments using GitHub Actions, Jenkins, or AWS CodePipeline. Automate data workflows and job orchestration using Apache Airflow, AWS Step Functions, or MWAA.
4. Performance Optimization & Monitoring: Optimize Spark, Hive, and Presto queries for performance and cost efficiency. Implement auto-scaling strategies for AWS EMR clusters. Set up monitoring, logging, and alerting with AWS CloudWatch, CloudTrail, and Prometheus/Grafana.
5. Security, Compliance & Governance: Implement IAM policies, encryption (AWS KMS), and role-based access controls. Ensure compliance with GDPR, HIPAA, and industry data governance standards. Monitor data pipelines for security vulnerabilities and unauthorized access.
6. Collaboration & Stakeholder Engagement: Work closely with data analysts, data scientists, and business teams to understand data needs. Document data pipeline designs, architecture decisions, and best practices. Mentor and guide junior data engineers on AWS best practices and optimization techniques.

Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset – a true data alchemist.
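For illustration only, here is a minimal sketch of the kind of AWS Glue ETL job described in the responsibilities above: it reads a raw dataset registered in the Glue Data Catalog, maps a few columns, and writes partitioned Parquet back to S3. The database, table, and bucket names are placeholders, not actual Kyndryl or client systems.

```python
# Illustrative AWS Glue job sketch: read raw data from the Glue Data Catalog,
# apply a simple column mapping, and write partitioned Parquet to the data lake.
# Database, table, and bucket names below are placeholders.
import sys
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw dataset registered in the Glue Data Catalog (placeholder names)
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders_csv"
)

# Keep and retype only the columns needed downstream
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_ts", "string", "order_ts", "timestamp"),
        ("amount", "double", "amount", "double"),
        ("region", "string", "region", "string"),
    ],
)

# Columnar, partitioned output keeps downstream Athena/Redshift Spectrum scans cheap
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/orders/",
        "partitionKeys": ["region"],
    },
    format="parquet",
)
job.commit()
```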
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
7+ years of experience in data engineering with a focus on AWS cloud technologies. Expertise in AWS Glue, Lambda, EMR, Redshift, Kinesis, and Step Functions. Proficiency in SQL, Python, Java, and PySpark for data transformations. Strong understanding of ETL/ELT best practices and data warehousing concepts. Experience with Apache Airflow or Step Functions for orchestration. Familiarity with Kafka, Kinesis, or other streaming platforms. Knowledge of Terraform, CloudFormation, and DevOps for AWS. Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes. Experience in data pipeline development and tooling, such as Glue, Databricks, Synapse, or Dataproc. Experience with both relational and NoSQL databases, including PostgreSQL, DB2, and MongoDB. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously while maintaining attention to detail. Communication skills: ability to communicate with both technical and non-technical colleagues to derive technical requirements from business needs and problems.

Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics.

Being You
Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey.
Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 2 weeks ago
7.0 - 9.0 years
20 - 25 Lacs
Hyderabad, Bengaluru
Work from Office
Immediate Joiners Only

Role & Responsibilities
6+ years of experience with Snowflake (Snowpipe, Streams, Tasks)
Strong proficiency in SQL for high-performance data transformations
Hands-on experience building ELT pipelines using cloud-native tools
Proficiency in dbt for data modeling and workflow automation
Python skills (Pandas, PySpark, SQLAlchemy) for data processing
Experience with orchestration tools like Airflow or Prefect
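As a purely illustrative sketch of the Snowflake Streams and Tasks pattern mentioned above, the snippet below drives a simple stream-to-task merge from Python using the Snowflake connector. The account, warehouse, schema, and table names are placeholders, and credentials would normally come from a secrets manager rather than literals.

```python
# Illustrative Snowflake Streams/Tasks sketch driven from Python.
# Account, warehouse, database, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",        # placeholder account identifier
    user="ETL_USER",
    password="***",           # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cur = conn.cursor()

# Capture changes on the raw table with a stream
cur.execute("CREATE STREAM IF NOT EXISTS ORDERS_STREAM ON TABLE RAW_ORDERS")

# A task that merges new rows from the stream into the curated table on a schedule
cur.execute("""
CREATE OR REPLACE TASK MERGE_ORDERS
  WAREHOUSE = TRANSFORM_WH
  SCHEDULE = '15 MINUTE'
WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
AS
  MERGE INTO CURATED.ORDERS t
  USING ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
  WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT
  WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) VALUES (s.ORDER_ID, s.AMOUNT)
""")

# Tasks are created suspended; resume to start the schedule
cur.execute("ALTER TASK MERGE_ORDERS RESUME")

cur.close()
conn.close()
```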
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Supply Chain Data Integration Consultant - Senior

The opportunity
We're looking for Senior Level Consultants with expertise in Data Modelling, Data Integration, Data Manipulation, and analysis to join the Supply Chain Technology group of our GDS consulting Team. This is a fantastic opportunity to be part of a leading firm while being instrumental in the growth of a new service offering. This role demands a highly technical, extremely hands-on Data Warehouse Modelling consultant who will work closely with our EY Partners and external clients to develop new business as well as drive other initiatives on different business needs. The ideal candidate must have a good understanding of the value of data warehouse and ETL with Supply Chain industry knowledge and proven experience in delivering solutions to different lines of business and technical leadership.

Your key responsibilities
A minimum of 5+ years of experience in BI/Data integration/ETL/DWH solutions in cloud and on-premises platforms such as Informatica/PC/IICS/Alteryx/Talend/Azure Data Factory (ADF)/SSIS/SSAS/SSRS and experience on any reporting tool like Power BI, Tableau, OBIEE, etc. Performing Data Analysis and Data Manipulation as per client requirements. Expert in Data Modelling to simplify business concepts. Create extensive ER Diagrams to help business in decision-making. Working experience with large, heterogeneous datasets in building and optimizing data pipelines, pipeline architectures, and integrated datasets using data integration technologies. Should be able to develop sophisticated workflows & macros (Batch, Iterative, etc.) in Alteryx with enterprise data. Design and develop ETL workflows and datasets in Alteryx to be used by the BI Reporting tool. Perform end-to-end Data validation to maintain the accuracy of data sets. Support client needs by developing SSIS Packages in Visual Studio (version 2012 or higher) or Azure Data Factory (Extensive hands-on experience implementing data migration and data processing using Azure Data Factory). Support client needs by delivering Various Integrations with third-party applications. Experience in pulling data from a variety of data source types using appropriate connection managers as per Client needs. Develop, Customize, Deploy, maintain SSIS packages as per client business requirements. Should have thorough knowledge in creating dynamic packages in Visual Studio with multiple concepts such as reading multiple files, Error handling, Archiving, Configuration creation, Package Deployment, etc. Experience working with clients throughout various parts of the implementation lifecycle. Proactive with a Solution-oriented mindset, ready to learn new technologies for Client requirements. Analyzing and translating business needs into long-term solution data models. Evaluating existing Data Warehouses or Systems. Strong knowledge of database structure systems and data mining.

Skills and attributes for success
Deliver large/medium DWH programs, demonstrate expert core consulting skills and an advanced level of Informatica, SQL, PL/SQL, Alteryx, ADF, SSIS, Snowflake, Databricks knowledge, and industry expertise to support delivery to clients.
Demonstrate management and an ability to lead projects or teams individually. Experience in team management, communication, and presentation.

To qualify for the role, you must have
5+ years of ETL experience as Lead/Architect. Expertise in ETL Mappings and Data Warehouse concepts. Should be able to design a Data Warehouse and present solutions as per client needs. Thorough knowledge of Structured Query Language (SQL) and experience working on SQL Server. Experience in SQL tuning and optimization using explain plan and SQL trace files. Should have experience in developing SSIS Batch Jobs Deployment, Scheduling Jobs, etc. Building Alteryx workflows for data integration, modeling, optimization, and data quality. Knowledge of Azure components like ADF, Azure Data Lake, and Azure SQL DB. Knowledge of data modeling and ETL design. Design and develop complex mappings, Process Flows, and ETL scripts. In-depth experience in designing the database and data modeling.

Ideally, you'll also have
Strong knowledge of ELT/ETL concepts, design, and coding. Expertise in data handling to resolve any data issues as per client needs. Experience in designing and developing DB objects such as Tables, Views, Indexes, Materialized Views, and Analytical functions. Experience creating complex SQL queries for retrieving, manipulating, checking, and migrating complex datasets in DB. Experience in SQL tuning and optimization using explain plan and SQL trace files. Candidates should ideally have good knowledge of ETL technologies/tools such as Alteryx, SSAS, SSRS, Azure Analysis Services, and Azure Power Apps. Good verbal and written communication in English, with strong interpersonal, analytical, and problem-solving abilities. Experience interacting with customers to understand business requirement documents and translate them into ETL specifications and High- and Low-level design documents. Candidates with additional knowledge of BI tools such as Power BI, Tableau, etc. will be preferred. Experience with Cloud databases and multiple ETL tools.

What we look for
The incumbent should be able to drive ETL infrastructure-related developments. Additional knowledge of complex source system data structures, preferably in the Financial services industry, and reporting-related developments will be an advantage. An opportunity to be a part of a market-leading, multi-disciplinary team of 10,000+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY GDS consulting practices globally with leading businesses across a range of industries.

What working at EY offers
At EY, we're dedicated to helping our clients, from startups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching, and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that's right for you.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
6.0 - 11.0 years
18 - 32 Lacs
Hyderabad
Hybrid
Job Title: Senior Data Engineer - Python, PySpark, AWS
Experience Required: 6 to 12 Years
Location: Hyderabad
Job Type: Full Time / Permanent

Job Description:
We are looking for a passionate and experienced Senior Data Engineer to join our team in Hyderabad. The ideal candidate should have a strong background in data engineering on AWS, with hands-on expertise in Python, PySpark, and AWS services to build and maintain scalable data pipelines and ETL workflows.

Mandatory Skills:
Data Engineering, Python, PySpark, AWS Services (S3, Glue, Lambda, Redshift, RDS, EC2, Data Pipeline)

Key Responsibilities:
Design and implement robust, scalable data pipelines using PySpark, AWS Glue, and AWS Data Pipeline. Develop and maintain efficient ETL workflows to handle large-scale data processing. Automate data workflows and job orchestration using AWS Data Pipeline. Ensure smooth data integration across services like S3, Redshift, and RDS. Optimize data processing for performance and cost efficiency on the cloud. Work with various file formats like CSV, Parquet, and Avro.

Technical Requirements:
8+ years of experience in Data Engineering, particularly in cloud-based environments. Proficient in Python and PySpark for data transformation and manipulation. Strong experience with AWS Glue for ETL development, Data Catalog, and Crawlers. Solid knowledge of SQL for querying structured and semi-structured data. Familiar with Data Lake architectures, Amazon EMR, and Kinesis. Experience with Docker, Git, and CI/CD pipelines for deployment and versioning.

Interested candidates can also share their CV at akanksha.s@esolglobal.com
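As a hedged illustration of the file-format handling listed above, the sketch below reads raw CSV from S3 with PySpark, derives a partition column, and writes partitioned Parquet. The bucket paths and column names are placeholders.

```python
# Illustrative PySpark sketch: convert raw CSV on S3 to partitioned Parquet.
# Bucket paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("csv_to_parquet").getOrCreate()

# Read raw CSV landed in the ingestion bucket
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-raw-bucket/events/")
)

# Derive an event_date partition column and drop obviously bad rows
cleaned = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Columnar, partitioned output keeps downstream Athena/Redshift scans cheap
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events_parquet/")
)
```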
Posted 3 weeks ago
5.0 - 10.0 years
10 - 12 Lacs
Navi Mumbai
Work from Office
Hello Candidates, We are Hiring!!

Job Position: Data Engineer
Experience: 5+ years
Location: Navi Mumbai (Juinagar)
Work Mode: WFO

Job Description
We are looking for an experienced and results-driven Senior Data Engineer to join our Data Engineering team. In this role, you will design, develop, and maintain robust data pipelines and infrastructure that enable efficient data flow across our systems. As a senior contributor, you will also help define best practices, mentor junior team members, and contribute to the long-term vision of our data platform. You will work closely with cross-functional teams to deliver reliable, scalable, and high-performance data systems that support critical business intelligence and analytics initiatives.

Responsibilities
Design, build, and maintain scalable ETL/ELT pipelines to support analytics, Data Warehouse, and business operations. Collaborate with cross-functional teams to gather requirements and deliver high-quality data solutions. Develop and manage data models, data lakes, and data warehouse solutions in cloud environments (e.g., AWS, Azure, GCP). Monitor and optimize the performance of data pipelines and storage systems. Ensure data quality, integrity, and security across all platforms. Optimize and tune SQL queries and ETL jobs for performance and scalability. Collaborate with business analysts, data scientists, and stakeholders to understand requirements and deliver data solutions. Contribute to architectural decisions and development standards across the data engineering team. Participate in code reviews and provide guidance to junior developers. Leverage tools such as Airflow, Spark, Kafka, dbt, or Snowflake to build modern data infrastructure. Ensure data accuracy, completeness, and integrity across systems. Implement best practices in data governance, security, and compliance (e.g., GDPR, HIPAA). Mentor junior developers and participate in peer code reviews. Create and maintain detailed technical documentation.

Required Qualifications
Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree is a plus. 5+ years of experience in data warehousing, ETL development, and data modeling. Strong hands-on experience with one or more databases: Snowflake, Redshift, SQL Server, Oracle, Postgres, Teradata, BigQuery. Proficiency in SQL and scripting languages (e.g., Python, Shell). Deep knowledge of data modeling techniques and ETL frameworks. Excellent communication, analytical thinking, and troubleshooting skills.

Preferred Qualifications
Experience with modern data stack tools like dbt, Fivetran, Stitch, Looker, Tableau, or Power BI. Knowledge of data lakes, lakehouses, and real-time data streaming (e.g., Kafka). Agile/Scrum project experience and version control using Git.

NOTE: Candidates can share their resume at shruti.a@talentsketchers.com
Posted 3 weeks ago
5.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Remote
Position Responsibilities:
As a part of the Data Warehouse Team, implement technology improvements to the Enterprise Data Warehouse environment. In this role, you will be a key contributor to the implementation of Snowflake with DBT, as part of our migration from Oracle to a cloud-based data warehouse. Collaborate closely with cross-functional teams to design, develop, and optimize ELT processes within a cloud-based Data Warehouse environment. Develop and maintain Fivetran data pipelines to ensure smooth data extraction and loading from various source systems into Snowflake. Implement and enhance ETL programs using Informatica PowerCenter against the Data Warehouse and Adobe Campaign (Neolane) databases. Contribute to technical architectural planning, digital data modeling, process flow documentation, and the design and development of innovative Digital business solutions. Create technical designs and mapping specifications. Work with both technical staff and business constituents to translate Digital business requirements into technical solutions. Estimate workload and participate in an Agile project team approach. Proven individual contributor and team player with strong communication skills. Ability to lead, manage, and validate workload for up to 2 offshore developers. Provide on-call support of the Data Warehouse nightly processing. Be an active participant in both technology and business initiatives.

Position Requirements & Qualifications:
At least 8 years of experience supporting Data Warehouse and data-related environments. Conduct efficient data integration with other third-party tools and Snowflake. Hands-on experience with Snowflake development. Familiarity with cloud-based Data Warehousing solutions, particularly Snowflake. Advanced experience required in Informatica PowerCenter (5+ years). Ability to code in Python and JavaScript. Knowledge of data governance practices and data security considerations in a cloud environment. Experience in working with Web services using Informatica for external vendor data integration. Experience in working with a number of XML data sources and API calls. Solid experience in performance tuning ETL jobs and database queries. Advanced Oracle and Snowflake database skills including packages, procedures, indexing, and query tuning (5+ years). Solid understanding of Data Warehouse design theory, including dimensional data modeling. Working experience with cloud computing architecture. Experience in working with Azure DevOps, Jira, TFS (Team Foundation Server), or other similar Agile project management tools. Ability to thrive in change by having a fast, flexible, cooperative work style and the ability to reprioritize at a moment's notice. Bachelor's degree required.

Notice Period: 0-15 days.
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Hybrid
We are seeking a Lead Snowflake Engineer. The ideal candidate will bring deep technical expertise in Snowflake, hands-on experience with DBT (Data Build Tool), and a collaborative mindset for working across data, analytics, and business teams.
Posted 3 weeks ago
6.0 - 11.0 years
7 - 17 Lacs
Gurugram
Work from Office
We rely heavily on BigQuery/Snowflake, Airflow, Stitch/Fivetran, dbt, and Tableau/Looker for our business intelligence, and embrace AWS with some GCP. As a Data Engineer, you will develop end-to-end ETL/ELT pipelines.
Posted 3 weeks ago
6.0 - 10.0 years
18 - 33 Lacs
Pune
Work from Office
Must-have skills: Snowflake, DBT, SQL
Notice period: Immediate to 15 days
Posted 3 weeks ago
5.0 - 8.0 years
4 - 7 Lacs
Bengaluru
Work from Office
About The Role
Skill required: Delivery - Marketing Analytics and Reporting
Designation: I&F Decision Sci Practitioner Sr Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years

About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com

What would you do
Data & AI: Analytical processes and technologies applied to marketing-related data to help businesses understand and deliver relevant experiences for their audiences, understand their competition, measure and optimize marketing campaigns, and optimize their return on investment.

What are we looking for
Data Analytics, with a specialization in the marketing domain.
Domain-specific skills: Familiarity with ad tech and B2B sales.
Technical skills: Proficiency in SQL and Python. Experience in efficiently building, publishing, and maintaining robust data models and warehouses for self-serve querying, advanced data science, and ML analytic purposes. Experience in conducting ETL/ELT with very large and complicated datasets and handling DAG data dependencies. Strong proficiency with SQL dialects on distributed or data lake style systems (Presto, BigQuery, Spark/Hive SQL, etc.), including SQL-based experience in nested data structure manipulation, windowing functions, query optimization, data partitioning techniques, etc. Knowledge of Google BigQuery optimization is a plus. Experience in schema design and data modeling strategies (e.g. dimensional modeling, data vault, etc.). Significant experience with dbt (or similar tools) and Spark-based (or similar) data pipelines. General knowledge of Jinja templating in Python. Hands-on experience with cloud provider integration and automation via CLIs and APIs.
Soft skills: Ability to work well in a team. Agility for quick learning. Written and verbal communication.

Roles and Responsibilities:
In this role you are required to analyze and solve increasingly complex problems. Your day-to-day interactions are with peers within Accenture. You are likely to have some interaction with clients and/or Accenture management. You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments. Decisions that are made by you impact your own work and may impact the work of others. In this role you would be an individual contributor and/or oversee a small work effort and/or team. Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation
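As an illustrative sketch of the windowing-function and BigQuery work described above, the snippet below runs a rolling-spend window query from Python with the google-cloud-bigquery client. The project, dataset, and table names are placeholders.

```python
# Illustrative BigQuery windowing-query sketch from Python.
# Project, dataset, and table names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
SELECT
  campaign_id,
  event_date,
  spend,
  SUM(spend) OVER (
    PARTITION BY campaign_id
    ORDER BY event_date
    ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
  ) AS spend_7d_rolling
FROM `example-project.marketing.campaign_daily`
WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
"""

# Run the query and print the rolling 7-day spend per campaign
for row in client.query(query).result():
    print(row.campaign_id, row.event_date, row.spend_7d_rolling)
```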
Posted 3 weeks ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As an AWS Data Engineer at Kyndryl, you will be responsible for designing, building, and maintaining scalable, secure, and high-performing data pipelines using AWS cloud-native services. This role requires extensive hands-on experience with both real-time and batch data processing, expertise in cloud-based ETL/ELT architectures, and a commitment to delivering clean, reliable, and well-modeled datasets.

Key Responsibilities:
Design and develop scalable, secure, and fault-tolerant data pipelines utilizing AWS services such as Glue, Lambda, Kinesis, S3, EMR, Step Functions, and Athena. Create and maintain ETL/ELT workflows to support both structured and unstructured data ingestion from various sources, including RDBMS, APIs, SFTP, and Streaming. Optimize data pipelines for performance, scalability, and cost-efficiency. Develop and manage data models, data lakes, and data warehouses on AWS platforms (e.g., Redshift, Lake Formation). Collaborate with DevOps teams to implement CI/CD and infrastructure as code (IaC) for data pipelines using CloudFormation or Terraform. Ensure data quality, validation, lineage, and governance through tools such as AWS Glue Data Catalog and AWS Lake Formation. Work in concert with data scientists, analysts, and application teams to deliver data-driven solutions. Monitor, troubleshoot, and resolve issues in production pipelines. Stay abreast of AWS advancements and recommend improvements where applicable.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Over 8 years of experience in data engineering. More than 3 years of experience with the AWS data ecosystem. Strong experience with Java, PySpark, SQL, and Python. Proficiency in AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, CloudWatch, Athena, Step Functions. Familiarity with data modelling concepts, dimensional models, and data lake architectures. Experience with CI/CD, GitHub Actions, CloudFormation/Terraform. Understanding of data governance, privacy, and security best practices. Strong problem-solving and communication skills.

Preferred Skills and Experience
Experience working as a Data Engineer and/or in cloud modernization. Experience with AWS Lake Formation and Data Catalog for metadata management. Knowledge of Databricks, Snowflake, or BigQuery for data analytics. AWS Certified Data Engineer or AWS Certified Solutions Architect is a plus. Strong problem-solving and analytical thinking. Excellent communication and collaboration abilities. Ability to work independently and in agile teams. A proactive approach to identifying and addressing challenges in data workflows.

Being You
Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 3 weeks ago
7.0 - 12.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Skill: Data Engineer
Experience: 7+ Years
Location: Warangal, Bangalore, Chennai, Hyderabad, Mumbai, Pune, Delhi, Noida, Gurgaon, Kolkata, Jaipur, Jodhpur
Notice Period: Immediate - 15 Days

Job Description:
Design & Build Data Pipelines: Develop scalable ETL/ELT workflows to ingest, transform, and load data into Snowflake using SQL, Python, or data integration tools.
Data Modeling: Create and optimize Snowflake schemas, tables, views, and materialized views to support business analytics and reporting needs.
Performance Optimization: Tune Snowflake compute resources (warehouses), optimize query performance, and manage clustering and partitioning strategies.
Data Quality & Validation
Security & Access Control
Automation & CI/CD
Monitoring & Troubleshooting
Documentation
Posted 3 weeks ago
7.0 - 10.0 years
30 - 40 Lacs
Noida
Hybrid
Insurance Domain Expert - Data Migration || 7+ years || Noida
Years of experience: 7+ years
Work mode: Hybrid

Overview:
Seeking an experienced Insurance Domain Expert to lead data migration projects within the organization. The ideal candidate will have a deep understanding of the insurance industry, data management principles, and hands-on experience in executing successful data migration initiatives.

Key Responsibilities:
1. Industry Expertise: Provide insights into best practices within the insurance domain to ensure compliance and enhance data quality. Stay updated on regulatory changes affecting the insurance industry that may impact data processing and migration.
2. Data Migration Leadership: Plan, design, and implement comprehensive data migration strategies to facilitate smooth transitions between systems. Oversee the entire data migration process, including data extraction, cleaning, transformation, and loading (ETL/ELT).
3. Collaboration and Communication: Liaise between technical teams and business stakeholders to ensure alignment of migration objectives with business goals. Prepare and present progress reports and analytical findings to management and cross-functional teams.
4. Risk Management: Identify potential data migration risks and develop mitigation strategies. Conduct thorough testing and validation of migrated data to ensure accuracy and integrity.
5. Training and Support: Train team members and clients on new systems and data handling processes post-migration. Provide ongoing support and troubleshooting for data-related issues.

Qualifications:
Bachelor's degree in Information Technology, Computer Science, or a related field; advanced degree preferred. Minimum of 7-10 years of experience in the insurance domain with a focus on data migration projects. Strong knowledge of insurance products, underwriting, claims, and regulatory requirements. Proficient in data migration tools and techniques, with experience in ETL processes. Excellent analytical and problem-solving skills with keen attention to detail. Strong communication and presentation skills to interact with various stakeholders.
Posted 3 weeks ago
3.0 - 8.0 years
5 - 8 Lacs
Mumbai
Work from Office
Role Overview:
Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment.

Key Responsibilities:
* Design, schedule, and monitor DAGs for ETL/ELT pipelines
* Integrate Airflow with Cloudera services and external APIs
* Implement retries, alerts, logging, and failure recovery
* Collaborate with data engineers and DevOps teams

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise (Skills Required):
* Experience: 3-8 years
* Expertise in Airflow 2.x, Python, Bash
* Knowledge of CI/CD for Airflow DAGs
* Proven experience with Cloudera CDP, Spark/Hive-based data pipelines
* Integration with Kafka, REST APIs, databases
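For illustration, here is a minimal Airflow 2.x DAG sketch along the lines of the responsibilities above, with retries and a failure-alert callback (assuming the Airflow 2.4+ style `schedule` argument). The task bodies, callback, paths, and schedule are placeholders, not a specific production pipeline.

```python
# Illustrative Airflow 2.x DAG: a small ELT flow with retries and failure alerting.
# Task bodies, paths, and the alert hook are placeholders.
from datetime import datetime, timedelta

from airflow.decorators import dag, task


def notify_on_failure(context):
    # Placeholder hook; in practice this would post to Slack or send e-mail
    print(f"Task {context['task_instance'].task_id} failed")


default_args = {
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_on_failure,
}


@dag(
    schedule="0 2 * * *",          # nightly at 02:00
    start_date=datetime(2024, 1, 1),
    catchup=False,
    default_args=default_args,
    tags=["elt"],
)
def daily_orders_elt():
    @task
    def extract() -> str:
        # In a real pipeline this would pull from Kafka, a REST API, or a database
        return "s3://example-raw-bucket/orders/latest/"

    @task
    def load(raw_path: str) -> None:
        # Placeholder load step, e.g. a COPY into warehouse staging tables
        print(f"Loading {raw_path} into staging tables")

    load(extract())


daily_orders_elt()
```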
Posted 3 weeks ago
5.0 - 9.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
Job Overview: As a Lead Computer Vision Engineer, you will lead the development and deployment of cutting-edge computer vision models and solutions for a variety of applications including image classification, object detection, segmentation, and more. You will work closely with cross-functional teams to implement advanced computer vision algorithms, ensure the integration of AI solutions into products, and help guide the research and innovation of next-generation visual AI technologies.

Technical Skills:
Deep Learning Frameworks: Proficiency in TensorFlow, PyTorch, or other deep learning libraries.
Computer Vision Tools: Expertise in OpenCV, Dlib, and other image processing libraries.
Model Deployment: Experience deploying models to production using platforms such as AWS, Google Cloud, or NVIDIA Jetson (for edge devices).
Algorithms: Strong understanding of core computer vision techniques like image classification, object detection (YOLO, Faster R-CNN), image segmentation (U-Net), and feature extraction.
Programming Languages: Proficient in Python, C++, and other relevant programming languages for computer vision tasks.
Data Handling: Experience working with large datasets, data augmentation, and preprocessing techniques.
Optimization: Skills in model optimization techniques such as pruning, quantization, and hardware acceleration (e.g., using GPUs or TPUs).

Additional Responsibilities:
Strong working experience in an Agile environment. Experience working with and understanding of ETL/ELT and data load processes. Knowledge of cloud infrastructure and data source integrations. Knowledge of relational databases. Self-motivated, able to work independently as well as being a team player. Excellent analytical and problem-solving skills. Ability to handle and respond to multiple stakeholders and queries. Ability to prioritize tasks and update key stakeholders. Strong client service focus and willingness to respond to queries and provide deliverables within prompt timeframes.

Technical and Professional: Technology - Artificial Intelligence - Computer Vision
Preferred Skills: Technology - Artificial Intelligence - Computer Vision
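As a small, hedged illustration of the image-classification workflow such a role involves (assuming a recent torchvision release), the sketch below loads an image with OpenCV, preprocesses it, and runs a pretrained ResNet-50. The image path is a placeholder.

```python
# Illustrative inference sketch: OpenCV preprocessing + pretrained torchvision model.
# The image path is a placeholder.
import cv2
import torch
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),                       # HWC uint8 -> CHW float in [0, 1]
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# OpenCV loads images as BGR; convert to RGB before normalising
image_bgr = cv2.imread("sample.jpg")
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)

with torch.no_grad():
    batch = preprocess(image_rgb).unsqueeze(0)   # shape: (1, 3, 224, 224)
    logits = model(batch)
    predicted_class = int(logits.argmax(dim=1))

print(f"Predicted ImageNet class index: {predicted_class}")
```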
Posted 3 weeks ago
2.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Educational: Bachelor of Engineering
Service Line: Data & Analytics Unit

Responsibilities
The ideal candidate will be responsible for the entire SDLC and should have excellent communication skills and experience working directly with the business. They need to be self-sufficient and comfortable with building internal networks, both with the business and other technology teams. The ideal candidate will be expected to own changes all the way from inception to deployment in production. In addition to implementing new functionality, they need to use their experience in TDD and best practices to identify process gaps or areas for improvement with a constant focus on scalability and stability. Candidate should be self-motivated, results oriented and able to multi-task across different teams and applications. Further, the candidate needs to work effectively with remotely dispersed teams as the role will require constant communication across various regional teams.

Additional Responsibilities:
Strong working experience in an Agile environment. Experience working with and understanding of ETL/ELT and data load processes. Knowledge of cloud infrastructure and data source integrations. Knowledge of relational databases. Self-motivated, able to work independently as well as being a team player. Excellent analytical and problem-solving skills. Ability to handle and respond to multiple stakeholders and queries. Ability to prioritize tasks and update key stakeholders. Strong client service focus and willingness to respond to queries and provide deliverables within prompt timeframes.

Technical and Professional:
Expertise in workflow enhancement and designing macros. Able to integrate Alteryx with various other tools and applications as per business requirements. Knowledge in maintaining user roles and access provisions in Alteryx Gallery. Knowledge of version control systems like Git. Familiarity with multiple data sources compatible with Alteryx – ranging from spreadsheets and flat files to databases. Advanced development and troubleshooting skills. Documentation of training and support. Strong understanding of SDLC methodologies with a track record of high-quality deliverables. Excellent communication skills and experience working with global teams. Strong knowledge of database query tools (SQL). Good understanding of data warehouse architecture.

Preferred Skills: Technology - Data Analytics - Alteryx
Posted 3 weeks ago
7.0 - 12.0 years
22 - 37 Lacs
Bengaluru, Mumbai (All Areas)
Hybrid
Hiring: Data Engineering Senior Software Engineer / Tech Lead / Senior Tech Lead
Location: Mumbai & Bengaluru, Hybrid (3 days from office) | Shift: 2 PM to 11 PM IST
Experience: 5 to 12+ years (based on role & grade)

Open Grades/Roles:
Senior Software Engineer: 5-8 Years
Tech Lead: 7-10 Years
Senior Tech Lead: 10-12+ Years

Job Description – Data Engineering Team

Core Responsibilities (Common to All Levels):
Design, build and optimize ETL/ELT pipelines using tools like Pentaho, Talend, or similar. Work on traditional databases (PostgreSQL, MSSQL, Oracle) and MPP/modern systems (Vertica, Redshift, BigQuery, MongoDB). Collaborate cross-functionally with BI, Finance, Sales, and Marketing teams to define data needs. Participate in data modeling (ER/DW/Star schema), data quality checks, and data integration. Implement solutions involving messaging systems (Kafka), REST APIs, and scheduler tools (Airflow, Autosys, Control-M). Ensure code versioning and documentation standards are followed (Git/Bitbucket).

Additional Responsibilities by Grade:
Senior Software Engineer (5-8 Yrs): Focus on hands-on development of ETL pipelines, data models, and data inventory. Assist in architecture discussions and POCs. Good to have: Tableau/Cognos, Python/Perl scripting, GCP exposure.
Tech Lead (7-10 Yrs): Lead mid-sized data projects and small teams. Decide on ETL strategy (Push Down/Push Up) and performance tuning. Strong working knowledge of orchestration tools, resource management, and agile delivery.
Senior Tech Lead (10-12+ Yrs): Drive data architecture, infrastructure decisions, and internal framework enhancements. Oversee large-scale data ingestion, profiling, and reconciliation across systems. Mentor junior leads and own stakeholder delivery end-to-end. Advantageous: experience with AdTech/Marketing data and the Hadoop ecosystem (Hive, Spark, Sqoop).

Must-Have Skills (All Levels):
ETL Tools: Pentaho / Talend / SSIS / Informatica
Databases: PostgreSQL, Oracle, MSSQL, Vertica / Redshift / BigQuery
Orchestration: Airflow / Autosys / Control-M / JAMS
Modeling: Dimensional Modeling, ER Diagrams
Scripting: Python or Perl (Preferred)
Agile Environment, Git-based Version Control
Strong Communication and Documentation
Posted 3 weeks ago
4.0 - 6.0 years
6 - 8 Lacs
Bengaluru
Work from Office
Role: Snowflake Developer with DBT
Location: Bangalore/Hyderabad/Pune

About the Role:
We are seeking a Snowflake Developer with a deep understanding of DBT (data build tool) to help us design, build, and maintain scalable data pipelines. The ideal candidate will have hands-on experience working with Snowflake and DBT, and a passion for optimizing data processes for performance and efficiency.

Responsibilities:
Design, develop, and optimize Snowflake data models and DBT transformations. Build and maintain CI/CD pipelines for automated DBT workflows. Implement best practices for data pipeline performance, scalability, and efficiency in Snowflake. Contribute to the DBT community or develop internal tools/plugins to enhance the workflow. Troubleshoot and resolve complex data pipeline issues using DBT and Snowflake.

Qualifications:
Must have a minimum of 4 years of experience with Snowflake. Must have at least 1 year of experience with DBT. Extensive experience with DBT, including setting up CI/CD pipelines, optimizing performance, and contributing to the DBT community or plugins. Must be strong in SQL, data modelling, and ELT pipelines. Excellent problem-solving skills and the ability to collaborate effectively in a team environment.
Posted 3 weeks ago
5.0 - 10.0 years
10 - 15 Lacs
New Delhi, Chennai, Bengaluru
Work from Office
We are looking for an experienced Data Engineer with a strong background in data engineering, storage, and cloud technologies. The role involves designing, building, and optimizing scalable data pipelines, ETL/ELT workflows, and data models for efficient analytics and reporting. The ideal candidate must have strong SQL expertise, including complex joins, stored procedures, and certificate-auth-based queries. Experience with NoSQL databases such as Firestore, DynamoDB, or MongoDB is required, along with proficiency in data modeling and warehousing solutions like BigQuery (preferred), Redshift, or Snowflake. The candidate should have hands-on experience working with ETL/ELT pipelines using Airflow, dbt, Kafka, or Spark. Proficiency in scripting languages such as PySpark, Python, or Scala is essential. Strong hands-on experience with Google Cloud Platform (GCP) is a must. Additionally, experience with visualization tools such as Google Looker Studio, LookerML, Power BI, or Tableau is preferred. Good-to-have skills include exposure to Master Data Management (MDM) systems and an interest in Web3 data and blockchain analytics.
Posted 3 weeks ago
4.0 - 6.0 years
5 - 10 Lacs
Chennai
Work from Office
What You'll Do:
Build and maintain robust, scalable ETL/ELT pipelines for large-scale, high-volume, and multi-modal datasets. Architect and manage data lakes, warehouses, and marts for structured and unstructured data. Collaborate with ML, NLP, and Speech teams to deliver high-quality, ML-ready datasets. Implement data quality checks, lineage tracking, versioning, and compliance controls. Optimize pipeline performance and handle infrastructure reliability and observability (logging, alerting, monitoring). Develop internal tools and workflows to automate ingestion, labeling, and transformation processes. Mentor junior engineers and help establish best practices in data engineering and MLOps.

What We're Looking For:
4+ years of experience in data engineering or backend systems for ML/AI workflows. Advanced SQL skills and experience with analytical databases (PostgreSQL, BigQuery, Redshift, Snowflake). Strong Python or Scala skills for data transformation and pipeline orchestration. Proficiency with workflow tools like Airflow, Prefect, or DBT. Experience with cloud platforms (AWS, GCP, or Azure) and object storage (e.g., S3, GCS). Familiarity with big data frameworks (Spark, Hadoop) and distributed computing. Solid understanding of data modeling, partitioning, and performance optimization. Comfort with CI/CD workflows, version control (Git), Docker, and Kubernetes.

Nice to Have:
Experience with real-time data pipelines using Kafka, Flink, or Spark Streaming. Exposure to the ML data lifecycle: labeling, augmentation, training, evaluation. Knowledge of data governance, security, PII compliance, and auditability. Prior leadership experience or ownership of major data initiatives.
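As a hedged sketch of the data-quality checks mentioned above, the snippet below runs a few simple PySpark assertions (non-empty dataset, no null keys, no duplicate keys) over a curated table. The input path and column names are placeholders.

```python
# Illustrative data-quality checks in PySpark. Paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/records/")

failures = []

# Volume: the dataset must not be empty
if df.count() == 0:
    failures.append("dataset is empty")

# Completeness: the key column must not be null
null_ids = df.filter(F.col("record_id").isNull()).count()
if null_ids > 0:
    failures.append(f"{null_ids} rows with null record_id")

# Uniqueness: record_id must be unique
dupes = df.groupBy("record_id").count().filter(F.col("count") > 1).count()
if dupes > 0:
    failures.append(f"{dupes} duplicate record_id values")

# Fail the pipeline run if any check failed
if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
print("All data quality checks passed")
```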
Posted 3 weeks ago
4.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Hybrid
Job Title: AWS Engineer
Experience: 4 - 8 Years
Location: Bengaluru (Hybrid, 2-3 Days Onsite per Week)
Employment Type: Full-Time
Notice Period: Only Immediate to 15 Days Joiners Preferred

Job Description:
We are looking for an experienced AWS Engineer to join our dynamic data engineering team. The ideal candidate will have hands-on experience building and maintaining robust, scalable data pipelines and cloud-based architectures on AWS.

Key Responsibilities:
Design, develop, and maintain scalable data pipelines using AWS services such as Glue, Lambda, S3, Redshift, and EMR. Collaborate with data scientists and ML engineers to operationalize machine learning models using AWS SageMaker. Implement efficient data transformation and feature engineering workflows. Optimize ETL/ELT processes and enforce best practices for data quality and governance. Work with structured and unstructured data using Amazon Athena, DynamoDB, RDS, and similar services. Build and manage CI/CD pipelines for data and ML workflows using AWS CodePipeline, CodeBuild, and Step Functions. Monitor data infrastructure for performance, reliability, and cost-effectiveness. Ensure data security and compliance with organizational and regulatory standards.

Required Skills:
Strong experience with AWS data and ML services. Solid knowledge of ETL/ELT frameworks and data modeling. Proficiency in Python, SQL, and scripting for data engineering. Experience with CI/CD and DevOps practices on AWS. Good understanding of data governance and compliance standards. Excellent collaboration and problem-solving skills.
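For illustration, the sketch below shows one way the pipeline-operations side of this role might look in code: starting an AWS Glue job with boto3 and polling for completion. The job name, region, and arguments are placeholders; production orchestration would more likely use Step Functions or EventBridge, as the posting notes.

```python
# Illustrative boto3 sketch: start a Glue job and poll until it finishes.
# Job name, region, and arguments are placeholders; credentials are assumed
# to come from the execution environment (IAM role, profile, etc.).
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

run = glue.start_job_run(
    JobName="curate-orders-nightly",   # placeholder job name
    Arguments={"--target_path": "s3://example-curated-bucket/orders/"},
)
run_id = run["JobRunId"]

# Simple polling loop; Step Functions or EventBridge would handle this in production
while True:
    status = glue.get_job_run(JobName="curate-orders-nightly", RunId=run_id)
    state = status["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue job finished with state: {state}")
```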
Posted 3 weeks ago
4.0 - 9.0 years
7 - 11 Lacs
Pune
Work from Office
What You'll Do
The Global Analytics & Insights (GAI) team is looking for a Data Engineer to help build the data infrastructure for Avalara's core data assets, empowering the organization with accurate, timely data to drive data-backed decisions. As a Data Engineer, you will help implement and maintain our data infrastructure using Snowflake, dbt (Data Build Tool), Python, Terraform, and Airflow. You will learn the ins and outs of Avalara's financial, sales, and marketing data to become a go-to resource of Avalara knowledge. You will have foundational SQL experience, an understanding of modern data stacks and technology, a desire to build things the right way using modern software principles, and experience with data and all things data-related.

What Your Responsibilities Will Be
Design functional data models by demonstrating understanding of business use cases and different data sources. Develop scalable, reliable, and efficient data pipelines using dbt, Python, or other ELT tools. Build scalable, complex dbt models to support a variety of data products. Implement and maintain scalable data orchestration and transformation, ensuring data accuracy, consistency, and timeliness. Collaborate with cross-functional teams to understand complex requirements and translate them into technical solutions. You will report to the Senior Manager, Data & Analytics Engineering.

What You'll Need to be Successful
Bachelor's degree in Computer Science or Engineering, or a related field. 4+ years of experience in the data engineering field, with deep SQL knowledge. 3+ years of working with Git, and demonstrated experience collaborating with other engineers across repositories. 2+ years of working with Snowflake. 2+ years working with dbt (dbt core preferred). Experience working with complex Salesforce data. Functional experience with AWS. Functional experience with Infrastructure as Code, preferably Terraform. Functional experience with CI/CD and DevOps concepts.
Posted 3 weeks ago
5.0 - 10.0 years
12 - 20 Lacs
Chennai
Work from Office
Location: Chennai, India
Experience: 5+ years
Work Mode: Full-time (9am-6:30pm), In-office (Monday to Friday)
Department: Asign Data Sciences

About Us:
At Asign, we are revolutionizing the art sector with our innovative digital solutions. We are a passionate and dynamic startup dedicated to enhancing the art experience through technology. Join us in creating cutting-edge products that empower artists and art enthusiasts worldwide.

Role Overview
We are looking for an experienced Data Engineer with a strong grasp of ELT architecture to help us build and maintain robust data pipelines. This is a hands-on role for someone passionate about structured data, automation, and scalable infrastructure. The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and for making data accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving our data architecture and ensuring that our data flows efficiently and securely.

Key Responsibilities:
Design, develop, and maintain efficient and scalable ELT data pipelines. Work closely with the data science and backend teams to understand data needs and transform raw inputs into structured datasets. Integrate multiple data sources including APIs, web pages, spreadsheets, and databases into a central warehouse. Monitor, test, and continuously improve data flows for reliability and performance. Create documentation and establish best practices for data governance, lineage, and quality. Collaborate with product and tech teams to plan data models that support business and AI/ML applications.

Required Skills:
Minimum 5 years of hands-on experience in data engineering. Solid understanding and experience with ELT pipelines and modern data stack tools. Practical knowledge of one or more orchestrators (Dagster, Airflow, Prefect, etc.). Proficiency in Python and SQL. Experience working with APIs and data integration from multiple sources. Familiarity with one or more cloud data warehouses (e.g., Snowflake, BigQuery, Redshift). Strong problem-solving and debugging skills.

Qualifications:
Must-have: Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field. Proven experience (5+ years) in data engineering, data integration, and data management. Hands-on experience with data sourcing tools and frameworks (e.g. Scrapy, Beautiful Soup, Selenium, Playwright). Proficiency in Python and SQL for data manipulation and pipeline development. Experience with cloud-based data platforms (AWS, Azure, or GCP) and data warehouse tools (e.g. Redshift, BigQuery, Snowflake). Familiarity with workflow orchestration tools (e.g. Airflow, Prefect, Dagster). Strong understanding of relational and non-relational databases (PostgreSQL, MongoDB, etc.). Solid understanding of data modeling, ETL best practices, and data governance principles. Systems knowledge and experience working with Docker. Strong and creative problem-solving skills and the ability to think critically about data engineering solutions. Effective communication and collaboration skills. Ability to work independently and as part of a team in a fast-paced, dynamic environment.

Good-to-have: Experience working with APIs and third-party data sources. Familiarity with version control (Git) and CI/CD processes. Exposure to basic machine learning concepts and working with data science teams. Experience handling large datasets and working with distributed data systems.

Why Join Us?
Innovative Environment: Be part of a forward-thinking team that is dedicated to pushing the boundaries of art and technology. Career Growth: Opportunities for professional development and career advancement. Creative Freedom: Work in a role that values creativity and encourages new ideas. Company Culture: Enjoy a dynamic, inclusive, and supportive work environment.
Posted 3 weeks ago
6.0 - 11.0 years
8 - 12 Lacs
Pune
Work from Office
What You'll Do
We are seeking an experienced Lead Data Engineer with a strong background in ETL processes, data warehousing, data modeling, and hands-on expertise in SQL and Python. The ideal candidate will have exposure to cloud technologies and will play a key role in designing and managing scalable, high-performance data systems that support marketing and sales insights. You will report to the Manager, Data Engineering.

What Your Responsibilities Will Be
You will design, develop, and maintain efficient ETL pipelines using DBT and Airflow to move and transform data from multiple sources into a data warehouse. You will lead the development and optimization of data models (e.g., star, snowflake schemas) and data structures to support reporting. You will leverage cloud platforms (e.g., AWS, Azure, Google Cloud) to manage and scale data storage, processing, and transformation processes. You will work with business teams, marketing, and sales departments to understand data requirements and translate them into actionable insights and efficient data structures. You will use advanced SQL and Python skills to query, manipulate, and transform data for multiple use cases and reporting needs. You will implement data quality checks and ensure that the data adheres to governance best practices, maintaining consistency and integrity across datasets. You will use Git for version control and collaborate on data engineering projects.

What You'll Need to be Successful
Bachelor's degree with 6+ years of experience in Data Engineering. ETL/ELT expertise: experience in building and improving ETL/ELT processes. Data modeling: experience designing and implementing data models such as star and snowflake schemas, and working with denormalized tables to optimize reporting performance. Experience with cloud-based data platforms (AWS, Azure, Google Cloud). SQL and Python proficiency: advanced SQL skills for querying large datasets and Python for automation, data processing, and integration tasks. DBT experience: hands-on experience with DBT (Data Build Tool) for transforming and managing data models.

Good-to-have Skills:
Familiarity with AI concepts such as machine learning (ML), natural language processing (NLP), and generative AI. Work with AI-driven tools and models for data analysis, reporting, and automation. Oversee and implement DBT models to improve the data transformation process. Experience in the marketing and sales domain, with lead management, marketing analytics, and sales data integration. Familiarity with business intelligence reporting tools such as Power BI for building data models and generating insights.
Posted 3 weeks ago
3.0 - 8.0 years
20 - 25 Lacs
Bengaluru
Work from Office
3 to 8 years of IT experience in the development and implementation of Business Intelligence and Data Warehousing solutions using ODI. Knowledge of Analysis, Design, Development, Customization, Implementation & Maintenance of Oracle Data Integrator (ODI). Experience in designing, implementing, and maintaining ODI load plans and processes. Working knowledge of ODI, PL/SQL, TOAD, Data Modelling (Logical/Physical), Star/Snowflake schema, FACT & Dimension tables, ELT, and OLAP. Experience with SQL, UNIX, complex queries, Stored Procedures, and Data Warehouse best practices. Ensure correctness and completeness of data loading (full load & incremental load). Excellent communication skills; organized and effective in delivering high-quality solutions using ODI.
Location: Bengaluru (Pan India)
Posted 3 weeks ago