7.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
About the Job : We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-thinking approach to integrating cutting-edge AI technologies, particularly LLM Mesh for Generative AI applications. If you are passionate about building scalable data solutions and are eager to explore the cutting edge of AI, we encourage you to apply. Key Responsibilities : - Dataiku Leadership : Drive data engineering initiatives with a strong emphasis on leveraging Dataiku capabilities for data preparation, analysis, visualization, and the deployment of data solutions. - Data Pipeline Development : Design, develop, and optimize robust and scalable data pipelines to support various business intelligence and advanced analytics projects. This includes developing and maintaining ETL/ELT processes to automate data extraction, transformation, and loading from diverse sources. - Data Modeling & Architecture : Apply expertise in data modeling techniques to design efficient and scalable database structures, ensuring data integrity and optimal performance. - ETL/ELT Expertise : Implement and manage ETL processes and tools to ensure efficient and reliable data flow, maintaining high data quality and accessibility. - Gen AI Integration : Explore and implement solutions leveraging LLM Mesh for Generative AI applications, contributing to the development of innovative AI-powered features. - Programming & Scripting : Utilize programming languages such as Python and SQL for data manipulation, analysis, automation, and the development of custom data solutions. - Cloud Platform Deployment : Deploy and manage scalable data solutions on cloud platforms such as AWS or Azure, leveraging their respective services for optimal performance and cost-efficiency. - Data Quality & Governance : Ensure seamless integration of data sources, maintaining high data quality, consistency, and accessibility across all data assets. Implement data governance best practices. - Collaboration & Mentorship : Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver impactful solutions. Potentially mentor junior team members. - Performance Optimization : Continuously monitor and optimize the performance of data pipelines and data systems. Required Skills & Experience : - Proficiency in Dataiku : Demonstrable expertise in Dataiku for data preparation, analysis, visualization, and building end-to-end data pipelines and applications. - Expertise in Data Modeling : Strong understanding and practical experience in various data modeling techniques (e.g., dimensional modeling, Kimball, Inmon) to design efficient and scalable database structures. - ETL/ELT Processes & Tools : Extensive experience with ETL/ELT processes and a proven track record of using various ETL tools (e.g., Dataiku's built-in capabilities, Apache Airflow, Talend, SSIS, etc.).
- Familiarity with LLM Mesh : Familiarity with LLM Mesh or similar frameworks for Gen AI applications, understanding its concepts and potential for integration. - Programming Languages : Strong proficiency in Python for data manipulation, scripting, and developing data solutions. Solid command of SQL for complex querying, data analysis, and database interactions. - Cloud Platforms : Knowledge and hands-on experience with at least one major cloud platform (AWS or Azure) for deploying and managing scalable data solutions (e.g., S3, EC2, Azure Data Lake, Azure Synapse, etc.). - Gen AI Concepts : Basic understanding of Generative AI concepts and their potential applications in data engineering. - Problem-Solving : Excellent analytical and problem-solving skills with a keen eye for detail. - Communication : Strong communication and interpersonal skills to collaborate effectively with cross-functional teams. Bonus Points (Nice to Have) : - Experience with other big data technologies (e.g., Spark, Hadoop, Snowflake). - Familiarity with data governance and data security best practices. - Experience with MLOps principles and tools. - Contributions to open-source projects related to data engineering or AI. Education : Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related quantitative field.
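For readers less familiar with Dataiku, below is a minimal sketch of what a Python recipe inside a Dataiku DSS flow can look like for the kind of pipeline work this posting describes. The dataset names and the cleaning rule are illustrative assumptions, not details from the role.

```python
import dataiku

# Read an input dataset from the Flow (dataset names are illustrative).
orders = dataiku.Dataset("raw_orders")
df = orders.get_dataframe()

# Example cleaning step: normalise column names, then drop rows without an order id.
df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
df = df.dropna(subset=["order_id"])

# Write the prepared data to an output dataset for downstream analytics.
cleaned = dataiku.Dataset("orders_clean")
cleaned.write_with_schema(df)
```

In practice such a recipe would typically sit between visual preparation steps in the Flow, with scheduling handled by a Dataiku scenario.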
Posted 3 weeks ago
4.0 - 7.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Senior Data Engineer with a deep focus on data quality, validation frameworks, and reliability engineering . This role will be instrumental in ensuring the accuracy, integrity, and trustworthiness of data assets across our cloud-native infrastructure. The ideal candidate combines expert-level Python programming with practical experience in data pipeline engineering, API integration, and managing cloud-native workloads on AWS and Kubernetes . Roles and Responsibilities Design, develop, and deploy automated data validation and quality frameworks using Python. Build scalable and fault-tolerant data pipelines that support quality checks across data ingestion, transformation, and delivery. Integrate with REST APIs to validate and enrich datasets across distributed systems. Deploy and manage validation workflows using AWS services (EKS, EMR, EC2) and Kubernetes clusters. Collaborate with data engineers, analysts, and DevOps to embed quality checks into CI/CD and ETL pipelines. Develop monitoring and alerting systems for real-time detection of data anomalies and inconsistencies. Write clean, modular, and reusable Python code for automated testing, validation, and reporting. Lead root cause analysis for data quality incidents and design long-term solutions. Maintain detailed technical documentation of data validation strategies, test cases, and architecture. Promote data quality best practices and evangelize a culture of data reliability within the engineering teams. Required Skills: Experience with data quality platforms such as Great Expectations , Collibra Data Quality , or similar tools. Proficiency in Docker and container lifecycle management. Familiarity with serverless compute environments (e.g., AWS Lambda, Azure Functions), Python, PySpark Relevant certifications in AWS , Kubernetes , or data quality technologies . Prior experience working in big data ecosystems and real-time data environments.
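As a concrete illustration of the validation-framework work this role centres on, here is a minimal Great Expectations sketch in Python. The column names and thresholds are assumptions, and the classic pandas-style interface shown here differs from the entry point used in newer Great Expectations releases.

```python
import great_expectations as ge
import pandas as pd

# Hypothetical batch of records to validate before loading downstream.
df = pd.DataFrame({
    "order_id": [1, 2, 3, None],
    "amount": [10.5, 99.0, -4.0, 20.0],
})

batch = ge.from_pandas(df)
batch.expect_column_values_to_not_be_null("order_id")
batch.expect_column_values_to_be_between("amount", min_value=0, max_value=10_000)

result = batch.validate()
if not result["success"]:
    # In a real pipeline this would raise, alert, or quarantine the batch.
    print("Data quality check failed:", result["statistics"])
```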
Posted 3 weeks ago
4.0 - 6.0 years
13 - 18 Lacs
Bengaluru
Remote
About BNI: Established in 1985, BNI is the world’s largest business referral network. With over 325,000 small- to medium-sized business Members in over 11,000 Chapters across 77 countries, we are a global company with local footprints. Our proven approach provides Members with a structured, positive, and professional referral program that enables them to sharpen their business skills, develop meaningful, long-term relationships, and experience business growth. Visit BNI.com to learn how BNI has impacted the lives of our Members and how it can help you achieve your business goals. Position Summary The Database Developer will be a part of BNI’s Global Information Technology Team and will primarily have responsibilities over the creation, development, maintenance, and enhancements for our databases, queries, routines and processes. The Database Developer will work closely with the Database Administrator, data team, software developers, QA engineers and DevOps Engineers located within the BNI office in Bangalore, as well as all levels of BNI Management and Leadership teams. This is an unparalleled opportunity to become part of a growing team and a growing global organization. High performers will have significant growth opportunities available to them. The candidate should be an expert in both database and query design, able to write queries on demand, should possess good hands-on experience in data engineering, and should be well versed with the tools mentioned in the technical list below. The candidate should own assignments and be independent in developing queries and other aspects of data engineering. Roles and Responsibilities Design stable, reliable and effective databases Create, optimize and maintain queries, used in our software applications, as well as data extracts and ETL processes Modify and maintain databases, routines, queries in order to ensure accuracy, maintainability, scalability, and high performance of all our data systems Solve database usage issues and malfunctions Liaise with developers to improve applications and establish best practices Provide data management support for our users/clients Research, analyze and recommend upgrades to our data systems Prepare documentation and specifications for all deployed queries/routines/processes Profile, optimize and tweak queries and routines for optimal performance Support the Development and Quality Assurance teams with their needs for database development and access Be a team player and strong problem-solver to work with a diverse team Qualifications Required: Bachelor’s Degree or equivalent work experience Fluent in English, with excellent oral and written communication skills 5+ years of experience with Linux-based MySQL/MariaDB database development and maintenance 2+ years of experience with Database Design/Development/Scripting Proficient in writing and optimizing SQL Statements Strong proficiency in MySQL/MariaDB scripting, including functions, routines and complex data queries.
Understanding of MySQL/MariaDB’s underlying storage engines, such as InnoDB and MyISAM Knowledge of standards and best practices in MySQL/MariaDB Knowledge of MySQL/MariaDB features, such as the event scheduler (Desired) Familiarity with other SQL/NoSQL databases such as PostgreSQL, MongoDB, Redis Experience with Amazon Web Services’ RDS offering Experience with Data Lakes and Big Data is a must Experience in Python is a must Experience with tools like Airflow/DBT/Data pipelines Experience with Apache Superset Knowledgeable about AWS services from a data engineering point of view (Desired) Proficient understanding of Git/GitHub as a source control system Familiarity with working on an Agile/Iterative development framework Self-starter with a positive attitude and the ability to collaborate with product managers and developers Strong SQL experience and ability to write queries on demand. Primary Technologies: databases, stored procedures, SQL optimization, database management, Airflow/DBT, data warehousing with Redshift/Snowflake (Mandatory), Python, Linux, data pipelines. Physical Demands and Working Conditions Sedentary work. Exerting up to 10 pounds of force occasionally and/or negligible amount of force frequently or constantly to lift, carry, push, pull or otherwise move objects. Repetitive motion. Substantial movements (motions) of the wrists, hands, and/or fingers. The worker is required to have close visual acuity to perform an activity such as: preparing and analyzing data and figures; transcribing; viewing a computer terminal; extensive reading. External Posting Language This is a full-time position. This job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice. Learn more at BNI.com
Posted 3 weeks ago
6.0 - 11.0 years
25 - 30 Lacs
Mumbai, Mumbai Suburban, Mumbai (All Areas)
Work from Office
Experience in using SQL, PL/SQL or T-SQL with RDBMSs like Teradata, MS SQL Server, or Oracle in production environments. Experience with Python, ADF, Azure, and Databricks. Experience working with Microsoft Azure/AWS or other leading cloud platforms Required Candidate profile Hands-on experience with Hadoop, Spark, Hive, or similar frameworks. Data integration & ETL, data modelling, database management, data warehousing, big data frameworks, CI/CD Perks and benefits To be disclosed post interview
Posted 3 weeks ago
6.0 - 11.0 years
20 - 30 Lacs
Hyderabad, Bengaluru
Hybrid
Notice Period - Immediate to 15 days max. Virtusa JD: 8+ years of experience in data engineering, specifically in cloud environments like AWS. Do not share data science profiles. Proficiency in Python and PySpark for data processing and transformation tasks. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2. Technical Skills: Deep understanding of ETL concepts and best practices. Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with Data Warehousing and Big Data technologies, specifically within AWS. Additional Skills: Experience with AWS Lambda for serverless data processing and orchestration. Understanding of AWS Redshift for data warehousing and analytics. Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing. Knowledge of data governance practices, including data lineage and auditing. Familiarity with CI/CD pipelines and Git for version control. Experience with Docker and containerization for building and deploying applications. Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes. ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets. Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs. Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms. Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
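To make the Glue/PySpark responsibilities above more tangible, here is a minimal sketch of an AWS Glue PySpark job; the catalog database, table, filter condition, and S3 path are illustrative assumptions rather than details from the posting.

```python
import sys
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog (names are illustrative).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Keep only purchase events before writing the curated layer back to S3.
purchases = source.toDF().filter("event_type = 'purchase'")

glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(purchases, glue_context, "purchases"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/purchases/"},
    format="parquet",
)

job.commit()
```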
Posted 3 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Chennai
Work from Office
Job Summary: We are seeking a highly skilled Data Engineer to design, develop, and maintain robust data pipelines and architectures. The ideal candidate will transform raw, complex datasets into clean, structured, and scalable formats that enable analytics, reporting, and business intelligence across the organization. This role requires strong collaboration with data scientists, analysts, and cross-functional teams to ensure timely and accurate data availability and system performance. Key Responsibilities Design and implement scalable data pipelines to support real-time and batch processing. Develop and maintain ETL/ELT processes that move, clean, and organize data from multiple sources. Build and manage modern data architectures that support efficient storage, processing, and access. Collaborate with stakeholders to understand data needs and deliver reliable solutions. Perform data transformation, enrichment, validation, and normalization for analysis and reporting. Monitor and ensure the quality, integrity, and consistency of data across systems. Optimize workflows for performance, scalability, and cost-efficiency. Support cloud and on-premises data integrations, migrations, and automation initiatives. Document data flows, schemas, and infrastructure for operational and development purposes. Apply best practices in data governance, security, and compliance. Required Qualifications & Skills: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. Proven 6+ years of experience in data engineering, ETL development, or data pipeline management. Proficiency with tools and technologies such as: SQL, Python, Spark, Scala; ETL tools (e.g., Apache Airflow, Talend); cloud platforms (e.g., AWS, GCP, Azure); big data tools (e.g., Hadoop, Hive, Kafka); data warehouses (e.g., Snowflake, Redshift, BigQuery). Strong understanding of data modeling, data architecture, and data lakes. Experience with CI/CD, version control, and working in Agile environments. Preferred Qualifications: • Experience with data observability and monitoring tools. • Knowledge of data cataloging and governance frameworks. • AWS/GCP/Azure data certification is a plus.
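Since this posting leans heavily on ETL/ELT orchestration, the snippet below sketches what a simple Airflow DAG for such a pipeline could look like; the DAG id, schedule, and task bodies are placeholders, and the `schedule_interval` argument shown follows Airflow 2.x conventions.

```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Pull raw records from a source system (placeholder logic).
    print("extracting")

def transform():
    # Clean, enrich, and normalise the extracted records (placeholder logic).
    print("transforming")

def load():
    # Load the curated records into the warehouse (placeholder logic).
    print("loading")

with DAG(
    dag_id="daily_sales_pipeline",            # illustrative name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```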
Posted 3 weeks ago
3.0 - 10.0 years
18 - 22 Lacs
Hyderabad
Work from Office
WHAT YOU'LL DO Lead the development of scalable data infrastructure solutions Leverage your data engineering expertise to support data stakeholders and mentor less experienced Data Engineers. Design and optimize new and existing data pipelines Collaborate with cross-functional product engineering teams and data stakeholders to deliver on Codecademy’s data needs WHAT YOU'LL NEED 8 to 10 years of hands-on experience building and maintaining large-scale ETL systems Deep understanding of database design and data structures: SQL & NoSQL. Fluency in Python. Experience working with cloud-based data platforms (we use AWS) SQL and data warehousing skills -- able to write clean and efficient queries Ability to make pragmatic engineering decisions in a short amount of time Strong project management skills; a proven ability to gather and translate requirements from stakeholders across functions and teams into tangible results WHAT WILL MAKE YOU STAND OUT Experience with tools in our current data stack: Apache Airflow, Snowflake, dbt, FastAPI, S3, & Looker. Experience with Kafka, Kafka Connect, and Spark or other data streaming technologies Familiarity with the database technologies we use in production: Snowflake, Postgres, and MongoDB. Comfort with containerization technologies: Docker, Kubernetes, etc.
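Given the Airflow/Snowflake/dbt stack named above, here is a minimal sketch of loading staged files into Snowflake with the snowflake-connector-python client; the account settings, warehouse, stage, and table names are illustrative assumptions, and the sketch assumes an external stage already exists.

```python
import os
import snowflake.connector

# Credentials are taken from the environment in this sketch.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",    # illustrative
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load CSV files from a pre-existing external stage into a raw table.
    cur.execute(
        "COPY INTO raw_events FROM @events_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("SELECT COUNT(*) FROM raw_events")
    print("rows in raw_events:", cur.fetchone()[0])
finally:
    conn.close()
```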
Posted 3 weeks ago
6.0 - 11.0 years
22 - 27 Lacs
Hyderabad, Bengaluru
Work from Office
Job description: 8+ years of experience in data engineering, specifically in cloud environments like AWS. Proficiency in Python and PySpark for data processing and transformation tasks. Solid experience with AWS Glue for ETL jobs and managing data workflows. Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration. Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2. Technical Skills: Deep understanding of ETL concepts and best practices. Strong knowledge of SQL for querying and manipulating relational and semi-structured data. Experience with Data Warehousing and Big Data technologies, specifically within AWS. Additional Skills: Experience with AWS Lambda for serverless data processing and orchestration. Understanding of AWS Redshift for data warehousing and analytics. Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing. Knowledge of data governance practices, including data lineage and auditing. Familiarity with CI/CD pipelines and Git for version control. Experience with Docker and containerization for building and deploying applications. Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes. ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets. Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs. Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms. Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.
Posted 3 weeks ago
6.0 - 8.0 years
15 - 20 Lacs
Chennai
Work from Office
Senior Data Engineer: Job Title: Senior Data Engineer Experience: 6 to 8 Years Location: Chennai Job Description: Movate is seeking a highly skilled Senior Data Engineer to lead the development of scalable, modular, and high-performance data pipelines. You will work closely with cross-functional teams to support data integration, transformation, and delivery for analytics and business intelligence. Key Responsibilities: Design and maintain ETL/ELT pipelines using Apache Airflow, Azure Databricks, and Azure Data Factory Build scalable data infrastructure and optimize data workflows Ensure data quality, security, and governance across platforms Collaborate with data scientists and BI developers to support analytics and reporting Monitor and troubleshoot data pipelines for reliability and performance Document data processes and workflows for knowledge sharing Technical Skills Required: Strong proficiency in Python (Pandas, NumPy, REST APIs) Advanced SQL skills (joins, CTEs, performance tuning) Experience with Databricks, Apache Airflow, and Azure Cloud Services Knowledge of SparkSQL, PySpark, and containerization using Docker Familiarity with data lake vs. data warehouse architectures Experience in data security, encryption, and access provisioning Qualifications: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or related field Excellent problem-solving and communication skills Ability to work independently and manage end-to-end delivery Comfortable in agile development environments EEO Statement: Movate provides equal opportunity in all our employment practices to all qualified employees and applicants without regard to race, color, religion, sex (including gender identity, sexual orientation, and pregnancy), national origin, age, disability or genetic information and other characteristics that are protected by applicable law.
Posted 3 weeks ago
5.0 - 10.0 years
15 - 22 Lacs
Bangalore/ Bengaluru
Hybrid
Role & Responsibilities: The candidate will have to leverage strong collaboration skills and the ability to independently develop and design highly complex data sets and ETL processes to develop a data warehouse and to ask the right questions. You'd be engaged in a fast-paced learning environment and will be solving problems for the largest organizations in the world, mostly Fortune 500 companies. The candidate will also work closely with internal business teams and clients to work on various kinds of Data Engineering related problems like the development of a data warehouse, Advanced Stored Procedures, ETL pipelines, Reporting, Data Governance, and BI development. Basic Qualifications: Bachelor's degree in Computer Science, Engineering, Operations Research, Math, Economics or related discipline Strong SQL, Python, PySpark ETL development, Azure Data Factory and PowerBI knowledge and hands-on experience Proficient in understanding business requirements and converting them into process flows and code (special preference for SQL-based stored procedures) Develop and design data architecture and frameworks for optimal performance and response time Strong analytical skills and the ability to start from ambiguous problem statements, identify and access relevant data, make appropriate assumptions, perform insightful analysis, and draw conclusions relevant to the business problem Excellent communication skills to communicate efficiently (written and spoken) in English. Demonstrated ability to communicate complex technical problems in simple plain stories. Ability to present information professionally & concisely with supporting data. Ability to work effectively & independently in a fast-paced environment with tight deadlines. Ability to engage with cross-functional teams for implementation of project and program requirements. 5+ years of hands-on experience in Data Engineer or tech lead roles. 5+ years of experience in data engineering on the Azure cloud, highly proficient in the Azure ecosystem and its services.
Posted 3 weeks ago
2.0 - 7.0 years
12 - 16 Lacs
Hyderabad
Work from Office
WHAT YOU'LL DO Build scalable data infrastructure solutions Design and optimize new and existing data pipelines Integrate new data sources into our existing data architecture Collaborate with cross-functional product engineering teams and data stakeholders to deliver on Codecademy’s data needs WHAT YOU'LL NEED 3 to 5 years of hands-on experience building and maintaining large-scale ETL systems Deep understanding of database design and data structures: SQL & NoSQL. Fluency in Python. Experience working with cloud-based data platforms (we use AWS) SQL and data warehousing skills -- able to write clean and efficient queries Ability to make pragmatic engineering decisions in a short amount of time Strong project management skills; a proven ability to gather and translate requirements from stakeholders across functions and teams into tangible results WHAT WILL MAKE YOU STAND OUT Experience with tools in our current data stack: Apache Airflow, Snowflake, dbt, FastAPI, S3, & Looker. Experience with Kafka, Kafka Connect, and Spark or other data streaming technologies Familiarity with the database technologies we use in production: Snowflake, Postgres, and MongoDB. Comfort with containerization technologies: Docker, Kubernetes, etc.
Posted 3 weeks ago
4.0 - 7.0 years
0 Lacs
Pune
Hybrid
Job Title: GCP Data Engineer Location: Pune, India Experience: 4 to 7 Years Job Type: Full-Time Job Summary: We are looking for a highly skilled GCP Data Engineer with 4 to 7 years of experience to join our data engineering team in Pune . The ideal candidate should have strong experience working with Google Cloud Platform (GCP) , including Dataproc , Cloud Composer (Apache Airflow) , and must be proficient in Python , SQL , and Apache Spark . The role involves designing, building, and optimizing data pipelines and workflows to support enterprise-grade analytics and data science initiatives. Key Responsibilities: Design and implement scalable and efficient data pipelines on GCP , leveraging Dataproc , BigQuery , Cloud Storage , and Pub/Sub. Develop and manage ETL/ELT workflows using Apache Spark , SQL , and Python. Orchestrate and automate data workflows using Cloud Composer (Apache Airflow). Build batch and streaming data processing jobs that integrate data from various structured and unstructured sources. Optimize pipeline performance and ensure cost-effective data processing. Collaborate with data analysts, scientists, and business teams to understand data requirements and deliver high-quality solutions. Implement and monitor data quality checks, validation, and transformation logic. Required Skills: Strong hands-on experience with Google Cloud Platform (GCP) Proficiency with Dataproc for big data processing and Apache Spark Expertise in Python and SQL for data manipulation and scripting Experience with Cloud Composer / Apache Airflow for workflow orchestration Knowledge of data modeling, warehousing, and pipeline best practices Solid understanding of ETL/ELT architecture and implementation Strong troubleshooting and problem-solving skills Preferred Qualifications: GCP Data Engineer or Cloud Architect Certification. Familiarity with BigQuery , Dataflow , and Pub/Sub. Experience with CI/CD and DevOps tools in data engineering workflows. Exposure to Agile methodologies and team collaboration tools.
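To ground the GCP pipeline work described above, the sketch below loads curated Parquet files from Cloud Storage into BigQuery with the google-cloud-bigquery client; the project, bucket, dataset, and table names are illustrative assumptions. In a Cloud Composer deployment the same logic would typically run inside an Airflow task.

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")   # project id is illustrative

# Load curated Parquet files from Cloud Storage into a table, replacing its contents.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)
load_job = client.load_table_from_uri(
    "gs://example-bucket/curated/events/*.parquet",
    "analytics.events",                                # dataset.table, illustrative
    job_config=job_config,
)
load_job.result()  # wait for the load job to complete

# Quick validation query on the freshly loaded table.
rows = client.query("SELECT COUNT(*) AS n FROM `analytics.events`").result()
print("rows loaded:", next(iter(rows)).n)
```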
Posted 3 weeks ago
2.0 - 5.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Support and monitor Matillion ETL jobs for Snowflake, Redshift, and Azure Synapse. Troubleshoot failed data loads, pipeline execution issues, and performance bottlenecks. Manage patching, software updates, and ensure version compatibility. Ensure SLA adherence by tracking system performance and error resolution. Skills & Qualifications: 2-5 years of experience with Matillion ETL for cloud data platforms. Strong understanding of ETL, data pipelines, and API integrations.
Posted 3 weeks ago
5.0 - 10.0 years
20 - 35 Lacs
Kochi, Bengaluru
Work from Office
Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment. Key Responsibilities: • Design and implement machine learning models and pipelines using AWS SageMaker and related services. • Develop and maintain robust data pipelines for training and inference workflows. • Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions. • Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies. • Optimize model performance and ensure scalability and reliability in production environments. • Monitor deployed models for drift, performance degradation, and anomalies. • Document processes, architectures, and workflows for reproducibility and compliance. Required Skills & Qualifications: • Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch). • Solid understanding of machine learning algorithms, model evaluation, and tuning. • Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch. • Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration. • Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes). • Familiarity with monitoring tools and logging frameworks for ML systems. • Excellent problem-solving and communication skills. Preferred Qualifications: • AWS Certification (e.g., AWS Certified Machine Learning Specialty). • Experience with real-time inference and streaming data. • Knowledge of data governance, security, and compliance in ML systems.
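To illustrate the SageMaker-centred workflow this role describes, here is a minimal training-and-deployment sketch using the SageMaker Python SDK; the execution role ARN, container image URI, instance types, hyperparameters, and S3 paths are placeholders, not real resources.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder ARN

# Generic Estimator pointed at a training container image (URI is illustrative).
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/models/",
    sagemaker_session=session,
    hyperparameters={"epochs": "10"},
)

# Launch a training job against data staged in S3.
estimator.fit({"train": "s3://example-bucket/train/"})

# Deploy the trained model behind a real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```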
Posted 3 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
The data engineer is responsible for designing, architecting and implementing robust, scalable and maintainable data pipelines. The candidate will work directly with upstream stakeholders (application owners, data providers) and downstream stakeholders (data consumers, data analysts, data scientists) to define data pipeline requirements and implement solutions that serve downstream stakeholders' needs through APIs and materialized views. Day to day, the candidate works in conjunction with the Data Analyst on the aggregation and preparation of data, interacts with security, continuity and IT architecture teams to validate the IT assets designed and developed, and works with the BNP Paribas international team. Direct Responsibilities Work on the stages from data ingestion to analytics, encompassing integration, transformation, warehousing and maintenance The Data Engineer designs architecture, orchestrates, deploys and monitors reliable data processing systems. Implement batch and streaming data pipelines to ingest data into the data warehouse Perform underlying activities (data architecture, data management, DataOps, security) Perform data transformation and modeling, to convert data from OLTP to OLAP to speed up data querying, and best align with business needs Serve downstream stakeholders across the organization, whose improved access to standardized data will make them more effective at delivering use cases, building dashboards and guiding decisions Technical Competencies Master data engineering fundamental concepts (Data Warehouse, Data Lake, Data Lakehouse) Master Golang, Bash, SQL, Python Master HTTP and REST API best practices Master batch and streaming data pipelines using Kafka Master code versioning with Git and best practices for continuous integration & delivery (CI/CD) Master writing clean and tested code following software engineering best practices (Readable, Modular, Reusable, Extensible) Master data modeling (3NF, Kimball, Vault) Knowledge of data orchestration using Airflow or Dagster Knowledge of self-hosting and managing tools like Metabase and DBT Knowledge of cloud principles and infrastructure management (IAM, Logging, Terraform, Ansible) Knowledge of data abstraction layers (Object Storage, Relational, NoSQL, Document, Trino, and Graph databases) Knowledge of containerization and workload orchestration (Docker, Kubernetes, Artifactory) Background in working in an agile environment (knowledge of the methods and their limits) Skills Referential Behavioural Skills : (Please select up to 4 skills) Communication skills - oral & written Attention to detail / rigor Adaptability Ability to synthesize / simplify Transversal Skills: Ability to develop and adapt a process Ability to understand, explain and support change Analytical Ability Ability to set up relevant performance indicators Ability to anticipate business / strategic evolution Education Level: Bachelor's Degree or equivalent Experience Level At least 5 years
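As a concrete flavour of the streaming-ingestion work mentioned above, here is a minimal Kafka consumer sketch using the kafka-python client; the topic, consumer group, broker address, and batch size are illustrative assumptions (the posting also lists Golang, so the same pattern could equally be written there).

```python
import json
from kafka import KafkaConsumer   # kafka-python client; confluent-kafka is a common alternative

consumer = KafkaConsumer(
    "payments.raw",                              # topic name is illustrative
    bootstrap_servers=["localhost:9092"],
    group_id="warehouse-loader",
    auto_offset_reset="earliest",
    enable_auto_commit=False,
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:
        # In a real pipeline this would upsert into the warehouse or lakehouse layer.
        print(f"flushing {len(batch)} records")
        batch.clear()
        consumer.commit()   # commit offsets only after a successful flush
```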
Posted 3 weeks ago
7.0 - 12.0 years
10 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Skill: Data Engineer Experience: 7+ Years Location: Warangal, Bangalore, Chennai, Hyderabad, Mumbai, Pune, Delhi, Noida, Gurgaon, Kolkata, Jaipur, Jodhpur Notice Period: Immediate - 15 Days Job Description: Design & Build Data Pipelines Develop scalable ETL/ELT workflows to ingest, transform, and load data into Snowflake using SQL, Python, or data integration tools. Data Modeling Create and optimize Snowflake schemas, tables, views, and materialized views to support business analytics and reporting needs. Performance Optimization Tune Snowflake compute resources (warehouses), optimize query performance, and manage clustering and partitioning strategies. Other focus areas include: Data Quality & Validation, Security & Access Control, Automation & CI/CD, Monitoring & Troubleshooting, and Documentation.
Posted 3 weeks ago
6.0 - 10.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset—a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others. 
Required Technical and Professional Expertise Minimum 5+ years of Data consulting experience with a proven track record of building and maintaining client relationships Experience with cloud services on AWS / GCP / Azure to deploy AI products Ability to develop end-to-end AI PoC projects using Python, Flask, FastAPI and Streamlit Good understanding of AI technologies, including machine learning, deep learning, and generative AI techniques (such as generative adversarial networks (GANs), variational autoencoders (VAEs), etc.) Good knowledge of AI frameworks, tools, and platforms such as TensorFlow, PyTorch Knowledge of GenAI libraries and tool sets including HuggingFace, LangChain, RAGAS and more Ability to collaborate with customers and account partners to identify new cloud opportunities, and build proposals and pitch materials to position Kyndryl as a trusted partner for AI & Data transformation Preferred Technical and Professional Expertise Bachelor's degree in Computer Science, Information Security, or a related field Skilled in planning, organization, analytics, and problem-solving. Excellent communication and interpersonal skills to work collaboratively with clients and team members. Comfortable working with statistics. Advanced skills in PowerPoint and Excel. Strong business acumen and understanding of industry trends and challenges in AI and Generative AI. Strong leadership skills and proactive and self-driven attitude. Advanced communication and storytelling skills. Highly adaptable and flexible through shifting priorities. Being You Diversity is a whole lot more than what we look like or where we come from, it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way. What You Can Expect With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you, we want you to succeed so that together, we will all succeed. Get Referred! If you know someone who works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
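To make the AI PoC expectations above concrete, the following is a minimal FastAPI sketch of the kind of endpoint such a PoC might expose; the route, request/response models, and the stubbed `run_model` function are illustrative assumptions, and a real PoC would call a HuggingFace or LangChain pipeline in its place.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Summarization PoC")   # illustrative PoC service

class SummariseRequest(BaseModel):
    text: str
    max_words: int = 100

class SummariseResponse(BaseModel):
    summary: str

def run_model(text: str, max_words: int) -> str:
    # Placeholder for the actual GenAI call (e.g. a HuggingFace pipeline or a
    # LangChain chain); returns a stub so the sketch stays self-contained.
    return " ".join(text.split()[:max_words])

@app.post("/summarise", response_model=SummariseResponse)
def summarise(req: SummariseRequest) -> SummariseResponse:
    return SummariseResponse(summary=run_model(req.text, req.max_words))

# Run locally with:  uvicorn app:app --reload
```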
Posted 3 weeks ago
7.0 - 12.0 years
15 - 27 Lacs
Pune
Hybrid
Notice Period - Immediate joiner Responsibilities Lead, develop and support analytical pipelines to acquire, ingest and process data from multiple sources Debug, profile and optimize integrations and ETL/ELT processes Design and build data models to conform to our data architecture Collaborate with various teams to deliver effective, high value reporting solutions by leveraging an established DataOps delivery methodology Continually recommend and implement process improvements and tools for data collection, analysis, and visualization Address production support issues promptly, keeping stakeholders informed of status and resolutions Partner closely with onshore and offshore technical resources Provide on-call support outside normal business hours as needed Provide status updates to the stakeholders. Identify obstacles and seek assistance with enough lead time to ensure delivery on time Demonstrate technical ability, thoroughness, and accuracy in all assignments Document and communicate on proper operations, standards, policies, and procedures Keep abreast of all new tools and technologies that are related to our enterprise data architecture Foster a positive work environment by promoting teamwork and open communication. Skills/Qualifications Bachelor's degree in Computer Science, preferably with a focus on data engineering. 6+ years of experience in data warehouse development, building and managing data pipelines in cloud computing environments Strong proficiency in SQL and Python Experience with Azure cloud services, including Azure Data Lake Storage, Data Factory, and Databricks Expertise in Snowflake or similar cloud warehousing technologies Experience with GitHub, including GitHub Actions. Familiarity with data visualization tools, such as Power BI or Spotfire Excellent written and verbal communication skills Strong team player with interpersonal skills to interact at all levels Ability to translate technical information for both technical and non-technical audiences Proactive mindset with a sense of urgency and initiative Adaptability to changing priorities and needs If you are interested, share your updated resume at recruit5@focusonit.com. We also request that you share this posting across your network and contacts.
Posted 3 weeks ago
4.0 - 6.0 years
12 - 18 Lacs
Chennai, Bengaluru
Work from Office
Key Skills : Python, SQL, PySpark, Databricks, AWS, Data Pipeline, Data Integration, Airflow, Delta Lake, Redshift, S3, Data Security, Cloud Platforms, Life Sciences. Roles & Responsibilities : Develop and maintain robust, scalable data pipelines for ingesting, transforming, and optimizing large datasets from diverse sources. Integrate multi-source data into performant, query-optimized formats such as Delta Lake, Redshift, and S3. Tune data processing jobs and storage layers to ensure cost efficiency and high throughput. Automate data workflows using orchestration tools like Airflow and Databricks APIs for ingestion, transformation, and reporting. Implement data validation and quality checks to ensure reliable and accurate data. Manage and optimize AWS and Databricks infrastructure to support scalable data operations. Lead cloud platform migrations and upgrades, transitioning legacy systems to modern, cloud-native solutions. Enforce security best practices, ensuring compliance with regulatory standards such as IAM and data encryption. Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders to deliver data solutions. Experience Requirement : 4-6 years of hands-on experience in data engineering with expertise in Python, SQL, PySpark, Databricks, and AWS. Strong background in designing and building data pipelines, and optimizing data storage and processing. Proficiency in using cloud services such as AWS (S3, Redshift, Lambda) for building scalable data solutions. Hands-on experience with containerized environments and orchestration tools like Airflow for automating data workflows. Expertise in data migration strategies and transitioning legacy data systems to modern cloud platforms. Experience with performance tuning, cost optimization, and lifecycle management of cloud data solutions. Familiarity with regulatory compliance (GDPR, HIPAA) and security practices (IAM, encryption). Experience in the Life Sciences or Pharma domain is highly preferred, with an understanding of industry-specific data requirements. Strong problem-solving abilities with a focus on delivering high-quality data solutions that meet business needs. Education : Any Graduation.
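To ground the Delta Lake/PySpark responsibilities listed above, here is a minimal sketch of an ingestion step that writes a query-optimised Delta table; the S3 paths and column handling are illustrative assumptions rather than details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # on Databricks a session already exists

# Read raw CSV files landed in S3 (path is illustrative).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-bucket/landing/lab_results/")
)

# Light cleanup: normalise column names, drop duplicates, stamp the load date.
clean = (
    raw.toDF(*[c.strip().lower().replace(" ", "_") for c in raw.columns])
       .dropDuplicates()
       .withColumn("ingest_date", F.current_date())
)

# Write a partitioned Delta table that downstream jobs can merge into.
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .save("s3://example-bucket/delta/lab_results/")
)
```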
Posted 3 weeks ago
6.0 - 9.0 years
18 - 25 Lacs
Chennai
Work from Office
Key Skills : Python, SQL, PySpark, Databricks, AWS, Data Pipeline, Data Governance, Data Security, Leadership, Cloud Platforms, Life Sciences, Migration, Airflow. Roles & Responsibilities : Lead a team of data engineers and developers, defining technical strategy, best practices, and architecture for data platforms. Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks. Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets. Enforce data validation, lineage, and quality checks across the data lifecycle, defining standards for metadata, cataloging, and governance. Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations. Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks. Define and enforce data security standards, IAM policies, and ensure compliance with industry-specific regulatory frameworks. Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions. Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime. Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team. Experience Requirement : 6-9 years of hands-on experience in data engineering with expertise in Python, SQL, PySpark, Databricks, and AWS. Strong leadership experience in data engineering or data architecture roles, with a proven track record in leading teams and delivering large-scale data solutions. Expertise in designing and developing data pipelines, optimizing performance, and ensuring data quality. Solid experience with cloud platforms (AWS, Databricks), data governance, and security best practices. Experience in data migration strategies and leading transitions from on-premises to cloud-based environments. Experience in the Life Sciences or Pharma domain is highly preferred, with a deep understanding of industry-specific data requirements. Strong communication and interpersonal skills with the ability to collaborate across teams and engage stakeholders. Education : Any Graduation.
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
Our company We're Hitachi Digital Services, a global digital solutions and transformation business with a bold vision of our world's potential. We're people-centric and here to power good. Every day, we future-proof urban spaces, conserve natural resources, protect rainforests, and save lives. This is a world where innovation, technology, and deep expertise come together to take our company and customers from what's now to what's next. We make it happen through the power of acceleration. Imagine the sheer breadth of talent it takes to bring a better tomorrow closer to today. We don't expect you to "fit" every requirement - your life experience, character, perspective, and passion for achieving great things in the world are equally as important to us. The role Data Engineering Support (GDC) Minimum experience of around 6 years as a Data Engineer with Spark, Scala, Hadoop, Java and Python. Good working experience with message brokers ActiveMQ, JMS and Kafka. Strong working experience with relational SQL and NoSQL databases, including Postgres and Cassandra, and with analyzing large datasets. Should have strong experience with AWS architecture best practices, AWS EC2, AWS service APIs, AWS CLI, and SDKs to write applications and services. Good experience with ETL activities, data pipeline and workflow management tools. Good experience with Databricks is an added advantage. Should have good working experience with Agile methodology Should have experience with ServiceNow/JIRA tools Experience with search engines like Solr, Lucene, or ELK is an added advantage. Strong understanding of OOP & SOA principles, design patterns, industry best practices Analyzes production applications and services issues and determines the most efficient and economical programming solutions. Perform change management activities such as source code management, creating activity records for production implementation, creating implementation plans Strong written and verbal communication skills and ability to attend client meetings Should have a strong understanding of coding standards and guidelines Strong interpersonal skills and time management skills Strong analytical and troubleshooting skills Should have experience leading a team and handling day-to-day activities Should be a good team player Should work from the office in hybrid mode and in shift hours.
We're always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you'll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with.
Posted 3 weeks ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
Genpact is a global professional services and solutions firm focused on delivering outcomes that shape the future. With over 125,000 employees in more than 30 countries, we are driven by curiosity, agility, and the desire to create lasting value for our clients. Our purpose is the relentless pursuit of a world that works better for people, serving and transforming leading enterprises, including Fortune Global 500 companies, through deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. We are currently seeking applications for the position of Lead Consultant-Databricks Developer - AWS. As a Databricks Developer in this role, you will be responsible for solving cutting-edge real-world problems to meet both functional and non-functional requirements. Responsibilities: - Stay updated on new and emerging technologies and explore their potential applications for service offerings and products. - Collaborate with architects and lead engineers to design solutions that meet functional and non-functional requirements. - Demonstrate knowledge of relevant industry trends and standards. - Showcase strong analytical and technical problem-solving skills. - Possess excellent coding skills, particularly in Python or Scala, with a preference for Python. Qualifications: Minimum qualifications: - Bachelor's Degree in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience. - Stay informed about new technologies and their potential applications. - Collaborate with architects and lead engineers to develop solutions. - Demonstrate knowledge of industry trends and standards. - Exhibit strong analytical and technical problem-solving skills. - Proficient in Python or Scala coding. - Experience in the Data Engineering domain. - Completed at least 2 end-to-end projects in Databricks. Additional qualifications: - Familiarity with Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration. - Understanding of Databricks Lakehouse concept and its implementation in enterprise environments. - Ability to create complex data pipelines. - Strong knowledge of Data structures & algorithms. - Proficiency in SQL and Spark-SQL. - Experience in performance optimization to enhance efficiency and reduce costs. - Worked on both Batch and streaming data pipelines. - Extensive knowledge of Spark and Hive data processing framework. - Experience with cloud platforms (Azure, AWS, GCP) and common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, Cloud databases. - Skilled in writing unit and integration test cases. - Excellent communication skills and experience working in teams of 5 or more. - Positive attitude towards learning new skills and upskilling. - Knowledge of Unity catalog and basic governance. - Understanding of Databricks SQL Endpoint. - Experience in CI/CD to build pipelines for Databricks jobs. - Exposure to migration projects for building Unified data platforms. - Familiarity with DBT, Docker, and Kubernetes. This is a full-time position based in India-Gurugram. The job posting was on August 5, 2024, and the unposting date is set for October 4, 2024.,
Posted 3 weeks ago
2.0 - 5.0 years
5 - 11 Lacs
Chennai
Hybrid
Job Posting: Support Analyst Big Data & Application Support (Chennai) Location: Chennai, India (Chennai-based candidates only) Experience: 2 to 5 years Employment Type: Full-Time | Hybrid Model Department: Digital Technology Services IT Digital Function: DaaS (Data as a Service), AI & RPA Support * Note: Only candidates meeting the above criteria will be contacted for further process. Role Overview We are looking for a Support Analyst to join our dynamic DTS IT Digital team in Chennai. In this role, you will support and maintain data platforms, AI/RPA systems, and big data ecosystems. You'll play a key part in production support, rapid incident recovery, and platform improvements, working with global stakeholders. Key Responsibilities Serve as L2/L3 support and point of contact for global support teams Perform detailed root cause analysis (RCA) and prevent incident recurrence Maintain, monitor, and support big data platforms and ETL tools Coordinate with multiple teams for incident and change management Contribute to disaster recovery planning, resiliency events, and capacity management Document support processes, fixes, and participate in monthly RCA reviews Technical Skills Required Proficient in Unix/Linux command line, basic Windows server operations Hands-on with big data and ETL tools such as: Hadoop, MapR, HDFS, Spark, Apache Drill, Yarn, Oozie, Ab Initio, Alteryx, Spotfire Strong SQL skills and understanding of data processing Familiarity with problem/change/incident management processes Good scripting knowledge (Shell/Python – optional but preferred) What We’re Looking For Bachelor's degree in Computer Science, IT, or a related field 2 to 5 years of experience in application support or big data platform support Ability to communicate technical issues clearly to non-technical stakeholders Strong problem-solving skills and a collaborative mindset Experience in banking, financial services, or enterprise-grade systems is a plus Why Join Us? Be part of a global innovation and technology team Opportunity to work on AI, RPA, and large-scale data platforms Hybrid work culture with strong global collaboration Career development in a stable and inclusive banking giant Ready to Apply? If you're a passionate technologist with strong support experience and big data platform knowledge, we want to hear from you!
Posted 3 weeks ago
5.0 - 10.0 years
16 - 31 Lacs
Pune
Hybrid
Software Engineer - Lead/Sr. Engineer Bachelor's in Computer Science, Engineering, or equivalent experience 7+ years of experience in core Java, Spring Framework (Required) 2 years of cloud experience (GCP, AWS, or Azure; GCP preferred) (Required) Experience in big data processing on a distributed system (Required) Experience with databases: RDBMS and cloud-native NoSQL databases (Required) Experience in handling various data formats like flat files, JSON, Avro, XML, etc., including defining schemas and contracts (Required) Experience in implementing data pipelines (ETL) using Dataflow (Apache Beam) Experience with microservices and API integration patterns for data processing. Experience with data structures and defining and designing data models.
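The role above is Java/Spring focused, but to illustrate the Dataflow (Apache Beam) pipeline pattern it mentions, here is a minimal sketch using Beam's Python SDK; the bucket paths, field names, and local-runner defaults are illustrative assumptions.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# On Dataflow this would also set --runner=DataflowRunner, --project,
# --region and --temp_location; with no options it uses the local runner.
options = PipelineOptions()

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadEvents" >> beam.io.ReadFromText("gs://example-bucket/raw/events.jsonl")
        | "ParseJson" >> beam.Map(json.loads)
        | "KeepPurchases" >> beam.Filter(lambda e: e.get("type") == "purchase")
        | "KeyByUser" >> beam.Map(lambda e: (e["user_id"], 1))
        | "CountPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "WriteCounts" >> beam.io.WriteToText("gs://example-bucket/curated/purchases_per_user")
    )
```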
Posted 3 weeks ago
6.0 - 9.0 years
9 - 13 Lacs
Chennai
Work from Office
Experience : 6+ years as Azure Data Engineer including at least 1 E2E Implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2 etc) in Fabric, Pyspark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems like SAP ECC/S4HANA/SAP BW etc will be a plus. - Nice to have ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
Posted 3 weeks ago