5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
As a senior SAP Consultant, you will serve as a client-facing practitioner, working collaboratively with clients to deliver high-quality solutions and acting as a trusted business advisor with a deep understanding of the SAP Accelerate delivery methodology or equivalent and its associated work products. You will work on projects that assist clients in integrating strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries. Your primary responsibilities include: Strategic SAP Solution Focus: working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs. Comprehensive Solution Delivery: involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 5 - 7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS; exposure to streaming solutions and message brokers such as Kafka. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.
Posted 1 month ago
6.0 - 10.0 years
8 - 14 Lacs
Bengaluru
Work from Office
Work Location: Bangalore (CV Raman Nagar location) Notice Period: Immediate - 30 days Mandatory Skills: Big Data, Python, SQL, Spark/PySpark, AWS Cloud JD and Required Skills & Responsibilities: - Actively participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support. - Solve complex business problems by utilizing a disciplined development methodology. - Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies. - Analyse the source and target system data. Map the transformations that meet the requirements. - Interact with the client and onsite coordinators during different phases of a project. - Design and implement product features in collaboration with business and technology stakeholders. - Anticipate, identify, and solve issues concerning data management to improve data quality. - Clean, prepare, and optimize data at scale for ingestion and consumption. - Support the implementation of new data management projects and restructure the current data architecture. - Implement automated workflows and routines using workflow scheduling tools. - Understand and use continuous integration, test-driven development, and production deployment frameworks. - Participate in design, code, test plans, and dataset implementation performed by other data engineers in support of maintaining data engineering standards. - Analyze and profile data for the purpose of designing scalable solutions. - Troubleshoot straightforward data issues and perform root cause analysis to proactively resolve product issues. Required Skills: - 5+ years of relevant experience developing data and analytics solutions. - Experience building data lake solutions leveraging one or more of the following: AWS, EMR, S3, Hive & PySpark. - Experience with relational SQL. - Experience with scripting languages such as Python. - Experience with source control tools such as GitHub and related development processes. - Experience with workflow scheduling tools such as Airflow. - In-depth knowledge of AWS Cloud (S3, EMR, Databricks). - Has a passion for data solutions. - Has a strong problem-solving and analytical mindset. - Working experience in the design, development, and testing of data pipelines. - Experience working with Agile teams. - Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders. - Able to quickly pick up new programming languages, technologies, and frameworks. - Bachelor's degree in Computer Science.
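The posting above asks for automated workflows built with scheduling tools such as Airflow on top of AWS EMR and PySpark. As a rough illustration only, the sketch below shows how such a routine might be wired together; the DAG id, EMR cluster id, and script path are hypothetical and not taken from the posting.

# Minimal sketch: an Airflow DAG that submits a PySpark step to an existing EMR cluster.
# DAG id, cluster id, and script location are illustrative placeholders (Airflow 2.x,
# amazon provider package assumed).
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.emr import EmrAddStepsOperator
from airflow.providers.amazon.aws.sensors.emr import EmrStepSensor

SPARK_STEP = [{
    "Name": "daily-ingest",
    "ActionOnFailure": "CONTINUE",
    "HadoopJarStep": {
        "Jar": "command-runner.jar",
        "Args": ["spark-submit", "--deploy-mode", "cluster",
                 "s3://example-bucket/jobs/daily_ingest.py"],  # hypothetical script path
    },
}]

with DAG(
    dag_id="daily_ingest_pipeline",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    add_step = EmrAddStepsOperator(
        task_id="submit_spark_job",
        job_flow_id="j-EXAMPLECLUSTER",      # hypothetical EMR cluster id
        steps=SPARK_STEP,
    )
    wait_for_step = EmrStepSensor(
        task_id="wait_for_spark_job",
        job_flow_id="j-EXAMPLECLUSTER",
        step_id="{{ task_instance.xcom_pull(task_ids='submit_spark_job')[0] }}",
    )
    add_step >> wait_for_step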
Posted 1 month ago
4.0 - 6.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing!! Role: AWS Data Engineer Experience Required: 4 to 6 yrs Work Location: Bangalore/Pune/Hyderabad/Chennai Required Skills: PySpark, AWS Glue. Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 month ago
7.0 - 12.0 years
9 - 14 Lacs
Pune, Hinjewadi
Work from Office
Job Summary Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organizations efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives. Software Requirements Required Skills: Proficiency in PySpark Familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL) Experience with Linux/Unix operating systems Data processing tools like Apache Kafka or similar streaming platforms Preferred Skills: Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight) Knowledge of Python (beyond PySpark), Java or Scala relevant to big data applications Familiarity with data orchestration tools (e.g., Apache Airflow, Luigi) Overall Responsibilities Design, develop, and optimize scalable data processing pipelines using PySpark. Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions. Implement data transformations, aggregations, and extraction processes to support analytics and reporting. Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance. Troubleshoot and resolve performance issues within big data workflows. Document data processes, architectures, and best practices to promote consistency and knowledge sharing. Support data migration and integration efforts across varied platforms. Strategic Objectives: Enable efficient and reliable data processing to meet organizational analytics and reporting needs. Maintain high standards of data security, compliance, and operational durability. Drive continuous improvement in data workflows and infrastructure. Performance Outcomes & Expectations: Efficient processing of large-scale data workloads with minimum downtime. Clear, maintainable, and well-documented code. Active participation in team reviews, knowledge transfer, and innovation initiatives. Technical Skills (By Category) Programming Languages: Required: PySpark (essential); Python (needed for scripting and automation) Preferred: Java, Scala Databases/Data Management: Required: Experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake) Preferred: Experience with NoSQL databases (Cassandra, HBase) Cloud Technologies: Required: Familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP Preferred: Cloud certifications Frameworks and Libraries: Required: Spark SQL, Spark MLlib (basic familiarity) Preferred: Integration with streaming platforms (e.g., Kafka), data validation tools Development Tools and Methodologies: Required: Version control systems (e.g., Git), Agile/Scrum methodologies Preferred: CI/CD pipelines, containerization (Docker, Kubernetes) Security Protocols: Optional: Basic understanding of data security practices and compliance standards relevant to big data management Experience Requirements Minimum of 7+ years of experience in big data environments with hands-on PySpark development. Proven ability to design and implement large-scale data pipelines. Experience working with cloud and on-premises big data architectures. Preference for candidates with domain-specific experience in finance, banking, or related sectors. 
Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply. Day-to-Day Activities Develop, test, and deploy PySpark data processing jobs to meet project specifications. Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews. Optimize existing data pipelines for performance and scalability. Monitor data workflows, troubleshoot issues, and implement fixes. Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs. Contribute to documentation, standards, and best practices for data engineering processes. Support the onboarding of new data sources, including integration and validation. Decision-Making Authority & Responsibilities: Identify performance bottlenecks and propose effective solutions. Decide on appropriate data processing approaches based on project requirements. Escalate issues that impact project timelines or data integrity. Qualifications Bachelors degree in Computer Science, Information Technology, or related field. Equivalent experience considered. Relevant certifications are preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar. Commitment to ongoing professional development in data engineering and big data technologies. Demonstrated ability to adapt to evolving data tools and frameworks. Professional Competencies Strong analytical and problem-solving skills, with the ability to model complex data workflows. Excellent communication skills to articulate technical solutions to non-technical stakeholders. Effective teamwork and collaboration in a multidisciplinary environment. Adaptability to new technologies and emerging trends in big data. Ability to prioritize tasks effectively and manage time in fast-paced projects. Innovation mindset, actively seeking ways to improve data infrastructure and processes.
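Since the role above revolves around developing and optimizing PySpark processing jobs, here is a minimal sketch of the general shape such a job can take. The S3 paths and column names are assumptions made for illustration, not details from the posting.

# Minimal PySpark sketch: read raw events, derive a date column, and write a daily aggregate.
# Paths and column names are illustrative, not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_aggregation").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/events/")   # hypothetical source

daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "customer_id")
       .agg(F.count("*").alias("event_count"),
            F.sum("amount").alias("total_amount"))
)

# Partitioning by date keeps downstream reads selective.
daily.write.mode("overwrite").partitionBy("event_date") \
     .parquet("s3://example-bucket/curated/daily_agg/")       # hypothetical target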
Posted 1 month ago
8.0 - 13.0 years
18 - 33 Lacs
Bengaluru
Hybrid
Warm Greetings from SP Staffing!! Role: AWS Data Engineer Experience Required: 8 to 15 yrs Work Location: Bangalore Required Skills: Technical knowledge of data engineering solutions and practices. Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, Athena. Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices. Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Hyderabad
Work from Office
Grade Level (for internal use): 10 Market Intelligence The Role: Senior Full Stack Developer Grade level: 10 The Team: You will work with a team of intelligent, ambitious, and hard-working software professionals. The team is responsible for the architecture, design, development, quality, and maintenance of the next-generation financial data web platform. Other responsibilities include transforming product requirements into technical design and implementation. You will be expected to participate in the design review process, write high-quality code, and work with a dedicated team of QA Analysts and Infrastructure Teams. The Impact: Market Intelligence is seeking a Software Developer to create software design, development, and maintenance for data processing applications. This person would be part of a development team that manages and supports the internal and external applications supporting the business portfolio. This role expects the candidate to handle data processing and big data application development. We have teams made up of people that learn how to work effectively together while working with the larger group of developers on our platform. What's in it for you: Opportunity to contribute to the development of a world-class Platform Engineering team. Engage in a highly technical, hands-on role designed to elevate team capabilities and foster continuous skill enhancement. Be part of a fast-paced, agile environment that processes massive volumes of data, ideal for advancing your software development and data engineering expertise while working with a modern tech stack. Contribute to the development and support of Tier-1, business-critical applications that are central to operations. Gain exposure to and work with cutting-edge technologies including AWS Cloud, EMR, and Apache NiFi. Grow your career within a globally distributed team, with clear opportunities for advancement and skill development. Responsibilities: Design and develop applications, components, and common services based on development models, languages, and tools, including unit testing, performance testing, monitoring, and implementation. Support business and technology teams as necessary during design, development, and delivery to ensure scalable and robust solutions. Build data-intensive applications and services to support and enhance fundamental financials in appropriate technologies (C#, .NET Core, Databricks, Spark, Python, Scala, NiFi, SQL). Build data models, perform performance tuning, and apply data architecture concepts. Develop applications adhering to secure coding practices and industry-standard coding guidelines, ensuring compliance with security best practices (e.g., OWASP) and internal governance policies. Implement and maintain CI/CD pipelines to streamline build, test, and deployment processes; develop comprehensive unit test cases and ensure code quality. Provide operations support to resolve issues proactively and with utmost urgency. Effectively manage time and multiple tasks. Communicate effectively, especially in writing, with the business and other technical groups. What We're Looking For: Basic Qualifications: Bachelor's/Master's Degree in Computer Science, Information Systems, or equivalent. Minimum 5 to 8 years of strong hands-on development experience in C#, .NET Core, cloud-native, and MS SQL Server backend development. Proficiency with object-oriented programming.
Advanced SQL programming skills. Preferred experience or familiarity with tools and technologies such as OData, Grafana, Kibana, Big Data platforms, Apache Kafka, GitHub, AWS EMR, Terraform, and emerging areas like AI/ML and GitHub Copilot. Highly recommended skill set in Databricks, Spark, and Scala technologies. Understanding of database performance tuning in large datasets. Ability to manage multiple priorities efficiently and effectively within specific timeframes. Excellent logical, analytical, and communication skills are essential, with strong verbal and writing proficiencies. Knowledge of Fundamentals or the financial industry is highly preferred. Experience in conducting application design and code reviews. Proficiency with the following technologies: object-oriented programming, programming languages (C#, .NET Core), cloud computing, database systems (SQL, MS SQL Server). Nice to have: NoSQL (Databricks, Spark, Scala, Python), scripting (Bash, Scala, Perl, PowerShell). Preferred Qualifications: Hands-on experience with cloud computing platforms including AWS, Azure, or Google Cloud Platform (GCP). Proficient in working with Snowflake and Databricks for cloud-based data analytics and processing. Benefits: Health & Wellness: health care coverage designed for the mind and body. Continuous Learning: access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: it's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: from retail discounts to referral incentive awards, small perks can make a big difference.
Posted 1 month ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers such as Kafka. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers.
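The posting mentions building streaming pipelines with Spark and Kafka; a minimal sketch of that pattern, using Spark Structured Streaming, is shown below. The broker address, topic, record schema, and S3 locations are illustrative assumptions, and the spark-sql-kafka connector is assumed to be on the classpath.

# Minimal sketch of a streaming ingest pipeline: consume a Kafka topic with
# Spark Structured Streaming and land the parsed records on S3.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Hypothetical record schema for the illustration.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
         .option("subscribe", "orders")                      # hypothetical topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("data"))
         .select("data.*")
)

query = (
    stream.writeStream.format("parquet")
          .option("path", "s3://example-bucket/streams/orders/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
          .start()
)
query.awaitTermination()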
Posted 1 month ago
5.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 5 - 7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS. Exposure to streaming solutions and message brokers such as Kafka. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers. AWS S3, Redshift, and EMR for data storage and distributed processing. AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
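The last line of the posting describes serverless, event-driven workflows built with AWS Lambda, Step Functions, and Glue. A minimal sketch of one such trigger, a Lambda handler that starts a Glue job when a file lands in S3, is below; the Glue job name and argument key are hypothetical placeholders.

# Minimal sketch of an event-driven trigger: a Lambda handler that starts an
# AWS Glue job run for the S3 object that fired the event notification.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Pull the bucket/key of the object from the S3 event notification payload.
    record = event["Records"][0]["s3"]
    source_path = f"s3://{record['bucket']['name']}/{record['object']['key']}"

    response = glue.start_job_run(
        JobName="curate-orders",                    # hypothetical Glue job name
        Arguments={"--source_path": source_path},   # hypothetical job argument
    )
    return {"JobRunId": response["JobRunId"]}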
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9 As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Role & responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Preferred candidate profile: 1. Bachelor's degree in Computer Science, Information Technology, or related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies
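Building a data lake on S3 with Delta Lake, as listed in the responsibilities above, typically involves incremental upserts into Delta tables. The sketch below illustrates that pattern under the assumption that the delta-spark package is configured for the Spark session; the table path and merge key are placeholders, not details from the posting.

# Minimal sketch of a Delta Lake upsert (merge) on S3, assuming delta-spark is available.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta_upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("s3://example-bucket/staging/customers/")   # hypothetical staging data

target = DeltaTable.forPath(spark, "s3://example-bucket/lake/customers/")  # hypothetical Delta table
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")   # hypothetical merge key
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)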
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Gurugram, Bengaluru
Work from Office
To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9 As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Role & responsibilities: 1. Design and implement scalable, high-performance data pipelines using AWS services 2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda 3. Build and maintain data lakes using S3 and Delta Lake 4. Create and manage analytics solutions using Amazon Athena and Redshift 5. Design and implement database solutions using Aurora, RDS, and DynamoDB 6. Develop serverless workflows using AWS Step Functions 7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL 8. Ensure data quality, security, and compliance with industry standards 9. Collaborate with data scientists and analysts to support their data needs 10. Optimize data architecture for performance and cost-efficiency 11. Troubleshoot and resolve data pipeline and infrastructure issues Preferred candidate profile: 1. Bachelor's degree in Computer Science, Information Technology, or related field 2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS 3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3 4. Experience with data lake technologies, particularly Delta Lake 5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: - AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions - Big Data: Hadoop, Spark, Delta Lake - Programming: Python, PySpark - Databases: SQL, PostgreSQL, NoSQL - Data Warehousing and Analytics - ETL/ELT processes - Data Lake architectures - Version control: Git - Agile methodologies
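The posting also lists creating analytics solutions with Amazon Athena over the S3 data lake. As a rough sketch only, the snippet below runs an Athena query with boto3, polls for completion, and prints the result rows; the database name, table, and output location are assumptions for illustration.

# Minimal sketch of running an Amazon Athena query over an S3 data lake with boto3.
import time

import boto3

athena = boto3.client("athena")

run = athena.start_query_execution(
    QueryString="SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id",
    QueryExecutionContext={"Database": "analytics"},                          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)

query_id = run["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # the first row holds the column headers
        print([col.get("VarCharValue") for col in row["Data"]])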
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
EPAM has a presence across 40+ countries globally with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune & Bangalore). Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai Work Mode: Hybrid (2-3 days office in a week) Job Description: 5-14 years of experience in Big Data and data-related technologies Expert-level understanding of distributed computing principles Expert-level knowledge of and experience in Apache Spark Hands-on programming with Python Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop Experience with building stream-processing systems, using technologies such as Apache Storm or Spark Streaming Good understanding of Big Data querying tools, such as Hive and Impala Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files Good understanding of SQL queries, joins, stored procedures, relational schemas Experience with NoSQL databases, such as HBase, Cassandra, MongoDB Knowledge of ETL techniques and frameworks Performance tuning of Spark jobs Experience with native cloud data services (AWS) Ability to lead a team efficiently Experience with designing and implementing Big Data solutions Practitioner of Agile methodology WE OFFER Opportunity to work on technical challenges that may impact across geographies Vast opportunities for self-development: online university, knowledge-sharing opportunities globally, learning opportunities through external certifications Opportunity to share your ideas on international platforms Sponsored Tech Talks & Hackathons Possibility to relocate to any EPAM office for short- and long-term projects Focused individual development Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
Posted 1 month ago
7.0 - 11.0 years
10 - 14 Lacs
Chennai
Work from Office
What you'll do: As a Software Engineer, you will work with a world-class team developing and deploying new technologies on a cutting-edge network. You will design, develop, and deploy new and innovative technology into a service provider network. Viasat's unique position as a service provider and equipment manufacturer allows you to experience the whole life cycle of software development, all the way from design to deployment. The day-to-day: You will be a member of the software team that is involved in embedded software development. The software interacts with different network elements, both on the Access Network (adapting to the L2 subsystem) and on the CSN Network (adapting to service network components). Our team members enjoy working closely with each other utilizing an agile development methodology. Priorities can change quickly, but our team members are able to stay ahead of deadlines to delight every one of our customers, whether they are internal or external to Viasat. We are searching for candidates who enjoy working with people and have a technical mind that excels when being challenged. What you'll need: 7 to 11 years of software engineering experience in Java with a strong emphasis on software architecture and design on Unix/Linux-based platforms. Experience with network programming and concurrent/multithreaded programming. Experience building CI/CD pipelines and automated software deployments. Experience working in a cloud environment, AWS EMR. Familiarity with Hadoop and data processing technologies such as Kafka is advantageous. Problem-solving experience and a DevOps approach. Strong oral and written communication skills. Bachelor's degree in Computer Science, Electrical Engineering, or related engineering disciplines. Up to 10% travel. What will help you on the job: Knowledge of tools like Jenkins, JIRA, and Git. Experience with bash, Ansible, and Python scripting in Linux. Experience with telecom/networking/satellite/wireless communications.
Posted 1 month ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
We are seeking a detail-oriented and highly skilled Data Engineering Test Automation Engineer to ensure the quality, reliability, and performance of our data pipelines and platforms. The ideal candidate will have a strong background in data testing , ETL validation , and test automation frameworks . You will work closely with data engineers, analysts, and DevOps teams to build robust test suites for large-scale data solutions. This role combines deep technical execution with a solid foundation in QA best practices including test planning, defect tracking, and test lifecycle management . You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines , contributing to the design of automation frameworks , and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms . The role is highly technical and hands-on , with a strong focus on automation, data accuracy, completeness, consistency , and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities Design, develop, and maintain automated test scripts for data pipelines, ETL jobs, and data integrations. Validate data accuracy, completeness, transformations, and integrity across multiple systems. Collaborate with data engineers to define test cases and establish data quality metrics. Develop reusable test automation frameworks and CI/CD integrations (e.g., Jenkins, GitHub Actions). Perform performance and load testing for data systems. Maintain test data management and data mocking strategies. Identify and track data quality issues, ensuring timely resolution. Perform root cause analysis and drive corrective actions. Contribute to QA ceremonies (standups, planning, retrospectives) and drive continuous improvement in QA processes and culture. Must-Have Skills Experience in QA roles, with strong exposure to data pipeline validation and ETL Testing. Domain Knowledge of R&D domain of life science. Validate data accuracy, transformations, schema compliance, and completeness across systems using PySpark and SQL . Strong hands-on experience with Python, and optionally PySpark, for developing automated data validation scripts. Proven experience in validating ETL workflows, with a solid understanding of data transformation logic, schema comparison, and source-to-target mapping. Experience working with data integration and processing platforms like Databricks/Snowflake, AWS EMR, Redshift etc Experience in manual and automated testing of data pipelines executions for both batch and real-time data pipelines. Perform performance testing of large-scale complex data engineering pipelines. Ability to troubleshoot data issues independently and collaborate with engineering teams for root cause analysis Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management. Hands-on experience with API testing using Postman, pytest, or custom automation scripts Experience integrating automated tests into CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar. Knowledge of cloud platforms such as AWS, Azure, GCP. Good-to-Have Skills Certifications in Databricks, AWS, Azure, or data QA (e.g., ISTQB). Understanding of data privacy, compliance, and governance frameworks. 
Knowledge of UI automated testing frameworks like Selenium, JUnit, TestNG. Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch. Education and Professional Certifications: Master's degree and 3 to 7 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 4 to 9 years of Computer Science, IT, or related field experience. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
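For the test-automation responsibilities described in this posting (source-to-target validation of ETL output with PySpark and pytest), a minimal sketch is shown below. The fixture paths, key column, and reconciliation tolerance are illustrative assumptions, not the team's actual framework.

# Minimal sketch of automated source-to-target validation with pytest and PySpark.
import pytest
from pyspark.sql import SparkSession

@pytest.fixture(scope="session")
def spark():
    # Local session keeps the test suite self-contained for CI runs.
    return SparkSession.builder.master("local[2]").appName("etl-tests").getOrCreate()

def test_row_counts_match(spark):
    source = spark.read.parquet("tests/data/source_orders/")   # hypothetical fixture paths
    target = spark.read.parquet("tests/data/target_orders/")
    assert source.count() == target.count()

def test_no_duplicate_keys(spark):
    target = spark.read.parquet("tests/data/target_orders/")
    dupes = target.groupBy("order_id").count().filter("count > 1")   # hypothetical key column
    assert dupes.count() == 0

def test_amount_totals_reconcile(spark):
    source = spark.read.parquet("tests/data/source_orders/")
    target = spark.read.parquet("tests/data/target_orders/")
    src_total = source.agg({"amount": "sum"}).first()[0]
    tgt_total = target.agg({"amount": "sum"}).first()[0]
    assert abs(src_total - tgt_total) < 0.01   # tolerance is an assumption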
Posted 1 month ago
4.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Chennai
Work from Office
Interested can also apply with Sanjeevan Natarajan sanjeevan.natarajan@careernet.in Role & responsibilities Technical Leadership Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms. End-to-End Solution Ownership Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks. Data Pipeline Strategy Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets. Data Governance & Quality Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance. Orchestration & Automation Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations. Cloud Cost & Performance Optimization Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks. Security & Compliance Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks. Collaboration & Stakeholder Engagement Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions. Migration Leadership Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime. Mentorship & Growth Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team. Preferred candidate profile Python , SQL , PySpark , Databricks , AWS (Mandatory) Leadership Experience in Data Engineering/Architecture Added Advantage: Experience in Life Sciences / Pharma
Posted 1 month ago
8.0 - 12.0 years
12 - 17 Lacs
Chennai
Work from Office
Strong working experience in Python programming. Expertise with PySpark. Strong experience using pandas, NumPy, joblib, and other popular libraries. Must have experience with AWS EMR and PySpark. Good working experience with parallel batch processing in Python. Good working experience on AWS Batch and Step Functions. Should have the expertise to write effective, scalable, highly performant code. Good to have: Apache Airflow. Should have implemented 2 or more large-scale projects and been a part of end-to-end system implementation. Good analytical and problem-solving skills
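Given the posting's emphasis on parallel batch processing with Python, pandas, NumPy, and joblib, the following is a minimal sketch of that style of job. The file layout, column names, and worker count are hypothetical assumptions for illustration.

# Minimal sketch of parallel batch processing with pandas and joblib:
# process each input partition in a worker process, then combine the results.
import glob

import pandas as pd
from joblib import Parallel, delayed

def process_partition(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    df["amount_usd"] = df["amount"] * df["fx_rate"]   # hypothetical columns
    return df.groupby("customer_id", as_index=False)["amount_usd"].sum()

files = sorted(glob.glob("data/partitions/*.csv"))    # hypothetical input layout
results = Parallel(n_jobs=4)(delayed(process_partition)(f) for f in files)

summary = pd.concat(results, ignore_index=True)
summary.to_parquet("output/daily_summary.parquet", index=False)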
Posted 1 month ago
4.0 - 9.0 years
15 - 25 Lacs
Hyderabad, Chennai
Work from Office
Interested can also apply with sanjeevan.natarajan@careernet.in Role & responsibilities Technical Leadership Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms. End-to-End Solution Ownership Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks. Data Pipeline Strategy Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets. Data Governance & Quality Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance. Orchestration & Automation Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations. Cloud Cost & Performance Optimization Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks. Security & Compliance Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks. Collaboration & Stakeholder Engagement Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions. Migration Leadership Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime. Mentorship & Growth Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team. Preferred candidate profile Python , SQL , PySpark , Databricks , AWS (Mandatory) Leadership Experience in Data Engineering/Architecture Added Advantage: Experience in Life Sciences / Pharma
Posted 2 months ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
EPAM has a presence across 40+ countries globally with 55,000+ professionals and numerous delivery centers. Key locations are North America, Eastern Europe, Central Europe, Western Europe, APAC, the Middle East, and development centers in India (Hyderabad, Pune & Bangalore). Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai Work Mode: Hybrid (2-3 days office in a week) Job Description: 5-14 years of experience in Big Data and data-related technologies Expert-level understanding of distributed computing principles Expert-level knowledge of and experience in Apache Spark Hands-on programming with Python Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop Experience with building stream-processing systems, using technologies such as Apache Storm or Spark Streaming Good understanding of Big Data querying tools, such as Hive and Impala Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files Good understanding of SQL queries, joins, stored procedures, relational schemas Experience with NoSQL databases, such as HBase, Cassandra, MongoDB Knowledge of ETL techniques and frameworks Performance tuning of Spark jobs Experience with native cloud data services (AWS/Azure/GCP) Ability to lead a team efficiently Experience with designing and implementing Big Data solutions Practitioner of Agile methodology WE OFFER Opportunity to work on technical challenges that may impact across geographies Vast opportunities for self-development: online university, knowledge-sharing opportunities globally, learning opportunities through external certifications Opportunity to share your ideas on international platforms Sponsored Tech Talks & Hackathons Possibility to relocate to any EPAM office for short- and long-term projects Focused individual development Benefit package: health benefits, medical benefits, retirement benefits, paid time off, flexible benefits Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
Posted 2 months ago
5.0 - 7.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 5 - 7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark / Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS. Exposure to streaming solutions and message brokers such as Kafka. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark certified developers. AWS S3, Redshift, and EMR for data storage and distributed processing. AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
Posted 2 months ago
7.0 - 12.0 years
18 - 20 Lacs
Hyderabad
Work from Office
We are hiring a Senior Python with Machine Learning Engineer (Level 3) for a US-based IT company in Hyderabad. Candidates with a minimum of 7 years of experience in Python and machine learning can apply. Job Title: Senior Python with Machine Learning Engineer Level 3 Location: Hyderabad Experience: 7+ Years CTC: 28 LPA - 30 LPA Working shift: Day shift Job Description: We are seeking a highly skilled and experienced Python Developer with a strong background in Machine Learning (ML) to join our advanced analytics team. In this Level 3 role, you will be responsible for designing, building, and deploying robust ML pipelines and solutions across real-time, batch, event-driven, and edge computing environments. The ideal candidate will have extensive hands-on experience in developing and deploying ML workflows using AWS SageMaker, building scalable APIs, and integrating ML models into production systems. This role also requires a strong grasp of the complete ML lifecycle and DevOps practices specific to ML projects. Key Responsibilities: Develop and deploy end-to-end ML pipelines for real-time, batch, event-triggered, and edge environments using Python Utilize AWS SageMaker to build, train, deploy, and monitor ML models using SageMaker Pipelines, MLflow, and Feature Store Build and maintain RESTful APIs for ML model serving using FastAPI, Flask, or Django Work with popular ML frameworks and tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow Ensure best practices across the ML lifecycle: data preprocessing, model training, validation, deployment, and monitoring Implement CI/CD pipelines tailored for ML workflows using tools like Bitbucket, Jenkins, Nexus, and AUTOSYS Design and maintain ETL workflows for ML pipelines using PySpark, Kafka, AWS EMR, and serverless architectures Collaborate with cross-functional teams to align ML solutions with business objectives and deliver impactful results Required Skills & Experience: 5+ years of hands-on experience with Python for scripting and ML workflow development 4+ years of experience with AWS SageMaker for deploying ML models and pipelines 3+ years of API development experience using FastAPI, Flask, or Django 3+ years of experience with ML tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow Strong understanding of the complete ML lifecycle: from model development to production monitoring Experience implementing CI/CD for ML using Bitbucket, Jenkins, Nexus, and AUTOSYS Proficient in building ETL processes for ML workflows using PySpark, Kafka, and AWS EMR Nice to Have: Experience with H2O.ai for advanced machine learning capabilities Familiarity with containerization using Docker and orchestration using Kubernetes For further assistance contact/whatsapp: 9354909517 or write to hema@gist.org.in
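One of the core responsibilities above is serving ML models behind RESTful APIs with FastAPI. A minimal sketch of such an endpoint is below; the model artifact path, feature names, and response field are assumptions for illustration, not the team's actual service.

# Minimal sketch of serving a trained model behind a FastAPI endpoint.
import joblib
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("artifacts/model.joblib")   # hypothetical scikit-learn artifact

class ScoringRequest(BaseModel):
    tenure_months: float     # hypothetical features
    monthly_spend: float

@app.post("/predict")
def predict(req: ScoringRequest):
    # Wrap the request in a single-row DataFrame so the model sees named columns.
    features = pd.DataFrame([req.dict()])
    score = float(model.predict_proba(features)[0, 1])
    return {"churn_probability": score}

# Run locally with: uvicorn app:app --reload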
Posted 2 months ago
3.0 - 6.0 years
0 Lacs
, India
On-site
About the Role: 10 One of the most valuable assets in today's financial industry is data, which can provide businesses the intelligence essential to making business and financial decisions with conviction. This role will give you an opportunity to work on Ratings and Research related data. You will get to work on cutting-edge big data technologies and will be responsible for the development of both data feeds and API work. The Team: RatingsXpress is at the heart of financial workflows when it comes to providing and analyzing data. We provide Ratings and Research information to clients. Our work deals with content ingestion and data feed generation, as well as exposing the data to clients via API calls. This position is part of the RatingsXpress team and is focused on providing clients the critical data they need to make the most informed investment decisions possible. Impact: As a member of the Xpressfeed Team in S&P Global Market Intelligence, you will work with a group of intelligent and visionary engineers to build impactful content management tools for investment professionals across the globe. Our Software Engineers are involved in the full product life cycle, from design through release. You will be expected to participate in application designs, write high-quality code, and innovate on how to improve the overall system performance and customer experience. If you are a talented developer who wants to help drive the next phase for Data Management Solutions at S&P Global, can contribute great ideas, solutions, and code, and understands the value of cloud solutions, we would like to talk to you. What's in it for you: We are currently seeking a Software Developer with a passion for full-stack development. In this role, you will have the opportunity to work on cutting-edge cloud technologies such as Databricks, Snowflake, and AWS, while also engaging in Scala and SQL Server based database development. This position offers a unique opportunity to grow both as a Full Stack Developer and as a Cloud Engineer, expanding your expertise across modern data platforms and backend development. Responsibilities: Analyze, design, and develop solutions within a multi-functional Agile team to support key business needs for the data feeds. Design, implement, and test solutions using AWS EMR for content ingestion. Work on complex SQL Server projects involving high-volume data. Engineer components and common services based on standard corporate development models, languages, and tools. Apply software engineering best practices while also leveraging automation across all elements of solution delivery. Collaborate effectively with technical and non-technical stakeholders. Must be able to document and demonstrate technical solutions by developing documentation, diagrams, code comments, etc. Basic Qualifications: Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field. 3-6 years of experience in application development. Minimum of 2 years of hands-on experience with Scala. Minimum of 2 years of hands-on experience with Microsoft SQL Server. Solid understanding of Amazon Web Services (AWS) and cloud-based development. In-depth knowledge of system architecture, object-oriented programming, and design patterns. Excellent communication skills, with the ability to convey complex ideas clearly both verbally and in writing. Preferred Qualifications: Familiarity with AWS services, EMR, Auto Scaling, EKS. Working knowledge of Snowflake.
Preferred experience in Python development. Familiarity with the Financial Services domain and Capital Markets is a plus. Experience developing systems that handle large volumes of data and require high computational performance. What's In It For You Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology-the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide-so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you-and your career-need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards-small perks can make a big difference. For more information on benefits by country visit: Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings - (Strategic Workforce Planning)
Posted 2 months ago
0.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Description: Skills: AWS EMR. Key Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you. Technical Requirements: Primary skills: Technology - Big Data - Data Processing - MapReduce. Preferred Skills: Technology->Big Data - Data Processing->Map Reduce
Posted 2 months ago
6.0 - 11.0 years
4 - 8 Lacs
Kolkata
Work from Office
Must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage. (OR) SET 2: Must have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, AWS Redshift. Should have demonstrable knowledge and expertise in working with time-series data. Working knowledge of delivering data engineering / data science projects in Industry 4.0 is an added advantage. Should have knowledge of Palantir. Strong problem-solving skills with an emphasis on sustainable and reusable development. Experience using statistical computing languages to manipulate data and draw insights from large data sets: Python/PySpark, pandas, NumPy, seaborn/matplotlib; knowledge of Streamlit.io is a plus. Familiarity with Scala, GoLang, or Java would be an added advantage. Experience with big data tools: Hadoop, Spark, Kafka, etc. Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, Oracle, and NoSQL databases such as Hadoop, Cassandra, MongoDB. Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc. Experience building and optimizing big data pipelines, architectures, and data sets. Strong analytic skills related to working with unstructured datasets. Primary Skills: Provide innovative solutions to the data engineering problems that are faced in the project and solve them with technically superior code and skills. Where possible, document the process of choosing technology or usage of integration patterns and help in creating a knowledge management artefact that can be used for other similar areas. Create and apply best practices in delivering the project with clean code. Should work innovatively and have a sense of proactiveness in fulfilling the project needs. Additional Information: Reporting to Director - Intelligent Insights and Data Strategy. Travel: Must be willing to be deployed at client locations anywhere in the world for long and short terms, and should be flexible to travel on shorter durations within India and abroad.
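Because the posting calls out demonstrable expertise with time-series data alongside pandas, here is a minimal sketch of a typical preparation step: resampling raw readings to hourly aggregates. The file paths and column names are assumptions made for illustration.

# Minimal sketch: resample raw time-series readings to hourly means per sensor with pandas.
import pandas as pd

readings = pd.read_parquet("data/sensor_readings.parquet")   # hypothetical source file
readings["ts"] = pd.to_datetime(readings["ts"])              # hypothetical timestamp column

hourly = (
    readings.set_index("ts")
            .groupby("sensor_id")["value"]    # hypothetical key and measure columns
            .resample("1h")
            .mean()
            .reset_index()
)

hourly.to_parquet("data/sensor_hourly.parquet", index=False)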
Posted 2 months ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate, Seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments. Key Responsibilities: Configure logging and metrics collection. Set up alerts and dashboards using Grafana, Prometheus, etc. Optimize system visibility for performance and security. Required Skills & Qualifications: Familiar with ELK stack, Datadog, New Relic, or Cloud-native monitoring tools. Strong troubleshooting and root cause analysis skills. Knowledge of distributed systems. Soft Skills: Strong troubleshooting and problem-solving skills. Ability to work independently and in a team. Excellent communication and documentation skills. Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
Posted 2 months ago
5.0 - 8.0 years
8 - 18 Lacs
Bengaluru
Hybrid
Technical Skills: Python, PySpark, SQL, Redshift, S3, CloudWatch, Lambda, AWS Glue, EMR, Step Functions, Databricks. Knowledge of a visualization tool will add value. Experience: Should have worked in technical delivery of the above services, preferably in similar organizations, and have good communication skills. Certifications: AWS Data Engineer certification preferred.
Posted 2 months ago