4 - 9 years
14 - 18 Lacs
Noida
Work from Office
Who We Are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.
What you need
* BS in an Engineering or Science discipline, or equivalent experience
* 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role
* Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL)
* Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments
* Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena) and the Snowflake CDW
* Experience working on larger initiatives to build and rationalize large-scale data environments with a wide variety of data pipelines, possibly with internal and external partner integrations, would be a plus
* Willingness to experiment with and learn new approaches and technology applications
* Knowledge of and experience with various relational databases, plus demonstrable proficiency in SQL and in supporting analytics uses and users
* Knowledge of software engineering and agile development best practices
* Excellent written and verbal communication skills
The Brightly culture
We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
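To make the batch-integration requirement concrete, here is a minimal PySpark sketch of the kind of ingestion pipeline described above; the S3 paths and column names are illustrative assumptions, not the employer's actual pipeline, and a production AWS Glue job would add job bookmarks, schema enforcement, and data-quality checks.

```python
# A minimal batch-ingestion sketch; paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch_ingest").getOrCreate()

# Read one day's raw JSON drop from a hypothetical landing bucket.
raw = spark.read.json("s3://example-landing/orders/2024-01-01/")

# Light cleansing: normalize the timestamp, derive a partition key,
# and drop duplicate business keys.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])
)

# Land partitioned Parquet in the lake for downstream Snowflake/Athena use.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/orders/"
)
```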
Posted 2 months ago
4 - 7 years
7 - 11 Lacs
Hyderabad
Work from Office
Responsibilities:
* Design, develop, and maintain scalable data pipelines using Python and AWS Redshift
* Optimize and tune Redshift queries, schemas, and performance for large-scale datasets
* Implement ETL/ELT processes to ensure accurate and timely data availability
* Collaborate with data analysts, engineers, and product teams to understand data requirements
* Ensure data quality, consistency, and integrity across systems
* Automate data workflows and improve pipeline efficiency using scripting and orchestration tools
* Monitor data pipeline performance and troubleshoot issues proactively
* Maintain documentation for data models, pipelines, and system configurations
* Ensure compliance with data governance and security standards
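As a hedged illustration of the first two responsibilities, the sketch below drives a bulk load into Redshift from S3 using the documented COPY command via psycopg2; the cluster endpoint, table, S3 path, and IAM role are hypothetical placeholders.

```python
# A minimal S3-to-Redshift load step; all identifiers below are invented.
import os
import psycopg2

COPY_SQL = """
    COPY staging.events
    FROM 's3://example-bucket/events/2024-01-01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password=os.environ["REDSHIFT_PASSWORD"],
)
try:
    with conn, conn.cursor() as cur:
        cur.execute(COPY_SQL)                    # bulk load straight from S3
        cur.execute("ANALYZE staging.events;")   # refresh planner statistics
finally:
    conn.close()
```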
Posted 2 months ago
12 - 15 years
0 - 0 Lacs
Thiruvananthapuram
Work from Office
About the Role
We're looking for a sharp and experienced Consultant/Architect to guide our clients in strategically adopting Microsoft Fabric, with a specific focus on the design and implementation of the gold (semantic, reporting-ready) and silver (cleansed, integrated) data layers.
Key Responsibilities
* Design end-to-end data architectures in Microsoft Fabric with a focus on the gold and silver layers.
* Collaborate with business units to understand data landscapes and translate requirements into actionable adoption strategies.
* Lead implementations involving Fabric components such as Data Factory, Synapse Data Warehouse, Data Lakehouse, and Power BI datasets.
* Develop and optimize data models for the silver and gold layers, considering performance, scalability, and usability.
* Define and enforce data governance policies and standards within Microsoft Fabric.
* Identify and resolve performance issues in Fabric, especially in query performance and data processing.
* Mentor client and internal teams on best practices and architecture patterns for gold/silver layer development.
* Troubleshoot complex integration and transformation challenges in the Fabric ecosystem.
Required Qualifications
* Deep understanding of modern data warehousing principles.
* Extensive experience designing and implementing data solutions on Microsoft Azure.
* Hands-on expertise with Microsoft Fabric components for building pipelines and analytical models.
* Proven ability to design dimensional and relational data models optimized for reporting (gold layer).
* Strong grasp of ETL/ELT patterns for building integrated data layers (silver layer).
* Experience with data quality checks and governance frameworks.
* Strong SQL skills for data manipulation and querying.
* Proficiency in Power BI and its integration with Fabric datasets.
* Excellent communication and client engagement skills, with the ability to explain technical concepts to non-technical stakeholders.
Preferred Qualifications
* Bachelor's degree in Computer Science, Data Science, or a related field (or equivalent experience).
* Microsoft certifications related to Azure data services or Fabric.
* Experience with real-time data processing and streaming in Fabric.
* Familiarity with data science workflows and their integration into Fabric-based solutions.
* Experience with Azure AI Foundry is a plus.
Required Skills
Fabric, Microsoft Technologies, Research Analysis
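For orientation, here is a minimal sketch of a silver-to-gold transformation as it might look in a Fabric lakehouse notebook (PySpark); the table names are invented, and the built-in `spark` session that Fabric notebooks provide is assumed to be in scope.

```python
# A minimal silver-to-gold sketch; table names are hypothetical, and the
# `spark` session provided by Fabric notebooks is assumed.
from pyspark.sql import functions as F

silver = spark.read.table("silver_orders")  # cleansed, integrated layer

# Gold layer: a reporting-ready aggregate modeled for Power BI consumption.
gold = (
    silver.groupBy("customer_id", F.to_date("order_ts").alias("order_date"))
          .agg(
              F.sum("net_amount").alias("daily_revenue"),
              F.countDistinct("order_id").alias("order_count"),
          )
)

gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_revenue")
```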
Posted 2 months ago
8 - 13 years
25 - 30 Lacs
Bengaluru
Work from Office
Education: A Bachelor's degree in Computer Science, Engineering (B.Tech, BE), or a related field such as MCA (Master of Computer Applications) is required for this role.
Experience: 8+ years in data engineering with a focus on building scalable and reliable data infrastructure.
Skills:
* Language: Proficiency in Java, Python, or Scala. Prior experience in Oil & Gas, Titles & Leases, or Financial Services is a must-have.
* Databases: Expertise in relational and NoSQL databases like PostgreSQL, MongoDB, Redis, and Elasticsearch.
* Data Pipelines: Strong experience in designing and implementing ETL/ELT pipelines for large datasets.
* Tools: Hands-on experience with Databricks, Spark, and cloud platforms.
* Data Lakehouse: Expertise in data modeling, designing Data Lakehouses, and building data pipelines.
* Modern Data Stack: Familiarity with the modern data stack and data governance practices.
* Data Orchestration: Proficient in data orchestration and workflow tools.
* Data Modeling: Proficient in modeling and building data architectures for high-throughput environments.
* Stream Processing: Extensive experience with stream processing technologies such as Apache Kafka.
* Distributed Systems: Strong understanding of distributed systems, scalability, and availability.
* DevOps: Familiarity with DevOps practices and continuous integration/continuous deployment (CI/CD).
* Problem-Solving: Strong problem-solving skills with a focus on scalable data infrastructure.
Key Responsibilities:
This is a role with high expectations of hands-on design and development.
* Design and develop systems for ingestion, persistence, consumption, ETL/ELT, and versioning for different data types (e.g., relational, document, geospatial, graph, time series) in transactional and analytical patterns.
* Drive the development of applications related to data extraction, especially from formats like TIFF, PDF, and others, including OCR and data classification/categorization.
* Analyze and improve the efficiency, scalability, and reliability of our data infrastructure.
* Assist in the design and implementation of robust ETL/ELT pipelines for processing large volumes of data.
* Collaborate with cross-functional scrum teams to respond quickly and effectively to business needs.
* Work closely with data scientists and analysts to define data requirements and develop comprehensive data solutions.
* Implement data quality checks and monitoring to ensure data integrity and reliability across all systems.
* Develop and maintain data models, schemas, and documentation to support data-driven decision-making.
* Manage and scale data infrastructure on cloud platforms, leveraging cloud-native tools and services.
Benefits:
* Salary: Competitive and aligned with local standards.
* Performance Bonus: According to company policy.
* Benefits: Includes medical insurance and group term life insurance.
* Continuous learning and development. 10 recognized public holidays. Parental Leave.
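As an illustration of the stream-processing and lakehouse items above, here is a minimal Structured Streaming sketch from Kafka into a Delta table; the broker, topic, and storage paths are hypothetical, and a Spark/Databricks runtime with the Kafka connector is assumed.

```python
# A minimal Kafka-to-Delta streaming sketch; broker, topic, and paths are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lease_events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")
         .option("subscribe", "lease-events")
         .load()
         .select(
             F.col("key").cast("string"),
             F.col("value").cast("string").alias("payload"),
             "timestamp",
         )
)

# Checkpointed micro-batches give effectively-once delivery into Delta.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/checkpoints/lease-events")
          .outputMode("append")
          .start("/mnt/lake/bronze/lease_events")
)
query.awaitTermination()
```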
Posted 2 months ago
8 - 13 years
6 - 11 Lacs
Gurugram
Work from Office
AHEAD is looking for a Senior Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Senior Data Engineer will be responsible for strategic planning and hands-on engineering of Big Data and cloud environments that support our clients' advanced analytics, data science, and other data platform initiatives. This consultant will design, build, and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. They will be expected to be hands-on technically, but also to present to leadership and lead projects.
The Senior Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern Data Engineering tools/architectures, as well as designing and integrating existing transactional processing systems. As a Senior Data Engineer, you will design and implement data pipelines to enable analytics and machine learning on rich datasets.
Roles and Responsibilities
* Design, build, operationalize, secure, and monitor data processing systems
* Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging the cloud-native toolset
* Implement custom applications using tools such as Kinesis, Lambda, and other cloud-native tools as required to address streaming use cases
* Engineer and support data structures, including but not limited to SQL and NoSQL databases
* Engineer and maintain ELT processes for loading the data lake (Snowflake, Cloud Storage, Hadoop)
* Engineer APIs for returning data from these structures to the Enterprise Applications
* Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions
* Respond to customer/team inquiries and assist in troubleshooting and resolving challenges
* Work with other scrum team members to estimate and deliver work inside of a sprint
* Research data questions, identify root causes, and interact closely with business users and technical resources
Qualifications
* 8+ years of professional technical experience
* 4+ years of hands-on Data Architecture and Data Modelling
* 4+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, and Snowflake
* 4+ years of experience with programming languages such as Python
* 2+ years of experience working in cloud environments (AWS and/or Azure)
* Strong client-facing communication and facilitation skills
Key Skills
Python, Cloud, Linux, Windows, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Snowflake, SQL/RDBMS, OLAP, Data Engineering
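For the Kinesis/Lambda streaming item above, a minimal handler sketch follows; the event shape is AWS's documented Kinesis trigger format, while the downstream write is left as a stub and the return value assumes the ReportBatchItemFailures setting is enabled.

```python
# A minimal Kinesis-triggered Lambda sketch; downstream write is a stub.
import base64
import json

def handler(event, context):
    records = []
    for rec in event["Records"]:
        payload = base64.b64decode(rec["kinesis"]["data"])  # Kinesis base64-encodes data
        records.append(json.loads(payload))

    # Downstream write (e.g., to an S3 staging prefix or Snowflake stage) goes here.
    print(f"processed {len(records)} records")

    return {"batchItemFailures": []}  # report no partial-batch failures
```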
Posted 2 months ago
9 - 12 years
30 - 35 Lacs
Hyderabad
Work from Office
What you will do
Let's do this. Let's change the world. In this vital role, we are seeking a strategic and hands-on Specialist Software Engineer / AI Engineer (Search) to lead the design, development, and deployment of AI-powered search and knowledge discovery solutions across our pharmaceutical enterprise. In this role, you'll manage a team of engineers and work closely with data scientists, oncologists, and domain experts to build intelligent systems that help users across R&D, medical, and commercial functions find relevant, actionable information quickly and accurately.
* Architect and lead the development of scalable, intelligent search systems leveraging NLP, embeddings, LLMs, and vector search
* Own the end-to-end lifecycle of search solutions, from ingestion and indexing to ranking, relevancy tuning, and UI integration
* Build systems that surface scientific literature, clinical trial data, regulatory content, and real-world evidence using semantic and contextual search
* Integrate AI models that improve search precision, query understanding, and result summarization (e.g., generative answers via LLMs)
* Partner with platform teams to deploy search solutions on scalable infrastructure (e.g., Kubernetes, cloud-native services, Databricks, Snowflake)
* Experience in Generative AI on search engines
* Experience in integrating Generative AI capabilities and Vision Models to enrich content quality and user engagement
* Build and own the next generation of content knowledge platforms and other algorithms/systems that create high-quality and unique experiences
* Design and implement advanced AI models for entity matching and data deduplication
* Experience with Generative AI tasks such as content summarization, deduping, and metadata quality
* Research and develop advanced AI algorithms, including Vision Models for visual content analysis
* Implement KPI measurement frameworks to evaluate the quality and performance of delivered models, including those utilizing Generative AI
* Develop and maintain Deep Learning models for data quality checks, visual similarity scoring, and content tagging
* Continually research current and emerging technologies and propose changes where needed
* Implement GenAI solutions, utilize ML infrastructure, and contribute to data preparation, optimization, and performance enhancements
* Manage and mentor a cross-functional engineering team focused on AI, ML, and search infrastructure
* Foster a collaborative, high-performance engineering culture with a focus on innovation and delivery
* Work with domain experts, data stewards, oncologists, and product managers to align search capabilities with business and scientific needs
Basic Qualifications:
* Degree in computer science & engineering preferred, with 9-12 years of software development experience
* Proficient in Spark, Kafka, Snowflake, Delta Lake, Hadoop, Databricks, MongoDB, S3 buckets, ELT & ETL, API integrations, Java
* Proven experience building search systems with technologies like Elasticsearch, Solr, OpenSearch, or vector DBs (e.g., Pinecone, FAISS)
* Hands-on experience with various AI models, GCP search engines, and GCP cloud services
* Strong understanding of NLP, embeddings, transformers, and LLM-based search applications
* Proficient in AI/ML programming with Python, GraphQL, Java crawlers, JavaScript, SQL/NoSQL, Databricks/RDS, data engineering, S3 buckets, DynamoDB
* Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills
* Experience deploying ML services and search infrastructure in cloud environments (AWS, Azure, or GCP)
Preferred Qualifications:
* Experience in AI/ML, Java, REST APIs, Python, React, GraphQL, NLMS, full-stack applications, Solr search
* Experienced with Python FastAPI
* Experience with design patterns, data structures, data modelling, and data algorithms
* Knowledge of ontologies and taxonomies such as MeSH, SNOMED CT, UMLS, or MedDRA
* Familiarity with MLOps, CI/CD for ML, and monitoring of AI models in production
* Experienced with the AWS/Azure platforms, building and deploying code
* Experience with PostgreSQL/MongoDB databases, vector databases for large language models, Databricks or RDS, S3 buckets
* Experience with Google Cloud Search and Google Cloud Storage
* Experience with popular large language models
* Experience with the LangChain or LlamaIndex frameworks for language models
* Experience with prompt engineering and model fine-tuning
* Knowledge of NLP techniques for text analysis and sentiment analysis
* Experience with generative AI or retrieval-augmented generation (RAG) frameworks in a pharma/biotech setting
* Experience in Agile software development methodologies
* Experience in end-to-end testing as part of test-driven development
Good to Have Skills
* Willingness to work on full-stack applications
* Experience working with biomedical or scientific data (e.g., PubMed, clinical trial registries, internal regulatory databases)
Soft Skills:
* Excellent analytical and troubleshooting skills
* Strong verbal and written communication skills
* Ability to work effectively with global, remote teams
* High degree of initiative and self-motivation
* Ability to manage multiple priorities successfully
* Team-oriented, with a focus on achieving team goals
* Strong presentation and public speaking skills
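As a small, hedged illustration of the embedding/vector-search stack named above (sentence-transformer embeddings indexed in FAISS), the sketch below indexes a few invented document strings and runs one semantic query; the model name and corpus are illustrative only.

```python
# Embedding-based semantic search sketch; model name and corpus are illustrative.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Phase II trial results for compound X in NSCLC",
    "Regulatory submission checklist for EMA filings",
    "Real-world evidence summary for oncology indications",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(docs, normalize_embeddings=True)  # unit vectors: dot = cosine

index = faiss.IndexFlatIP(emb.shape[1])              # exact inner-product index
index.add(np.asarray(emb, dtype="float32"))

query = model.encode(
    ["lung cancer clinical trial outcomes"], normalize_embeddings=True
)
scores, ids = index.search(np.asarray(query, dtype="float32"), k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[i]}")
```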
Posted 2 months ago
5 - 9 years
20 - 25 Lacs
Pune, Gurugram, Bengaluru
Hybrid
We're looking for a motivated and detail-oriented Senior Snowflake Developer with strong SQL querying skills and a willingness to learn and grow with our team. As a Senior Snowflake Developer, you will play a key role in developing and maintaining our Snowflake data platform, working closely with our data engineering and analytics teams.
Responsibilities:
* Write ingestion pipelines that are optimized and performant
* Manage a team of Junior Software Developers
* Write efficient and scalable SQL queries to support data analytics and reporting
* Collaborate with data engineers, architects, and analysts to design and implement data pipelines and workflows
* Troubleshoot and resolve data-related issues and errors
* Conduct code reviews and contribute to the improvement of our Snowflake development standards
* Stay up-to-date with the latest Snowflake features and best practices
Requirements:
* 5+ years of experience with Snowflake
* Strong SQL querying skills, including data modeling, data warehousing, and ETL/ELT design
* Advanced understanding of data engineering principles and practices
* Familiarity with Informatica Intelligent Cloud Services (IICS) or similar data integration tools is a plus
* Excellent problem-solving skills, attention to detail, and an analytical mindset
* Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams
Nice to Have:
* Experience using Snowflake Streamlit and Cortex
* Knowledge of data governance, data quality, and data security best practices
* Familiarity with Agile development methodologies and version control systems like Git
* Certification in Snowflake or a related data platform is a plus
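As a minimal sketch of the ingestion-pipeline responsibility, here is a bulk load using the Snowflake Python connector with a COPY INTO from an external stage; the account, warehouse, stage, and table names are invented placeholders.

```python
# A minimal Snowflake ingestion sketch; all identifiers below are hypothetical.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="etl_user",
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO from an external stage is the usual bulk-ingest path.
    cur.execute("""
        COPY INTO raw.orders
        FROM @raw.orders_stage/2024-01-01/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    for row in cur.fetchall():   # one result row per staged file loaded
        print(row)
finally:
    conn.close()
```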
Posted 2 months ago
2 - 3 years
5 - 15 Lacs
Bengaluru
Work from Office
Role & responsibilities
* Design, develop, and maintain scalable ETL/ELT data pipelines
* Work with structured and unstructured data from various sources (APIs, databases, cloud storage, etc.)
* Optimize data workflows and ensure data quality, consistency, and reliability
* Collaborate with cross-functional teams to understand data requirements and deliver solutions
* Maintain and improve our data infrastructure and architecture
* Monitor pipeline performance and troubleshoot issues in real time
Preferred candidate profile
* 2-3 years of experience in data engineering or a similar role
* Proficiency in SQL and Python (or Scala/Java for data processing)
* Experience with ETL tools (e.g., Airflow, dbt, Luigi)
* Familiarity with cloud platforms like AWS, GCP, or Azure
* Hands-on experience with data warehouses (e.g., Redshift, BigQuery, Snowflake)
* Knowledge of distributed data processing frameworks like Spark or Hadoop
* Experience with version control systems (e.g., Git)
* Exposure to data modeling and schema design
* Experience working with CI/CD pipelines for data workflows
* Understanding of data privacy and security practices
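Since the profile calls for Airflow experience, here is a minimal DAG sketch wiring an extract-transform-load sequence; the dag_id and task bodies are invented placeholders (Airflow 2.x `schedule` syntax).

```python
# A minimal Airflow 2.x DAG sketch; dag_id and task bodies are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull the day's batch from the source API / database")

def transform(**_):
    print("clean, deduplicate, and validate the batch")

def load(**_):
    print("write the validated batch to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load  # linear dependency chain
```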
Posted 2 months ago
13 - 20 years
45 - 50 Lacs
Pune
Work from Office
About The Role
Job Title: Solution Architect, VP
Location: Pune, India
Role Description
As a solution architect supporting the individual business-aligned, strategic, or regulatory book of work, you will work closely with the projects' business stakeholders, business analysts, data solution architects, engineers, the lead solution architect, the principal architect, and the Chief Architect to help projects break business requirements into implementable building blocks and design the solution's target architecture. It is the solution architect's responsibility to align the design with the general (enterprise) architecture principles, apply agreed best practices and patterns, and help the engineering teams deliver against the architecture in an event-driven, service-oriented environment.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
* Best-in-class leave policy
* Gender-neutral parental leaves
* 100% reimbursement under childcare assistance benefit (gender neutral)
* Sponsorship for industry-relevant certifications and education
* Employee Assistance Program for you and your family members
* Comprehensive Hospitalization Insurance for you and your dependents
* Accident and Term Life Insurance
* Complimentary health screening for those 35 yrs. and above
Your key responsibilities
* In projects, work with SMEs and stakeholders to derive the individual components of the solution
* Design the target architecture for a new solution or when adding new capabilities to an existing solution
* Assure proper documentation of the High-Level Design and Low-Level Design of the delivered solution
* Quality-assure the delivery against the agreed and approved architecture, i.e., provide delivery guidance and governance
* Prepare the High-Level Design for review and approval by design authorities so projects can proceed into implementation
* Support creation of the Low-Level Design as it is being delivered, to support final go-live
Your skills and experience
* Very proficient at designing/architecting solutions in an event-driven environment leveraging service-oriented principles
* Proficient in Java and the delivery of Spring-based services
* Proficient at building systems in a decoupled, event-driven environment leveraging messaging/streaming, e.g., Kafka
* Very good experience designing and implementing data streaming and data ingest pipelines for heavy throughput while providing different guarantees (at-least-once, exactly-once)
* Very good understanding of non-streaming ETL and ELT approaches for data ingest
* Solid understanding of containerized, distributed systems and of building auto-scalable, stateless services in a cluster (concepts of quorum and consensus)
* Solid understanding of standard RDBM systems, with data-engineering-level proficiency in T-SQL and/or PL/SQL and/or PL/pgSQL, preferably all
* Good understanding of how RDBMSs generally work; specific tuning experience on SQL Server, Oracle, or PostgreSQL is also welcome
* Understanding of modeling/implementing different data modelling approaches, and of their respective pros and cons (e.g., Normalized, Denormalized, Star, Snowflake, DataVault 2.0, ...)
* Strong working experience with GCP architecture is a benefit (Apigee, BigQuery, Pub/Sub, Cloud SQL, DataFlow, ...); an appropriate GCP architecture-level certification even more so
* Experience leveraging other languages is more than welcome (C#, Python)
How we'll support you
* Training and development to help you excel in your career
* Coaching and support from experts in your team
* A culture of continuous learning to aid progression
* A range of flexible benefits that you can tailor to suit your needs
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
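To make the delivery-guarantee point above concrete: with auto-commit disabled and offsets committed only after processing, a consumer gets at-least-once semantics (duplicates are possible, so handlers should be idempotent). A minimal sketch with the kafka-python client follows; the topic and broker names are invented.

```python
# At-least-once consumption sketch (kafka-python): auto-commit is off, and the
# offset is committed only after the work succeeds, so a crash replays the
# message rather than losing it. Topic and broker names are hypothetical.
from kafka import KafkaConsumer

def process(payload: bytes) -> None:
    # Placeholder for the real business logic; it must be idempotent,
    # since at-least-once delivery can replay messages.
    print(f"handled {len(payload)} bytes")

consumer = KafkaConsumer(
    "trade-events",
    bootstrap_servers="broker-1:9092",
    group_id="settlement-service",
    enable_auto_commit=False,       # commit manually, after successful processing
    auto_offset_reset="earliest",
)

for msg in consumer:
    process(msg.value)
    consumer.commit()               # offset advances only once the work is durable
```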
Posted 2 months ago
3 - 5 years
11 - 16 Lacs
Bengaluru
Work from Office
Job Description
Analyze, design, develop, fix, and debug software programs for commercial or end-user applications. Write code, complete programming, and perform testing and debugging of applications.
Career Level - IC2
Responsibilities
* Minimum 4-5 years of hands-on, end-to-end DWH implementation experience using ODI.
* Should have experience in developing ETL processes: ETL control tables, error logging, auditing, data quality, etc. Should be able to implement reusability, parameterization, workflow design, etc.
* Expertise in the Oracle ODI toolset and Oracle PL/SQL; knowledge of the ODI Master and Work repositories.
* Knowledge of data modelling and ETL design.
* Setting up topology, building objects in Designer, monitoring in Operator, different types of KMs, Agents, etc.
* Packaging components and database operations like aggregate, pivot, union, etc., using ODI mappings, error handling, automation using ODI, Load Plans, and migration of objects.
* Design and develop complex mappings, Process Flows, and ETL scripts.
* Must be well versed and hands-on in using and customizing Knowledge Modules (KMs).
* Experience in performance tuning of mappings.
* Ability to design ETL unit test cases and debug ETL mappings.
* Expertise in developing Load Plans and scheduling jobs.
* Ability to design data quality and reconciliation frameworks using ODI.
* Integrate ODI with multiple sources/targets.
* Experience in error recycling/management using ODI and PL/SQL.
* Expertise in database development (SQL/PL-SQL) for PL/SQL-based applications.
* Experience creating PL/SQL packages, procedures, functions, triggers, views, materialized views, and exception handling for retrieving, manipulating, checking, and migrating complex datasets in Oracle.
* Experience in data migration using SQL*Loader and import/export.
* Experience in SQL tuning and optimization using explain plans and SQL trace files.
* Strong knowledge of ELT/ETL concepts, design, and coding.
* Partitioning and indexing strategy for optimal performance.
* Must have good verbal and written communication in English, and good interpersonal, analytical, and problem-solving abilities.
* Should have experience interacting with customers to understand business requirement documents and translate them into ETL specifications and high- and low-level design documents.
* Experience in understanding sophisticated source system data structures, preferably in the Financial Services industry (preferred).
* Ability to work with minimal guidance or supervision in a time-critical environment.
Work with Oracle's world-class technology to develop, implement, and support Oracle's global infrastructure. As a member of the IT organization, assist with the design, development, modification, debugging, and evaluation of programs for use in internal systems within a specific function area. Duties and tasks are standard, with some variation. Completes own role largely independently within defined policies and procedures. BS or equivalent experience in programming on enterprise or department servers or systems.
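The error-recycling idea mentioned above (load the good rows, divert the bad ones to a control table) can be illustrated outside ODI as well; the sketch below uses the python-oracledb driver's batcherrors mode, with invented table and credential names, purely as an illustration of the pattern rather than of the ODI mechanism itself.

```python
# Error-recycling illustration in plain Python (not ODI): batcherrors mode
# loads the good rows and collects the bad ones for a control/error table.
# All table and credential names are hypothetical.
import os
import oracledb

rows = [(1, "ACME"), (2, None), (3, "GLOBEX")]  # row 2 may violate NOT NULL

conn = oracledb.connect(
    user="etl_user",
    password=os.environ["ORA_PASSWORD"],
    dsn="db-host/ORCLPDB1",
)
with conn.cursor() as cur:
    cur.executemany(
        "INSERT INTO stg_accounts (id, name) VALUES (:1, :2)",
        rows,
        batcherrors=True,           # don't fail the whole batch on one bad row
    )
    for err in cur.getbatcherrors():
        # Recycle each failed row into an error table for later reprocessing.
        cur.execute(
            "INSERT INTO etl_error_log (row_offset, error_message) VALUES (:1, :2)",
            [err.offset, err.message],
        )
conn.commit()
```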
Life at Oracle and Equal Opportunity
An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate, while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. In order to nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of Employee Benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as Medical, Life Insurance, access to Retirement Planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, to perform crucial job functions. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.
Disclaimer: Oracle is an Equal Employment Opportunity Employer*. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
*Which includes being a United States Affirmative Action Employer: https://www.oracle.com/corporate/careers/diversity-inclusion/
Posted 2 months ago
5 - 10 years
13 - 18 Lacs
Mumbai
Work from Office
About The Role
Role: Product Manager
Duration: Full Time
Location: Edison, NJ / NYC, NY
Mode: Hybrid (3 days WFO & 2 days WFH)
NEED Private Equity / Fund Administration / Hedge Fund Experience.
A New York City-based Private Equity Fund Administration firm is looking for a product manager to assist in the implementation of client-facing technology products. This is a high-visibility role, with access to senior management and founding members.
Primary Responsibilities Will Include:
* Work closely with SMEs and management to understand product scope
* Document current & future state (e.g., functionality, data flow, reports, UX, etc.)
* Solicit and document requirements from business users
* Design and document future enhancements to the client's Private Exchange platform
* Own the platform transaction flow process, design, and development roadmap
* Assist the client onboarding & implementation teams with technology adoption
* Define automated and manual testing processes to ensure quality of data
* Liaise with technologists to further understand and document processes
* Define Success (aka Acceptance) Criteria to ensure all requirements are fulfilled
* Build and execute comprehensive software test plans
* Write user guide documentation
* Manage technology deployments and conversions
* Support technology solutions
* Manage sub-projects
Job Requirements, Skills, Education and Experience:
* Bachelor's degree required; a degree in Accounting, Finance, or Economics a plus
* 5+ years of business analysis or business systems analysis experience
* 2+ years of financial services experience; Private Equity experience a plus
* Extensive experience with Private Equity systems such as Allvue, Investran, or eFront
* Performance, Analytics, or Business Intelligence experience a plus
* Familiarity with Agile SDLC and/or Project Management methods a plus
* Experience with data integration and mapping (e.g., ETL, ELT, etc.) a plus
* Familiarity with development languages a plus (e.g., VBA, SQL, Python, Java, C++, etc.)
* Extensive Microsoft Office skills: Excel, Word, Visio, PowerPoint
* Strong verbal and written communication skills
* Strong attention to detail and accuracy
* Ability to learn on the job quickly and apply learning to recommend solutions to issues
* Ability to quickly adapt to changes in prioritization, processes, and procedures
* Superior problem-solving, judgement, and decision-making skills
* Ability to think independently, prioritize, multi-task, and meet deadlines
Posted 2 months ago