1.0 - 4.0 years
6 - 10 Lacs
Mumbai
Work from Office
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Engineers for a unique job opportunity to work with industry leaders.

Who can be a part of the community? We are looking for top-tier Data Engineers who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
* Pay above market standards.
* The role is going to be contract-based, with project timelines from 2 to 12 months, or freelancing.
* Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
* Remote (highly likely)
* Onsite at a client location
* Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
* Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
* Develop real-time and batch data pipelines to support analytics and machine learning.
* Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
* Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
* Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
* Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
* Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
* Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
* Contributions to open-source data engineering communities.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
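For candidates gauging the expected skill level, the streaming work described above can be pictured with a minimal sketch: consume events from one Kafka topic, validate them, and forward clean records to another. This is not Soul AI's stack; the broker address, topic names, and validation rule are all illustrative.

```python
# Minimal real-time ingestion sketch: consume JSON events from Kafka,
# validate, and forward to a curated topic. All names are illustrative.
import json

from kafka import KafkaConsumer, KafkaProducer  # pip install kafka-python

consumer = KafkaConsumer(
    "raw-events",                      # hypothetical source topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    # Drop records missing required fields instead of failing the pipeline.
    if "user_id" in event and "timestamp" in event:
        producer.send("clean-events", event)  # hypothetical sink topic
```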
Posted 1 month ago
1.0 - 4.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Engineers for a unique job opportunity to work with industry leaders.

Who can be a part of the community? We are looking for top-tier Data Engineers who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
* Pay above market standards.
* The role is going to be contract-based, with project timelines from 2 to 12 months, or freelancing.
* Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
* Remote (highly likely)
* Onsite at a client location
* Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
* Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
* Develop real-time and batch data pipelines to support analytics and machine learning.
* Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
* Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
* Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
* Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
* Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
* Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
* Contributions to open-source data engineering communities.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
Posted 1 month ago
3.0 - 8.0 years
13 - 18 Lacs
Mumbai
Work from Office
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Architects for a unique job opportunity to work with industry leaders.

Who can be a part of the community? We are looking for top-tier Data Architects who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
* Pay above market standards.
* The role is going to be contract-based, with project timelines from 2 to 12 months, or freelancing.
* Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
* Remote (highly likely)
* Onsite at a client location
* Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
* Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
* Develop real-time and batch data pipelines to support analytics and machine learning.
* Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
* Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
* Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
* Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
* Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
* Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
* Contributions to open-source data engineering communities.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
Posted 1 month ago
1.0 - 4.0 years
6 - 10 Lacs
Kolkata
Work from Office
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Engineers for a unique job opportunity to work with industry leaders.

Who can be a part of the community? We are looking for top-tier Data Engineers who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
* Pay above market standards.
* The role is going to be contract-based, with project timelines from 2 to 12 months, or freelancing.
* Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
* Remote (highly likely)
* Onsite at a client location
* Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
* Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
* Develop real-time and batch data pipelines to support analytics and machine learning.
* Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
* Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
* Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
* Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
* Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
* Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
* Contributions to open-source data engineering communities.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
Posted 1 month ago
3.0 - 8.0 years
13 - 18 Lacs
Kolkata
Work from Office
Step into the world of AI innovation with the Experts Community of Soul AI (by Deccan AI). We are looking for India's top 1% Data Architects for a unique job opportunity to work with industry leaders.

Who can be a part of the community? We are looking for top-tier Data Architects who are proficient in designing, building, and optimizing data pipelines. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
* Pay above market standards.
* The role is going to be contract-based, with project timelines from 2 to 12 months, or freelancing.
* Be a part of an elite community of professionals who can solve complex AI challenges.

Work location could be:
* Remote (highly likely)
* Onsite at a client location
* Deccan AI's office (Hyderabad or Bangalore)

Responsibilities:
* Design and architect enterprise-scale data platforms, integrating diverse data sources and tools.
* Develop real-time and batch data pipelines to support analytics and machine learning.
* Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments.
* Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices.

Required Skills:
* Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP).
* Proficiency in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA).
* Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana).

Nice to Have:
* Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions.
* Contributions to open-source data engineering communities.

What are the next steps?
1. Register on our Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: clear the assessments once you are shortlisted.
4. Profile matching and project allocation: be patient while we align your skills and preferences with an available project.

Skip the noise. Focus on opportunities built for you!
Posted 1 month ago
10.0 - 15.0 years
30 - 35 Lacs
Pune
Work from Office
Job Title: Engineer, AVP. Location: Pune, India.

Role Description: A Passion to Perform. It's what drives us. More than a claim, this describes the way we do business. We're committed to being the best financial services provider in the world, balancing passion with precision to deliver superior solutions for our clients. This is made possible by our people: agile minds, able to see beyond the obvious and act effectively in an ever-changing global business landscape. As you'll discover, our culture supports this. Diverse, international, and shaped by a variety of different perspectives, we're driven by a shared sense of purpose. At every level, agile thinking is nurtured. And at every level, agile minds are rewarded with competitive pay, support, and opportunities to excel.

What we'll offer you:
* 100% reimbursement under the childcare assistance benefit (gender neutral)
* Sponsorship for industry-relevant certifications and education
* Accident and term life insurance

Your key responsibilities:
* Designing, implementing, and operationalising Java-based software components for the Transaction Monitoring Data Controls applications.
* Contributing to DevOps capabilities to ensure maximum automation of our applications.
* Leveraging best practices to build data-driven decisions.
* Collaboration across TDI areas such as Cloud Platform, Security, Data, and Risk & Compliance to create optimum solutions for the business, increasing re-use, creating best practice, and sharing knowledge.

Your skills and experience:
* 10+ years of hands-on experience in Java development (Java 11+) in either of: Spring Boot/microservices/APIs/transactional databases, or Java data processing frameworks such as Apache Spark, Apache Beam, or Flink.
* Experience contributing to software design and architecture, including consideration of non-functional requirements (e.g., reliability, scalability, observability, testability).
* Understanding of relevant architecture styles and their trade-offs, e.g., microservices, monolith, batch.
* Professional experience in building applications on one of the cloud platforms (Azure, AWS, or GCP) and usage of their major infrastructure components (software-defined networks, IAM, compute, storage, etc.).
* Professional experience with at least one data storage technology (e.g., Oracle, BigQuery).
* Experience designing and implementing distributed enterprise applications.
* Professional experience with at least one CI/CD tool such as TeamCity, Jenkins, or GitHub Actions.
* Professional experience of Agile build and deployment practices (DevOps).
* Professional experience defining interface and internal data models, both logical and physical.
* Experience working with a globally distributed team requiring remote interaction across locations, time zones, and diverse cultures.
* Excellent communication skills (verbal and written).

Ideal to have:
* Professional experience working with Java components on GCP (e.g., App Engine, GKE, Cloud Run)
* Professional experience working with Red Hat OpenShift and Apache Spark
* Professional experience working with Kotlin
* Experience working in one or more large data integration projects/products
* Experience and knowledge of data engineering topics such as partitioning and optimisation based on different goals (e.g., retrieval performance vs. insert performance)
* A passion for problem solving with strong analytical capabilities
* Experience related to any of payment scanning, fraud checking, integrity monitoring, or payment lifecycle management
* Experience working with Drools or a similar product
* Data modelling experience
* Understanding of data security principles, data masking, and implementation considerations

Education/Qualifications: Degree from an accredited college or university with a concentration in Engineering or Computer Science.

How we'll support you
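The role asks for Java, but the read-transform-aggregate shape shared by the frameworks it lists (Spark, Beam, Flink) is language-agnostic. As a rough illustration only, here is a minimal Apache Beam pipeline using its Python SDK; the transactions, threshold, and pipeline steps are invented for the example and are not part of this posting.

```python
# Minimal Apache Beam sketch (Python SDK): keep large transactions and
# sum amounts per store. Data and threshold are illustrative only.
import apache_beam as beam

rows = [
    ("t1", "store-7", 1250.0),   # (txn_id, store, amount) — made-up records
    ("t2", "store-3", 80.0),
    ("t3", "store-7", 4100.0),
]

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(rows)
        | "KeepLarge" >> beam.Filter(lambda row: row[2] > 1000.0)
        | "KeyByStore" >> beam.Map(lambda row: (row[1], row[2]))
        | "SumPerStore" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```

The same pipeline graph would look structurally identical in the Java SDK; only the host language changes.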
Posted 1 month ago
3.0 - 5.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Mandatory Skills: Flink. Experience: 3-5 years.
Posted 1 month ago
3.0 - 7.0 years
12 - 18 Lacs
Chennai
Work from Office
Responsibilities:
* Design, develop, and maintain data pipelines using Python, SQL, and Kinesis
* Optimize performance through Apache Spark and Flink
* Collaborate with cross-functional teams on CI/CD processes
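As a rough sketch of the Python + Kinesis combination this posting names, the snippet below pushes one record into a Kinesis stream with boto3. The stream name, region, and payload are assumptions for illustration, not details from the role.

```python
# Illustrative sketch: write a pipeline record into AWS Kinesis via boto3.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")  # region assumed

record = {"order_id": 42, "amount": 1999.0}  # made-up payload
kinesis.put_record(
    StreamName="orders-stream",                 # hypothetical stream
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=str(record["order_id"]),       # keeps one order's events ordered
)
```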
Posted 1 month ago
5.0 - 7.0 years
4 - 8 Lacs
Pune, Chennai, Bengaluru
Work from Office
We are seeking an experienced Java + Rust Developer to join a full-time project with a leading global digital services company. As part of Awign Experts' workforce, you will work on modern cloud-native applications involving microservices, GraphQL, real-time streaming, and cross-platform systems using Rust and Java. The ideal candidate will be comfortable with technologies like Docker, Kubernetes, NATS/WSS, Flink, AWS, and GCP. You will be a proactive contributor, capable of delivering results individually or as part of a team. Our client values dynamic, self-reliant developers with strong analytical skills and a passion for continuous learning.

Location: Bengaluru, Pune, Chennai, Gurugram, Hyderabad, Mohali, Jaipur, Nagpur, Indore, Chandigarh, Mangalore, Trivandrum, Mysore
Posted 1 month ago
7.0 - 12.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII: At Target, we have a timeless purpose and a proven strategy, and that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Team Overview: Every time a guest enters a Target store or browses Target.com or the app, they experience the impact of Target's investments in technology and innovation. We're the technologists behind one of the most loved retail brands, delivering joy to millions of our guests, team members, and communities. Join our global in-house technology team of more than 5,000 engineers, data scientists, architects, and product managers striving to make Target the most convenient, safe, and joyful place to shop. We use agile practices and leverage open-source software to adapt and build best-in-class technology for our team members and guests, and we do so with a focus on diversity and inclusion, experimentation, and continuous learning.

Position Overview: As a Lead Data Engineer, you will serve as the technical anchor for the engineering team, responsible for designing and developing scalable, high-performance data solutions. You will own and drive data architecture that supports both functional and non-functional business needs, ensuring reliability, efficiency, and scalability. Your expertise in big data technologies, distributed systems, and cloud platforms will help shape the engineering roadmap and best practices for data processing, analytics, and real-time data serving. You will play a key role in architecting and optimizing data pipelines using Hadoop, Spark, Scala/Java, and cloud technologies to support enterprise-wide data initiatives. Additionally, experience with API development for serving low-latency data and Customer Data Platforms (CDP) will be a strong plus.

Key Responsibilities:
* Architect and build scalable, high-performance data pipelines and distributed data processing solutions using Hadoop, Spark, Scala/Java, and cloud platforms (AWS/GCP/Azure).
* Design and implement real-time and batch data processing solutions, ensuring data is efficiently processed and made available for analytical and operational use.
* Develop APIs and data services to expose low-latency, high-throughput data for downstream applications, enabling real-time decision-making.
* Optimize and enhance data models, workflows, and processing frameworks to improve performance, scalability, and cost-efficiency.
* Drive data governance, security, and compliance best practices.
* Collaborate with data scientists, product teams, and business stakeholders to understand requirements and deliver data-driven solutions.
* Lead the design, implementation, and lifecycle management of data services and solutions.
* Stay up to date with emerging technologies and drive adoption of best practices in big data engineering, cloud computing, and API development.
* Provide technical leadership and mentorship to engineering teams, promoting best practices in data engineering and API design.

About You:
* 7+ years of experience in data engineering, software development, or distributed systems.
* Expertise in big data technologies such as Hadoop, Spark, and distributed processing frameworks.
* Strong programming skills in Scala and/or Java (Python is a plus).
* Experience with cloud platforms (AWS, GCP, or Azure) and their data ecosystems (e.g., S3, BigQuery, Databricks, EMR, Snowflake, etc.).
* Proficiency in API development using REST, GraphQL, or gRPC to serve real-time and batch data.
* Experience with real-time and streaming data architectures (Kafka, Flink, Kinesis, etc.).
* Strong knowledge of data modeling, ETL pipeline design, and performance optimization.
* Understanding of data governance, security, and compliance in large-scale data environments.
* Experience with Customer Data Platforms (CDP) or customer-centric data processing is a strong plus.
* Strong problem-solving skills and ability to work in complex, unstructured environments.
* Excellent communication and collaboration skills, with experience working in cross-functional teams.

Why Join Us?
* Work with cutting-edge big data, API, and cloud technologies in a fast-paced, collaborative environment.
* Influence and shape the future of data architecture and real-time data services at Target.
* Solve high-impact business problems using scalable, low-latency data solutions.
* Be part of a culture that values innovation, learning, and growth.

Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
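To make the batch side of the pipeline work described above concrete, here is a minimal PySpark sketch (the posting prefers Scala/Java, with Python as a plus): aggregate daily revenue per store from raw orders. The data, columns, and paths are invented for the example, not Target's schema.

```python
# Minimal PySpark batch sketch: daily revenue per store from raw orders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-sales").getOrCreate()

orders = spark.createDataFrame(  # inline stand-in for a real source table
    [("s1", "2024-05-01 10:02:00", 120.0),
     ("s1", "2024-05-01 16:40:00", 75.5),
     ("s2", "2024-05-02 09:15:00", 300.0)],
    ["store_id", "order_ts", "amount"],
)
daily = (
    orders.groupBy("store_id", F.to_date("order_ts").alias("order_date"))
          .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)
daily.show()
# A production job would read/write a lake instead, e.g.:
# spark.read.parquet("s3a://bucket/raw/orders/") ... .write.parquet(...)
```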
Posted 1 month ago
8.0 - 13.0 years
0 - 3 Lacs
Bangalore Rural, Chennai, Bengaluru
Work from Office
Greetings from Sight Spectrum Technologies! We are hiring for a Data Lead position.

Experience: 8+ years
Location: Bangalore, Chennai

Required Skills:
* Proficiency in multiple programming languages, ideally Python
* Proficiency in at least one cluster computing framework (preferably Spark; alternatively Flink or Storm)
* Proficiency in at least one cloud data lakehouse platform (preferably AWS data lake services or Databricks; alternatively Hadoop), at least one relational data store (Postgres, Oracle, or similar), and at least one NoSQL data store (Cassandra, Dynamo, MongoDB, or similar)
* Proficiency in at least one scheduling/orchestration tool (preferably Airflow; alternatively AWS Step Functions or similar)
* Proficiency with data structures, data serialization formats (JSON, Avro, Protobuf, or similar), big-data storage formats (Parquet, Iceberg, or similar), data processing methodologies (batch, micro-batching, and stream), one or more data modelling techniques (Dimensional, Data Vault, Kimball, Inmon, etc.), Agile methodology (develop PI plans and roadmaps), TDD (or BDD), and CI/CD tools (Jenkins, Git)
* Strong organizational, problem-solving, and critical thinking skills; strong documentation skills

Preferred Skills:
* Proficiency in IaC (preferably Terraform; alternatively AWS CloudFormation)

If interested, kindly share your resume to roopavahini@sightspectrum.in
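As an illustration of the orchestration skill this posting prefers (Airflow), below is a minimal DAG with an extract task feeding a load task. The DAG id, schedule, and task bodies are placeholders, not details of the actual project.

```python
# Minimal Airflow sketch: a daily two-step DAG. Bodies are stubs.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling from source")      # placeholder for real extraction logic

def load():
    print("writing to lakehouse")     # placeholder for real load logic

with DAG(
    dag_id="daily_ingest",            # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load               # load runs only after extract succeeds
```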
Posted 1 month ago
8.0 - 13.0 years
85 - 90 Lacs
Noida
Work from Office
About the Role: We are looking for a Staff Engineer (Real-time Data Processing) to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life:
* Architect, build, and maintain a large-scale real-time data processing platform.
* Collaborate with data scientists, product managers, and engineering teams to define system architecture and design.
* Optimize systems for scalability, reliability, and low-latency performance.
* Implement robust monitoring, alerting, and failover mechanisms to ensure high availability.
* Evaluate and integrate open-source and third-party streaming frameworks.
* Contribute to the overall engineering strategy and promote best practices for stream and event processing.
* Mentor junior engineers and lead technical initiatives.

What You Need:
* 8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms.
* Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming.
* Proficiency in Java, Scala, Python, or Go for building high-performance services.
* Strong understanding of distributed systems, event-driven architecture, and microservices.
* Experience with Kafka, Pulsar, or other distributed messaging systems.
* Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
* Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry.
* Experience with cloud-native architectures and services (AWS, GCP, or Azure).
* Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
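The windowed aggregations at the heart of Flink, Kafka Streams, and Spark Streaming all reduce to the same idea; as a framework-free sketch, the snippet below buckets events into fixed tumbling windows and counts per key. The events and window size are made up for illustration.

```python
# Framework-free sketch of tumbling-window counting, the core idea behind
# windowed aggregations in Flink/Kafka Streams/Spark Streaming.
from collections import defaultdict

WINDOW_SECONDS = 60

def window_start(event_time: float) -> int:
    """Align an event timestamp to the start of its 60-second window."""
    return int(event_time // WINDOW_SECONDS) * WINDOW_SECONDS

counts: dict[tuple[int, str], int] = defaultdict(int)

events = [  # (event_time_epoch_seconds, key) — illustrative data
    (1700000005.0, "login"), (1700000042.0, "login"), (1700000075.0, "click"),
]
for ts, key in events:
    counts[(window_start(ts), key)] += 1

for (start, key), n in sorted(counts.items()):
    print(f"window={start} key={key} count={n}")
```

A real engine adds what this sketch omits: out-of-order handling via watermarks, state checkpointing, and window eviction.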
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Responsibilities:
* Develop and maintain Kafka-based data pipelines for real-time processing.
* Implement Kafka producer and consumer applications for efficient data flow.
* Optimize Kafka clusters for performance, scalability, and reliability.
* Design and manage Grafana dashboards for monitoring Kafka metrics.
* Integrate Grafana with Elasticsearch or other data sources.
* Set up alerting mechanisms in Grafana for Kafka system health monitoring.
* Collaborate with DevOps, data engineers, and software teams.
* Ensure security and compliance in Kafka and Grafana implementations.

Requirements:
* 8+ years of experience configuring Kafka, Elasticsearch, and Grafana.
* Strong understanding of Apache Kafka architecture and Grafana visualization.
* Proficiency in .NET or Python for Kafka development.
* Experience with distributed systems and message-oriented middleware.
* Knowledge of time-series databases and monitoring tools.
* Familiarity with data serialization formats like JSON.
* Expertise in Azure platforms and Kafka monitoring tools.
* Good problem-solving and communication skills.

Mandate: creating Kafka dashboards; Python/.NET.

Note: The candidate must be an immediate joiner.
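One way to picture the Kafka-plus-Grafana monitoring loop this role describes: expose a Prometheus counter from a Python consumer and let a Grafana dashboard chart it. A minimal sketch assuming a local broker and a hypothetical topic; none of the names come from the posting.

```python
# Sketch: count consumed Kafka messages with a Prometheus counter that a
# Grafana dashboard could chart. Broker, topic, and port are assumptions.
import json

from kafka import KafkaConsumer                            # pip install kafka-python
from prometheus_client import Counter, start_http_server  # pip install prometheus-client

MESSAGES = Counter("pipeline_messages_total", "Messages consumed", ["topic"])

start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics

consumer = KafkaConsumer(
    "payments",                          # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    MESSAGES.labels(topic=message.topic).inc()
    # ... downstream processing would go here ...
```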
Posted 1 month ago
4.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Who we are:

About Stripe: Stripe is a financial infrastructure platform for businesses. Millions of companies, from the world's largest enterprises to the most ambitious startups, use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.

About the Team: The Reporting Platform Data Foundations group maintains and evolves the core systems that power reporting data for Stripe's users. We're responsible for Aqueduct, the data ingestion and processing platform that powers core reporting data for millions of businesses on Stripe. We integrate with the latest Data Platform tooling, such as Falcon for real-time data. Our goal is to provide a robust, scalable, and efficient data infrastructure that enables clear and timely insights for Stripe's users.

What you'll do: As a Software Engineer on the Reporting Platform Data Foundations group, you will lead efforts to improve and redesign core data ingestion and processing systems that power reporting for millions of Stripe users. You'll tackle complex challenges in data management, scalability, and system architecture.

Responsibilities:
* Design and implement a new backfill model for reporting data that can handle hundreds of millions of row additions and updates efficiently.
* Revamp the end-to-end experience for product teams adding or changing API-backed datasets, improving ergonomics and clarity.
* Enhance the Aqueduct Dependency Resolver system, responsible for determining what critical data to update for Stripe's users based on events. Areas include error management, observability, and delegation of issue resolution to product teams.
* Lead integration with the latest Data Platform tooling, such as Falcon for real-time data, while managing deprecation of older systems.
* Implement and improve data warehouse management practices, ensuring data freshness and reliability.
* Collaborate with product teams to understand their reporting needs and data requirements.
* Design and implement scalable solutions for data ingestion, processing, and storage.
* Onboard, spin up, and mentor engineers, and set the group's technical direction and strategy.

Who you are: We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum Requirements:
* 8+ years of professional experience writing high-quality, production-level code or software programs.
* Extensive experience in designing and implementing large-scale data processing systems.
* Strong background in distributed systems and data pipeline architectures.
* Proficiency in at least one modern programming language (e.g., Go, Java, Python, Scala).
* Experience with big data technologies (e.g., Hadoop, Flink, Spark, Kafka, Pinot, Trino, Iceberg).
* Solid understanding of data modeling and database systems.
* Excellent problem-solving skills and ability to tackle complex technical challenges.
* Strong communication skills and ability to work effectively with cross-functional teams.
* Experience mentoring other engineers and driving technical initiatives.

Preferred Qualifications:
* Experience with real-time data processing and streaming systems.
* Knowledge of data warehouse technologies and best practices.
* Experience in migrating legacy systems to modern architectures.
* Contributions to open-source projects or technical communities.

In-office expectations: Office-assigned Stripes in most of our locations are currently expected to spend at least 50% of the time in a given month in their local office or with users. This expectation may vary depending on role, team, and location. For example, Stripes in Stripe Delivery Center roles in Mexico City, Mexico and Bengaluru, India work 100% from the office. Also, some teams have greater in-office attendance requirements, to appropriately support our users and workflows, which the hiring manager will discuss. This approach helps strike a balance between bringing people together for in-person collaboration and learning from each other, while supporting flexibility when possible.

Pay and benefits: Stripe does not yet include pay ranges in job postings in every country. Stripe strongly values pay transparency and is working toward pay transparency globally.
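The backfill challenge described above, handling hundreds of millions of row updates, usually comes down to chunked, resumable processing. The sketch below shows that pattern in outline only; it is not Stripe's design, and fetch_rows/upsert_rows are hypothetical stand-ins for the real read and idempotent-write steps.

```python
# Chunked, resumable backfill sketch: process id ranges in batches and
# checkpoint progress so a restart resumes where it left off.
import json
import pathlib

CHECKPOINT = pathlib.Path("backfill_checkpoint.json")
BATCH = 100_000
MAX_ID = 1_000_000  # illustrative table size

def fetch_rows(lo: int, hi: int) -> list[dict]:
    return []  # placeholder: SELECT ... WHERE id >= lo AND id < hi

def upsert_rows(rows: list[dict]) -> None:
    pass       # placeholder: idempotent write into the reporting store

start = json.loads(CHECKPOINT.read_text())["next"] if CHECKPOINT.exists() else 0
for lo in range(start, MAX_ID, BATCH):
    upsert_rows(fetch_rows(lo, lo + BATCH))
    CHECKPOINT.write_text(json.dumps({"next": lo + BATCH}))  # commit progress
```

Because the write step is idempotent, re-running a batch after a crash is safe; the checkpoint only limits wasted work.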
Posted 1 month ago
2.0 - 5.0 years
7 - 11 Lacs
Gurugram
Work from Office
About NCR Atleos

Overview: Data is at the heart of our global financial network. In fact, the ability to consume, store, analyze, and gain insight from data has become a key component of our competitive advantage. Our goal is to build and maintain a leading-edge data platform that provides highly available, consistent data of the highest quality for all users of the platform, including our customers, operations teams, and data scientists. We focus on evolving our platform to deliver exponential scale to NCR Atleos, powering our future growth. Data & AI Engineers at NCR Atleos experience working at one of the largest and most recognized financial companies in the world, while being part of a software development team responsible for next-generation technologies and solutions. Our engineers design and build large-scale data storage, computation, and distribution systems. They partner with data and AI experts to deliver high-quality AI solutions and derived data to our consumers. We are looking for Data & AI Engineers who like to innovate and seek complex problems. We recognize that strength comes from diversity and will embrace your unique skills, curiosity, drive, and passion while giving you the opportunity to grow technically and as an individual. Engineers looking to work in the areas of orchestration, data modelling, data pipelines, APIs, storage, distribution, distributed computation, consumption, and infrastructure are ideal candidates.

Responsibilities: As a Data Engineer, you will be joining a Data & AI team transforming our global financial network and improving the quality of the products and services we provide to our customers. You will be responsible for designing, implementing, and maintaining data pipelines and systems to support the organization's data needs. Your role will involve collaborating with data scientists, analysts, and other stakeholders to ensure data accuracy, reliability, and accessibility.

Key Responsibilities:
* Data Pipeline Development: Design, build, and maintain scalable and efficient data pipelines to collect, process, and store structured and unstructured data from various sources.
* Data Integration: Integrate data from multiple sources such as databases, APIs, flat files, and streaming platforms into centralized data repositories.
* Data Modeling: Develop and optimize data models and schemas to support analytical and operational requirements. Implement data transformation and aggregation processes as needed.
* Data Quality Assurance: Implement data validation and quality assurance processes to ensure the accuracy, completeness, and consistency of data throughout its lifecycle (a small validation sketch follows below).
* Performance Optimization: Monitor and optimize data processing and storage systems for performance, reliability, and cost-effectiveness. Identify and resolve bottlenecks and inefficiencies in data pipelines, and leverage automation and AI to improve overall operations.
* Infrastructure Management: Manage and configure cloud-based or on-premises infrastructure components such as databases, data warehouses, compute clusters, and data processing frameworks.
* Collaboration: Collaborate with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders to understand data requirements and deliver solutions that meet business objectives.
* Documentation and Best Practices: Document data pipelines, systems architecture, and best practices for data engineering. Share knowledge and provide guidance to colleagues on data engineering principles and techniques.
* Continuous Improvement: Stay updated with the latest technologies, tools, and trends in data engineering and recommend improvements to existing processes and systems.

Qualifications and Skills:
* Bachelor's degree or higher in Computer Science, Engineering, or a related field.
* Proven experience in data engineering or related roles, with a strong understanding of data processing concepts and technologies.
* Mastery of programming languages such as Python, Java, or Scala.
* Knowledge of database systems such as SQL, NoSQL, and data warehousing solutions.
* Knowledge of stream processing technologies such as Kafka or Apache Beam.
* Experience with distributed computing frameworks such as Apache Spark, Hadoop, or Apache Flink.
* Experience deploying pipelines on cloud platforms such as AWS, Azure, or Google Cloud Platform.
* Experience implementing enterprise systems in production settings for AI and natural language processing.
* Exposure to self-supervised learning, transfer learning, and reinforcement learning is a plus.
* Full-stack experience to build best-fit solutions leveraging Large Language Models (LLMs) and Generative AI, with a focus on privacy, security, and fairness.
* Good engineering skills to design AI output with nodes and nested nodes in JSON, array, or HTML formats for as-is consumption and display on dashboards/portals.
* Strong problem-solving skills and attention to detail.
* Excellent communication and teamwork abilities.
* Experience with containerization and orchestration tools such as Docker and Kubernetes.
* Familiarity with data visualization tools such as Tableau or Power BI.

EEO Statement: NCR Atleos is an equal-opportunity employer. It is NCR Atleos' policy to hire, train, promote, and pay associates based on their job-related qualifications, ability, and performance, without regard to race, color, creed, religion, national origin, citizenship status, sex, sexual orientation, gender identity/expression, pregnancy, marital status, age, mental or physical disability, genetic information, medical condition, military or veteran status, or any other factor protected by law.

Statement to Third-Party Agencies: To all recruitment agencies: NCR Atleos only accepts resumes from agencies on the NCR Atleos preferred supplier list. Please do not forward resumes to our applicant tracking system, NCR Atleos employees, or any NCR Atleos facility. NCR Atleos is not responsible for any fees or charges associated with unsolicited resumes.
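The small validation sketch referenced above: check records against a required-field schema before they enter a pipeline. The schema and field names are invented for the example.

```python
# Data quality sketch: validate required fields and basic types per record.
REQUIRED = {"txn_id": str, "amount": float, "currency": str}  # illustrative schema

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    for field, expected in REQUIRED.items():
        if field not in record:
            problems.append(f"missing {field}")
        elif not isinstance(record[field], expected):
            problems.append(f"{field} should be {expected.__name__}")
    return problems

assert validate({"txn_id": "t1", "amount": 9.5, "currency": "USD"}) == []
assert validate({"txn_id": "t2", "amount": "9.5"}) == [
    "amount should be float", "missing currency",
]
```

In practice this role would reach for a framework (e.g., schema checks built into the pipeline tooling), but the contract is the same: reject or quarantine bad records early, with a reason attached.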
Posted 1 month ago
7.0 - 11.0 years
0 - 0 Lacs
Gurugram
Work from Office
Key Responsibilities:
* Design, develop, and deploy enterprise-grade applications using Java Spring Boot on Red Hat OpenShift.
* Build high-throughput, real-time data pipelines using Apache Kafka and ensure efficient data flow and processing.
* Implement complex transformation, routing, and orchestration logic using Apache Camel and Enterprise Integration Patterns (EIPs); a language-agnostic sketch of the routing pattern follows below.
* Develop microservices that interact with multiple protocols and data sources (HTTP, JMS, SQL/NoSQL databases).
* Integrate and utilize Apache Flink (or similar frameworks) for complex event processing and stream analytics.
* Embed observability and monitoring using tools such as Prometheus, Grafana, ELK, and OpenTelemetry to ensure system health and performance.
* Work closely with AI/ML teams to integrate intelligent features or enable data-driven services.
* Champion best practices by writing unit/integration tests, conducting code reviews, and supporting CI/CD pipelines.
* Analyze, troubleshoot, and optimize application and pipeline performance in production environments.

Location: Gurugram

Key Skills: Java, Spring Boot, Apache Kafka, Apache Flink, Apache Camel, Kubernetes, Docker.
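As promised above: the Enterprise Integration Patterns this role centers on are language-agnostic even though the job itself is Java/Apache Camel. A plain-Python sketch of the content-based router pattern, with made-up message fields and queue names:

```python
# Content-based router sketch: pick a destination from message content.
# Fields and queue names are hypothetical, not from the posting.
def route(message: dict) -> str:
    """Return the destination queue for a message (content-based router)."""
    if message.get("type") == "payment":
        return "payments-queue"
    if message.get("priority", 0) >= 5:
        return "fast-lane-queue"
    return "default-queue"

assert route({"type": "payment"}) == "payments-queue"
assert route({"type": "audit", "priority": 7}) == "fast-lane-queue"
assert route({"type": "audit"}) == "default-queue"
```

In Camel, the same logic would be a `choice()/when()/otherwise()` route definition; the pattern, not the syntax, is the point.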
Posted 1 month ago
11.0 - 21.0 years
50 - 100 Lacs
Bengaluru
Hybrid
Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments. As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.

Responsibilities: As a Principal Data Engineer, you will be responsible for:
* Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads.
* Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions.
* Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases.
* Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments.
* Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation).
* Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows.
* Ensuring data security and privacy compliance.
* Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams.

What We're Looking For (Minimum Qualifications):
* 10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform.
* Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines.
* Strong programming skills in Java for building scalable data applications.
* Hands-on experience with ETL tools and orchestration systems.
* Solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, and performance tuning.

What Will Make You Stand Out (Preferred Qualifications):
* Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows.
* Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform.
* Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA).
* Strong ability to translate business goals into technical architecture and mentor teams through delivery.
* Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility.
Posted 1 month ago
1.0 - 6.0 years
7 - 11 Lacs
Mumbai
Work from Office
The Role: We are looking for a highly motivated and experienced engineer to join our team in developing the next-generation AI-agent-enhanced communications platform, capable of seamlessly integrating and expanding across channels such as voice calls, mobile applications, texting, email, and social media posts. As a unified communication platform, it enables message delivery to customers and internal staff across several channels, including email, SMS, in-app messaging, and social media. This platform is utilized by applications that cover areas such as discovery, sales, orders, ownership, and service across all business sectors, including Vehicle, Energy, Insurance, and more. The platform guarantees the effective delivery of marketing campaigns and interactions between advisors and customers.

Responsibilities:
* Design, develop, and implement scalable applications that involve problem solving.
* Must have: leverage technologies like Golang, Apache Kafka, Postgres, and OpenSearch; experience integrating with LLMs and inferring responses.
* Nice to have: Java, Apache Flink, ClickHouse.
* Promote software engineering best practices via code reviews, building tools, and documentation.
* Leverage your existing skills while learning and implementing new, open-source technologies as Tesla grows.
* Work with product managers, content producers, QA engineers, and release engineers to own your solution from development to production.
* Define and develop unit tests and unit test libraries to ensure code development is robust and production ready.
* Drive software process improvements that enable progressively increased team efficiency.

Requirements:
* BS or MS in Computer Science or an equivalent discipline.
* Expert experience in developing scalable Golang applications, including SQL and NoSQL databases and other open-source technologies.
* Ability to design software architecture based on business requirements, strategy, and priorities.
* Good unit testing and integration testing practices.
* Experience with message queue architecture.
* Experience with Docker and Kubernetes.
* Agile/Scrum software development process experience.
Posted 1 month ago
8.0 - 10.0 years
7 - 10 Lacs
Bengaluru
Work from Office
The Digital and eCommerce team currently operates several B2B websites and direct digital sales channels via a globally deployed cloud-based platform that is a growth engine for org's life science business. We provide a comprehensive catalog of all products, enabling our customers to find and purchase products as well as get detailed scientific information on those products.

Essential Job Functions:
* Work as part of an Agile development team, taking ownership of one or more services.
* Provide leadership to the Agile development team, driving technical designs to support business goals.
* Ensure the entire team exemplifies excellence in design, code, test, and operation.
* Lead by example, embracing change and fostering a growth-and-learning culture on the team.
* Mentor team members through code reviews and design reviews.
* Take a lead role, working with product owners to help refine the backlog, breaking down features and epics into executable stories.
* Maintain a high-quality software mindset, making sure that the code you write works.
* Design and implement robust data pipelines to ingest, process, and store large volumes of data from various sources.
* Stay updated with the latest technologies and trends in data engineering and propose innovative solutions.

Qualifications:

Education: Bachelor's/Master's degree in computer science or equivalent.

Mandatory Skills:
* 8-10+ years of hands-on software engineering experience
* Recent experience in Java, Kotlin, Oracle, Postgres, data handling, Spring, and Spring Boot
* Experience in developing REST services
* Experience in unit test frameworks
* Ability to provide solutions based on business requirements
* Ability to collaborate with cross-functional teams
* Ability to work with global teams and a flexible work schedule
* Excellent problem-solving skills and a customer-centric mindset
* Excellent communication skills

Preferred Skills:
* Experience with microservices, CI/CD, event-oriented architectures, and distributed systems
* Experience with cloud environments (e.g., Google Cloud Platform, Azure, Amazon Web Services)
* Experience leading product-oriented engineering development teams is a plus
* Familiarity with web technologies (e.g., JavaScript, HTML, CSS), data manipulation (e.g., SQL), and version control systems (e.g., GitHub, GitLab)
* Familiarity with DevOps practices/principles, Agile/Scrum methodologies, CI/CD pipelines, and the product development lifecycle
* Familiarity with modern web APIs and full-stack frameworks
* Experience with Docker, Kubernetes, Kafka, and in-memory caching is a plus
* Experience with Apache Airflow, Apache Flink, and Google BigQuery is a plus
* Experience developing eCommerce systems, especially B2B eCommerce, is a plus
Posted 1 month ago
5.0 - 8.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Job Description: We are looking for a highly skilled and experienced Full Stack Developer with deep expertise in microservices architecture, AWS, and modern JavaScript frameworks. The ideal candidate will have a strong background in building scalable applications and working with cloud-native technologies.

Key Responsibilities:
* Design and develop microservices-based solutions using best practices and design patterns
* Build and consume RESTful APIs and GraphQL endpoints
* Develop scalable backend services using Node.js and Next.js
* Work with containerization tools like Docker and serverless principles on AWS
* Implement CI/CD pipelines and automation scripts
* Perform contract testing using PactFlow
* Monitor applications using tools such as New Relic and Datadog
* Follow agile delivery methodologies with story slicing and iterative planning
* Design and work with event-driven and event-sourcing architecture
* Ensure code quality through unit testing and automation practices

Required Skills:
* Architecture & Development: microservices principles and design patterns; event-driven/event-sourcing architecture; RESTful APIs and GraphQL
* Programming Languages: JavaScript, TypeScript
* Frameworks & Tools: Node.js, Next.js, Jest for unit testing
* Cloud & DevOps: AWS (EC2, ECS, S3, SNS, SQS, Lambda, API Gateway, CloudWatch); AWS CDK and other DevOps tools; Docker and serverless architecture; Jenkins, Buildkite (CI/CD pipelines)
* Database: DynamoDB, Redis
* Practices: iterative agile delivery, story slicing, continuous delivery, contract testing (PactFlow), automation testing and scripting
* Monitoring: New Relic, Datadog, or similar observability tools

Preferred Qualifications:
* Experience in a fast-paced, agile development environment
* Strong problem-solving and communication skills
* Ability to collaborate effectively with cross-functional teams
Posted 1 month ago
4.0 - 9.0 years
6 - 11 Lacs
Hyderabad
Work from Office
Working days: Monday to Friday (WFO). Timings: 9 am to 6 pm.

Desired Skills and Expertise:
* Strong experience and mathematical understanding in one or more of Natural Language Understanding, Computer Vision, Machine Learning, and Optimization
* Proven track record in effectively building and deploying ML systems using frameworks such as PyTorch, TensorFlow, Keras, scikit-learn, etc.
* Expertise in modular, typed, and object-oriented Python programming
* Proficiency with core data science languages (such as Python, R, Scala), and familiarity and flexibility with data systems (e.g., SQL, NoSQL, knowledge graphs)
* Experience with financial data analysis, time series forecasting, and risk modeling
* Knowledge of financial regulations and compliance requirements in the fintech industry
* Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization technologies (Docker, Kubernetes)
* Understanding of blockchain technology and its applications in fintech
* Experience with real-time data processing and streaming analytics (e.g., Apache Kafka, Apache Flink)
* Excellent communication skills with a desire to work in multidisciplinary teams
* Ability to explain complex technical concepts to non-technical stakeholders
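For the time-series forecasting skill in the list above, a hedged, minimal sketch: fit a linear model on lag features with scikit-learn and forecast one step ahead. The series is synthetic and the lag count arbitrary; real financial or risk modeling would need far more care.

```python
# Minimal time-series sketch: one-step-ahead forecast from lag features.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.5, 1.0, size=200))  # synthetic daily values

LAGS = 3  # arbitrary choice for the example
X = np.column_stack([series[i : len(series) - LAGS + i] for i in range(LAGS)])
y = series[LAGS:]  # each target is the value right after its three lags

model = LinearRegression().fit(X[:-20], y[:-20])        # hold out last 20 points
print("holdout R^2:", model.score(X[-20:], y[-20:]))
print("next-step forecast:", model.predict(series[-LAGS:].reshape(1, -1))[0])
```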
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Chennai
Hybrid
Job Summary: We are looking for a highly skilled Backend Data Engineer to join our growing FinTech team. In this role, you will design and implement robust data models and architectures, build scalable data ingestion pipelines, and ensure data quality across financial datasets. You will play a key role in enabling data-driven decision-making by developing efficient and secure data infrastructure tailored to the fast-paced FinTech environment.

Key Responsibilities:
* Design and implement scalable data models and data architecture to support financial analytics, risk modeling, and regulatory reporting.
* Build and maintain data ingestion pipelines using Python or Java to process high-volume, high-velocity financial data from diverse sources.
* Lead data migration efforts from legacy systems to modern cloud-based platforms.
* Develop and enforce data validation processes to ensure accuracy, consistency, and compliance with financial regulations.
* Create and manage task schedulers to automate data workflows and ensure timely data availability (see the scheduling sketch below).
* Collaborate with product, engineering, and data science teams to deliver reliable and secure data solutions.
* Optimize data processing for performance, scalability, and cost-efficiency in a cloud environment.

Required Skills & Qualifications:
* Proficiency in Python and/or Java for backend data engineering tasks.
* Strong experience in data modelling, ETL/ELT pipeline development, and data architecture.
* Hands-on experience with data migration and transformation in financial systems.
* Familiarity with task scheduling tools (e.g., Apache Airflow, Cron, Luigi).
* Solid understanding of SQL and experience with relational and NoSQL databases.
* Knowledge of data validation frameworks and best practices in financial data quality.
* Experience with cloud platforms (AWS, GCP, or Azure), especially in data services.
* Understanding of data security, compliance, and regulatory requirements in FinTech.

Preferred Qualifications:
* Experience with big data technologies (e.g., Spark, Kafka, Hadoop).
* Familiarity with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
* Exposure to financial data standards (e.g., FIX, ISO 20022) and regulatory frameworks (e.g., GDPR, PCI-DSS).
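The scheduling sketch referenced in the responsibilities above: a lightweight alternative to Airflow or cron using the Python `schedule` library. The job body and run time are placeholders.

```python
# Lightweight task-scheduler sketch with the `schedule` library
# (pip install schedule). Job body and time are placeholders.
import time

import schedule

def nightly_ingest():
    print("running ingestion and validation")  # stand-in for the real workflow

schedule.every().day.at("02:00").do(nightly_ingest)

while True:            # a long-running process checks for due jobs
    schedule.run_pending()
    time.sleep(60)     # wake once a minute
```

For anything beyond a single host, the Airflow-style tools named in the posting add what this lacks: retries, backfills, dependency graphs, and monitoring.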
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Kochi
Work from Office
Develop user-friendly web applications using Java and React.js while ensuring high performance. Design, develop, test, and deploy robust and scalable applications. Build and consume RESTful APIs. Collaborate with the design and development teams to translate UI/UX design wireframes into functional components. Optimize applications for maximum speed and scalability. Stay up to date with the latest Java and React.js trends, techniques, and best practices. Participate in code reviews to maintain code quality and ensure alignment with coding standards. Identify and address performance bottlenecks and other issues as they arise. Help us shape the future of event-driven technologies, including contributing to Apache Kafka, Strimzi, Apache Flink, Vert.x, and other relevant open-source projects. Collaborate within a dynamic team environment to comprehend and dissect intricate requirements for event processing solutions. Translate architectural blueprints into actualized code, employing your technical expertise to implement innovative and effective solutions. Conduct comprehensive testing of the developed solutions, ensuring their reliability, efficiency, and seamless integration. Provide ongoing support for the implemented applications, responding promptly to customer inquiries, resolving issues, and optimizing performance. Serve as a subject matter expert, sharing insights and best practices related to product development, fostering knowledge sharing within the team. Continuously monitor the evolving landscape of event-driven technologies, remaining updated on the latest trends and advancements. Collaborate closely with cross-functional teams, including product managers, designers, and developers, to ensure a holistic and harmonious product development process. Take ownership of technical challenges and lead your team to ensure successful delivery, using your problem-solving skills to overcome obstacles. Mentor and guide junior developers, nurturing their growth and development by providing guidance, knowledge transfer, and hands-on training. Engage in agile practices, contributing to backlog grooming, sprint planning, stand-ups, and retrospectives to facilitate effective project management and iteration. Foster a culture of innovation and collaboration, contributing to brainstorming sessions and offering creative ideas to push the boundaries of event processing solutions. Maintain documentation for the developed solutions, ensuring comprehensive and up-to-date records for future reference and knowledge sharing. Be involved in building and orchestrating containerized services.

Required education: Bachelor's degree
Preferred education: Bachelor's degree

Required technical and professional expertise:
* Proven 5+ years of experience as a full-stack developer (Java and React.js) with a strong portfolio of previous projects
* Proficiency in Java, JavaScript, HTML, CSS, and related web technologies
* Familiarity with RESTful APIs and their integration into applications
* Knowledge of modern CI/CD pipelines and tools like Jenkins and Travis
* Strong understanding of version control systems, particularly Git
* Good communication skills and the ability to articulate technical concepts to both technical and non-technical team members
* Familiarity with containerization and orchestration technologies like Docker and Kubernetes for deploying event processing applications
* Proficiency in troubleshooting and debugging
* Exceptional problem-solving and analytical abilities, with a knack for addressing technical challenges
* Ability to work collaboratively in an agile and fast-paced development environment
* Leadership skills to guide and mentor junior developers, fostering their growth and skill development
* Strong organizational and time management skills to manage multiple tasks and priorities effectively
* Adaptability to stay current with evolving event-driven technologies and industry trends
* Customer-focused mindset, with a dedication to delivering solutions that meet or exceed customer expectations
* Creative thinking and an innovation mindset to drive continuous improvement and explore new possibilities
* Collaborative and team-oriented approach to work, valuing open communication and diverse perspectives

Preferred technical and professional expertise
Posted 1 month ago
4.0 - 7.0 years
9 - 12 Lacs
Pune
Hybrid
So, what's the role all about? As a Senior Software Engineer at NiCE, you will specialize in designing, developing, and maintaining applications and systems using the Java programming language, playing a critical role in building scalable, robust, and high-performing applications for a variety of industries, including finance, healthcare, technology, and e-commerce.

How will you make an impact?
* Working knowledge of unit testing
* Working knowledge of user stories or use cases
* Working knowledge of design patterns or equivalent experience
* Working knowledge of object-oriented software design
* Team player

Have you got what it takes?
* Bachelor's degree in Computer Science, Business Information Systems, or a related field, or equivalent work experience, is required.
* 4+ years (SE) of experience in software development.
* Well-established technical problem-solving skills.
* Experience in Java, Spring Boot, and microservices.
* Experience with Kafka, Kinesis, KDA, and Apache Flink.
* Experience with Kubernetes operators, Grafana, and Prometheus.
* Experience with AWS technology, including EKS, EMR, S3, Kinesis, Lambdas, Firehose, IAM, CloudWatch, etc.

You will have an advantage if you also have:
* Experience with Snowflake or any DWH solution.
* Excellent communication, problem-solving, and decision-making skills.
* Experience with databases.
* Experience in CI/CD, Git, GitHub Actions, and Jenkins-based pipeline deployments.
* Strong experience in SQL.

What's in it for you? Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 6965
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 1 month ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Platform Engineer to build and maintain scalable, secure, and reliable data infrastructure for analytics and real-time processing.

Key Responsibilities:
* Design and manage data pipelines, storage layers, and ingestion frameworks.
* Build platforms for batch and streaming data processing (Spark, Kafka, Flink).
* Optimize data systems for scalability, fault tolerance, and performance.
* Collaborate with data engineers, analysts, and DevOps to enable data access.
* Enforce data governance, access controls, and compliance standards.

Required Skills & Qualifications:
* Proficiency with distributed data systems (Hadoop, Spark, Kafka, Airflow).
* Strong SQL and experience with cloud data platforms (Snowflake, BigQuery, Redshift).
* Knowledge of data warehousing, lakehouse architectures, and ETL/ELT pipelines.
* Experience with infrastructure as code and automation.
* Familiarity with data quality, security, and metadata management.

Soft Skills:
* Strong troubleshooting and problem-solving skills.
* Ability to work independently and in a team.
* Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies
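As one concrete picture of the batch-plus-streaming platform work described above, the sketch below uses Spark Structured Streaming to land Kafka events into object storage. It assumes the spark-sql-kafka connector package is on the classpath; broker, topic, and paths are illustrative, not from this role.

```python
# Spark Structured Streaming sketch: land raw Kafka events into storage.
# Requires the spark-sql-kafka connector package; all names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "events")                        # hypothetical topic
    .load()
)
query = (
    events.selectExpr("CAST(value AS STRING) AS body")    # raw payload as text
    .writeStream.format("parquet")
    .option("path", "s3a://example-bucket/bronze/events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .start()
)
query.awaitTermination()
```

The checkpoint location is what makes the stream restartable without duplicating or losing batches, the same durability concern the posting's fault-tolerance bullet raises.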
Posted 1 month ago