5.0 years
0 Lacs
India
On-site
Job Summary: Senior Engineer 2 (SDET)
Location: New Delhi
Division: Ticketmaster Sport International Engineering
Line Manager: Andrew French
Contract Terms: Permanent

THE TEAM
Ticketmaster Sport is the global leader in sports ticketing. From the smallest clubs to the biggest leagues and tournaments, we are trusted as their ticketing partner. You will be joining the Ticketmaster Sport International Engineering division, which is dedicated to the creation and maintenance of industry-standard ticketing software solutions. Our software is relied upon by our clients to manage and sell their substantial ticketing inventories. Our clients include some of the highest-profile clubs and organisations in sport, and they expect reliability, quality, and performance. We provide an extensive catalogue of hosted services, including back-office tooling, public-facing web sales channels, and other services and APIs. The team you will join is closely involved in all of these areas. The division comprises distributed software development teams working together in a highly collaborative environment. You will be joining our expanding engineering team based in New Delhi.

THE JOB
You will be joining a Microsoft .NET development team as a Senior Quality Assurance Engineer. The team is responsible for data engineering in the Sport platform. This includes developing back-end systems which integrate with other internal Ticketmaster systems as well as with our external business partners. You will work with event-driven systems, message queueing, API development, and much more besides. There is a tremendous opportunity for you to make a difference. We are looking for QA engineers who can help us drive our platform forward from a quality assurance point of view, as well as act as mentors for more junior members of the team.
You will work very closely with the team lead to ensure the quality of our software and to assist in planning and decision-making. Apart from standard manual testing activities, you will help improve our automated test suites and be involved in performance testing. In essence, your job will be to ensure our software solutions are of the highest quality, robustness, and performance.

WHAT YOU WILL BE DOING
- Design, build, and maintain scalable and reusable test automation frameworks using C# .NET and Selenium.
- Collaborate with developers, product managers, and QA to understand requirements and build comprehensive test plans.
- Define, develop, and implement quality assurance practices, procedures, and test plans.
- Create, execute, and maintain automated functional, regression, integration, and performance tests.
- Ensure high code quality and testing standards across the team through code reviews and best practices.
- Investigate test failures, diagnose bugs, and file detailed bug reports.
- Produce test and quality reports.
- Integrate test automation with CI/CD pipelines (GitLab, Azure DevOps, Jenkins).
- Operate effectively within an organisation with teams spread across the globe.
- Work effectively within a dynamic team environment to define and advocate for QA standards and best practices that ensure the highest level of quality.

TECHNICAL SKILLS
Must have:
- 5+ years of experience in test automation development, preferably in an SDET role.
- Strong hands-on experience with C# .NET and Selenium WebDriver.
- Experience with tools like NUnit, SpecFlow, or similar test libraries.
- Solid understanding of object-oriented programming (OOP) and software design principles.
- Experience developing and maintaining custom automation frameworks from scratch.
- Proficiency in writing clear, concise, and comprehensive test cases and test plans.
- Experience working in Scrum teams within an Agile methodology.
- Experience developing regression and functional test plans and managing defects.
- Ability to understand business requirements and identify scenarios for automated and manual testing.
- Experience in performance testing using Gatling.
- Experience working with Git and CI/CD pipelines.
- Experience with web service testing (e.g., RESTful services), including test automation with REST Assured/Postman.
- Proficiency working with relational databases such as MSSQL.
- A deep understanding of web protocols and standards (e.g., HTTP, REST).
- A strong problem-solving and detail-oriented mindset.

Nice to have:
- Exposure to performance testing tools.
- Testing enterprise applications deployed to cloud environments such as AWS.
- Experience with static code analysis tools like SonarQube.
- Building test infrastructure using containerisation technologies such as Docker, and working with continuous delivery or continuous release pipelines.
- Experience in microservice development.
- Experience with Octopus Deploy.
- Experience with TestRail.
- Experience with event-driven architectures and messaging patterns and practices.
- Experience with Kafka, AWS SQS, or other similar technologies.

YOU (BEHAVIOURAL SKILLS)
- Excellent communication and interpersonal skills. We work with people all over the globe using English as a shared language. As a senior engineer you will be expected to help managers make decisions by describing problems and proposing solutions, and to respond positively to challenge.
- Excellent problem-solving skills.
- Desire to take on responsibility and to grow as a quality assurance software engineer.
- Enthusiasm for technology and a desire to communicate that to your fellow team members.
- The ability to pick up any ad-hoc technology and run with it.
- Continuous curiosity about new technologies on the horizon.

LIFE AT TICKETMASTER
We are proud to be a part of Live Nation Entertainment, the world's largest live entertainment company.
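The framework work described above typically centres on the Page Object pattern: each page of the application gets a class that hides locators and exposes actions, so tests stay readable when the UI changes. The role uses C# .NET and Selenium WebDriver, but the pattern is language-agnostic; here is a minimal sketch in Python using a stand-in driver so it runs without a browser (all class names and locators are hypothetical, not from the posting):

```python
class FakeElement:
    """Stand-in for a Selenium WebElement: records interactions."""
    def __init__(self):
        self.value = ""
        self.clicked = False

    def send_keys(self, text):
        self.value = text

    def click(self):
        self.clicked = True


class FakeDriver:
    """Stand-in for a Selenium WebDriver so the sketch runs without a browser."""
    def __init__(self):
        self.fields = {}

    def find_element(self, locator):
        return self.fields.setdefault(locator, FakeElement())


class LoginPage:
    """Page Object: one class per page, hiding locators behind actions."""
    USERNAME, PASSWORD, SUBMIT = "#user", "#pass", "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.find_element(self.USERNAME).send_keys(user)
        self.driver.find_element(self.PASSWORD).send_keys(password)
        self.driver.find_element(self.SUBMIT).click()


driver = FakeDriver()
LoginPage(driver).login("alice", "s3cret")
```

If a locator changes, only `LoginPage` is edited; every test that calls `login()` is untouched, which is the maintainability payoff the posting's "reusable test automation frameworks" phrase points at.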
Our vision at Ticketmaster is to connect people around the world to the live events they love. As the world's largest ticket marketplace and the leading global provider of enterprise tools and services for the live entertainment business, we are uniquely positioned to successfully deliver on that vision. We do it all with an intense passion for Live and an inspiring and diverse culture driven by accessible leaders, attentive managers, and enthusiastic teams. If you're passionate about live entertainment like we are, and you want to work at a company dedicated to helping millions of fans experience it, we want to hear from you.

Our work is guided by our values:
- Reliability: We understand that fans and clients rely on us to power their live event experiences, and we rely on each other to make it happen.
- Teamwork: We believe individual achievement pales in comparison to the level of success that can be achieved by a team.
- Integrity: We are committed to the highest moral and ethical standards on behalf of the countless partners and stakeholders we represent.
- Belonging: We are committed to building a culture in which all people can be their authentic selves and have an equal voice and opportunities to thrive.

EQUAL OPPORTUNITIES
We are passionate and committed to our people and go beyond the rhetoric of diversity and inclusion. You will be working in an inclusive environment and be encouraged to bring your whole self to work. We will do all that we can to help you successfully balance your work and home life. As a growing business we will encourage you to develop your professional and personal aspirations, enjoy new experiences, and learn from the talented people you will be working with. It's talent that matters to us, and we encourage applications from people irrespective of their gender, race, sexual orientation, religion, age, disability status, or caring responsibilities.
Posted 18 hours ago
7.0 - 9.0 years
6 - 8 Lacs
Hyderābād
On-site
General information
Country: India
State: Telangana
City: Hyderabad
Job ID: 45479
Department: Development

Description & Requirements
The Senior Java Developer is responsible for architecting and developing advanced Java solutions. This role involves leading the design and implementation of microservice architectures with Spring Boot, optimizing services for performance and scalability, and ensuring code quality. The Senior Developer will also mentor junior developers and collaborate closely with cross-functional teams to deliver comprehensive technical solutions.

Essential Duties:
- Lead the development of scalable, robust, and secure Java components and services.
- Architect and optimize microservice solutions using Spring Boot.
- Translate customer requirements into comprehensive technical solutions.
- Conduct code reviews and maintain high code quality standards.
- Optimize and scale microservices for performance and reliability.
- Collaborate effectively with cross-functional teams to innovate and develop solutions.
- Lead projects and mentor engineers in best practices and innovative solutions.
- Coordinate with customer- and client-facing teams for effective solution delivery.

Basic Qualifications:
- Bachelor's degree in Computer Science or a related field.
- 7-9 years of experience in Java development.
- Expertise in designing and implementing microservices with Spring Boot.
- Extensive experience applying design patterns and system design principles, with expertise in event-driven and domain-driven design methodologies.
- Extensive experience with multithreading and asynchronous and defensive programming.
- Proficiency in MongoDB, SQL databases, and S3 data storage.
- Experience with Kafka, Kubernetes, AWS services, and the AWS SDK.
- Hands-on experience with Apache Spark.
- Strong knowledge of Linux, Git, and Docker.
- Familiarity with Agile methodologies and tools like Jira and Confluence.
- Excellent communication and leadership skills.
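The qualifications above pair "asynchronous" with "defensive" programming: run work concurrently, but validate inputs and contain failures so one bad record cannot poison a batch. The posting's stack is Java, but the idea can be sketched compactly in Python with a thread pool (the order records and field names are invented for illustration):

```python
import concurrent.futures

def process_order(order):
    # Defensive programming: validate inputs before acting on them,
    # so a malformed record fails loudly instead of corrupting results.
    if not isinstance(order.get("qty"), int) or order["qty"] <= 0:
        raise ValueError(f"invalid qty in {order!r}")
    return order["qty"] * order["price"]

orders = [
    {"qty": 2, "price": 10.0},
    {"qty": 1, "price": 5.5},
    {"qty": -3, "price": 4.0},  # bad record: rejected, not silently processed
]

totals, errors = [], []
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process_order, o) for o in orders]
    for fut in concurrent.futures.as_completed(futures):
        try:
            totals.append(fut.result())
        except ValueError as exc:
            errors.append(str(exc))
```

The same shape appears in Java as a `CompletableFuture` per record with validation up front; the defensive part is that failures are collected per record rather than aborting the whole batch.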
Preferred Qualifications:
- Experience with Spark using Spring Boot.
- Familiarity with the C4 Software Architecture Model.
- Experience using tools like Lucidchart for architecture and flow diagrams.

About Infor
Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com

Our Values
At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, and self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve, now and in the future. We have a relentless commitment to a culture based on PBM™. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees.

Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law.
If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
Posted 18 hours ago
12.0 years
5 - 9 Lacs
Hyderābād
On-site
Job Description

Overview
PepsiCo Data BI & Integration Platforms is seeking an experienced Cloud Platform Databricks SME, responsible for overseeing platform administration, security, new NPI tool integrations, migrations, platform maintenance, and other platform administration activities on Azure/AWS. The ideal candidate will have hands-on experience with Azure/AWS services: Infrastructure as Code (IaC), platform provisioning & administration, cloud network design, cloud security principles, and automation.

Responsibilities
The Databricks Subject Matter Expert (SME) plays a pivotal role in administration, security best practices, platform sustain support, new tool adoption, cost optimization, and supporting new patterns and design solutions using the Databricks platform. Here's a breakdown of typical responsibilities:

Core Technical Responsibilities
- Architect and optimize big data pipelines using Apache Spark, Delta Lake, and Databricks-native tools.
- Design scalable data ingestion and transformation workflows, including batch and streaming (e.g., Kafka, Spark Structured Streaming).
- Create integration guidelines to configure and integrate Databricks with existing security tools relevant to data access control.
- Implement data security and governance using Unity Catalog, access controls, and data classification techniques.
- Support migration of legacy systems to Databricks on cloud platforms like Azure, AWS, or GCP.
- Manage cloud platform operations with a focus on FinOps support, optimizing resource utilization, cost visibility, and governance across multi-cloud environments.

Collaboration & Advisory
- Act as a technical advisor to data engineering and analytics teams, guiding best practices and performance tuning.
- Partner with architects and business stakeholders to align Databricks solutions with enterprise goals.
- Lead proof-of-concept (PoC) initiatives to demonstrate Databricks capabilities for specific use cases.
Strategic & Leadership Contributions
- Mentor junior engineers and promote knowledge sharing across teams.
- Contribute to platform adoption strategies, including training, documentation, and internal evangelism.
- Stay current with Databricks innovations and recommend enhancements to existing architectures.

Specialized Expertise (Optional but Valuable)
- Machine learning & AI integration using MLflow, AutoML, or custom models.
- Cost optimization and workload sizing for large-scale data processing.
- Compliance and audit readiness for regulated industries.

Qualifications
- Bachelor's degree in Computer Science.
- At least 12 years of experience in IT cloud infrastructure, architecture, and operations, including security, with at least 5 years in a platform admin role.
- Strong understanding of data security principles and best practices.
- Expertise in the Databricks platform, its security features, Unity Catalog, and data access control mechanisms.
- Experience with data classification and masking techniques.
- Strong understanding of cloud cost management, with hands-on experience in usage analytics, budgeting, and cost optimization strategies across multi-cloud platforms.
- Strong knowledge of cloud architecture, design, and deployment principles and practices, including microservices, serverless, containers, and DevOps.
- Deep expertise in Azure/AWS big data & analytics technologies, including Databricks, real-time data ingestion, data warehouses, serverless ETL, NoSQL databases, DevOps, Kubernetes, virtual machines, web/function apps, and monitoring and security tools.
- Deep expertise in Azure/AWS networking and security fundamentals, including network endpoints & network security groups, firewalls, external/internal DNS, load balancers, virtual networks, and subnets.
- Proficiency in scripting and automation tools, such as PowerShell, Python, Terraform, and Ansible.
- Excellent problem-solving, analytical, and communication skills, with the ability to explain complex technical concepts to non-technical audiences.
- Certifications in Azure/AWS/Databricks platform administration, networking, and security are preferred.
- Strong self-organization, time management, and prioritization skills.
- A high level of attention to detail, excellent follow-through, and reliability.
- Strong collaboration, teamwork, and relationship-building skills across multiple levels and functions in the organization.
- Ability to listen and establish rapport and credibility as a strategic partner vertically within the business unit or function, as well as with leadership and functional teams.
- Strategic thinker focused on business-value results that utilize technical solutions.
- Strong communication skills in writing, speaking, and presenting.
- Able to work effectively in a multi-tasking environment.
- Fluent in English.
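The qualifications above call out data classification and masking techniques. The core idea, replacing a direct identifier with a salted, irreversible token while preserving the parts needed for analytics, can be sketched without any Databricks-specific API (the salt, field choice, and 12-character truncation are illustrative assumptions, not a production scheme):

```python
import hashlib

def mask_email(email: str, salt: str = "demo-salt") -> str:
    """Pseudonymize an email: hash the local part with a salt so the raw
    identifier is unrecoverable, while keeping the domain usable for
    aggregate analytics (e.g. counting sign-ups per provider)."""
    local, _, domain = email.partition("@")
    digest = hashlib.sha256((salt + local).encode()).hexdigest()[:12]
    return f"{digest}@{domain}"

masked = mask_email("ada.lovelace@example.com")
```

In a Databricks setting the same transform would typically run as a UDF or a Unity Catalog column mask; the point here is only the masking pattern: deterministic (the same input always maps to the same token, so joins still work) but one-way.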
Posted 18 hours ago
7.0 years
0 Lacs
India
On-site
About Us:
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

MatchMove Culture:
We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.
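The responsibilities above mention data observability via Great Expectations. The essence of that style of check, declare an expectation against a batch and get back a structured pass/fail result that can feed alerting, can be sketched in plain Python (the `txn_id`/`amount` records are invented; a real pipeline would use the Great Expectations library against a Spark or Pandas batch):

```python
def expect_no_nulls(rows, column):
    """Minimal data-quality check in the spirit of a Great Expectations
    expectation: report which rows violate a non-null constraint,
    rather than raising on the first bad record."""
    failures = [i for i, row in enumerate(rows) if row.get(column) is None]
    return {"column": column, "success": not failures, "failing_rows": failures}

# Hypothetical batch of reconciliation records.
batch = [
    {"txn_id": "T1", "amount": 120.0},
    {"txn_id": "T2", "amount": None},   # would trip an alert in a real pipeline
    {"txn_id": "T3", "amount": 88.5},
]
result = expect_no_nulls(batch, "amount")
```

Returning a result object instead of raising is the key design choice: the orchestrator can route a failed expectation to alerting and quarantine the batch, while a passing result lets the pipeline proceed, which is what "observable data pipelines" means in practice.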
Personal Data Protection Act: By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporation has service arrangements, in each case for all purposes in connection with your job application, and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
Posted 18 hours ago
3.0 years
15 - 22 Lacs
Gurugram, Haryana, India
Remote
Experience: 3+ years
Salary: INR 1500000-2200000 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (payroll and compliance to be managed by LINEN.Cloud)
(Note: This is a requirement for one of Uplers' clients - LINEN.Cloud)

What do you need for this opportunity?
Must-have skills: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes

LINEN.Cloud is looking for: Java Developer
Function: Technical Management → Engineering Management; Software Engineering → Backend Development, Full-Stack Development
Skills: Java, Angular, Microservices, React.js, SQL

We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions.

Responsibilities:
- Designing, implementing, and unit testing Java applications.
- Aligning application design with business goals.
- Debugging and resolving technical problems that arise.
- Recommending changes to the existing Java infrastructure.
- Ensuring continuous professional self-development.

Requirements:
- Experience developing and testing Java web services: RESTful (primary), XML, and JSON, supporting integration and enabling access via API calls.
- Experience with Tomcat, Apache, and similar web server technologies.
- Hands-on experience working with RabbitMQ and Kafka.
- Experience with the Spring Boot framework.
- Hands-on experience with Angular/Node.js is preferred.
- Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus.
- Experience with platforms like Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus.
- Agile/Scrum expertise.
- Experience establishing and enforcing branching and software development processes and deployment via CI/CD.

Competencies:
- Aligning application design with business goals.
- Debugging and resolving technical problems that arise.
- Recommending changes to the existing Java infrastructure.
- Ensuring continuous professional self-development.
- Team spirit and strong communication skills.
- Customer- and service-oriented, with a confident appearance in an international environment.
- Very high proficiency in English.

How to apply for this opportunity?
Step 1: Click on Apply! and register or log in on our portal.
Step 2: Complete the screening form & upload your updated resume.
Step 3: Increase your chances of getting shortlisted & meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 18 hours ago
6.0 years
2 - 10 Lacs
Hyderābād
On-site
India | Information Technology (IT) | Group Functions
Job Reference # 325109BR
City: Hyderabad
Job Type: Full Time

Your role
Are you passionate about data? Do you enjoy problem solving in elegant ways? Do you enjoy using cutting-edge technology and developing a variety of solutions to achieve your goals?

We are looking for a hands-on data engineer to:
- Develop innovative solutions for managing sensitive information
- Participate in the analysis of business problems, working directly with different teams
- Be part of the design process for new solutions
- Experience the reality of day-to-day data management

Your team
You'll be working in the Enterprise Data Mesh team in Hyderabad. We are a global team that provides innovative solutions to manage sensitive data. The solution portfolio ranges from data analysis through data transformation to AI-based data generation. The team is international, with a footprint on three continents. You will be working closely with technology experts across UBS Technology.

Your expertise
You have:
- At least 6 years of experience developing data management applications using Python and its data engineering frameworks
- Interest in using state-of-the-art technologies (Pandas, Spark, Kafka, Azure) to develop innovative solutions
- Command of application, data, and infrastructure architecture disciplines
- Experience working in delivery-oriented agile teams
- A strong command of Python
- Hands-on experience with Pandas, SQL, and Spark

Desired:
- Exposure to cloud development with Azure and/or AWS
- Experience with databases (relational and others)
- Experience designing RESTful APIs and/or Kafka clients
- Experience building modern solutions (Kafka data streaming, Azure cloud-native)
- A Financial Services background is advantageous, but not mandatory
- Experience with Machine Learning and Artificial Intelligence is a plus

You are:
- Willing to take full ownership of the problem and the code, able to hit the ground running and deliver great solutions
- A strong thinker who sees problems around the corner and resolves them
- Adept at communicating with both technical and non-technical audiences

About us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How we hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We're dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That's why collaboration is at the heart of everything we do. Because together, we're more than ourselves. We're committed to disability inclusion, and if you need reasonable accommodations/adjustments throughout our recruitment process, you can always contact us.
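The expertise list names Python and Pandas for managing sensitive data. A minimal sketch of the kind of transformation that implies, redacting a sensitive column in place while keeping the frame useful for analysis, might look like this (the column names, values, and redaction rule are illustrative assumptions, not from the posting):

```python
import pandas as pd

# Illustrative frame: the column names and values are assumptions.
df = pd.DataFrame({
    "account": ["A1", "A2", "A3"],
    "balance": [100.0, 250.5, 75.25],
    "ssn": ["111-22-3333", "444-55-6666", "777-88-9999"],
})

# Redact every digit in the sensitive column before the data leaves the team;
# the dash structure is kept so downstream format checks still pass.
df["ssn"] = df["ssn"].str.replace(r"\d", "*", regex=True)

# Non-sensitive columns remain fully usable for analytics.
total_balance = df["balance"].sum()
```

Vectorized string operations like `str.replace` are the idiomatic Pandas route here; a row-by-row Python loop over the same frame would do the same work far more slowly at scale.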
Contact Details UBS Business Solutions SA UBS Recruiting Disclaimer / Policy statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 18 hours ago
0 years
0 Lacs
Gurugram, Haryana, India
Remote
Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences, all created by our global community of developers and creators. At Roblox, we're building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world and on any device. We're on a mission to connect a billion people with optimism and civility, and we're looking for amazing talent to help us get there. A career at Roblox means you'll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

Roblox Operating System (ROS) is our internal productivity platform that governs how Roblox operates as a company. Through an integrated suite of tools, ROS shapes how we make talent and personnel decisions, plan and organize work, discover knowledge, and scale efficiently. We are seeking a Senior Data Engineer to enhance our data posture and architecture, synchronizing data across vital third-party systems like Workday, Greenhouse, GSuite, and JIRA, as well as our internal Roblox OS application database. Our Roblox OS app suite encompasses internal tools and third-party applications for People Operations, Talent Acquisition, Budgeting, Roadmapping, and Business Analytics. We envision an integrated platform that streamlines processes while providing employees and leaders with the information they need to support the business.

This is a new team in our Roblox India location, working closely with data scientists & analysts, product & engineering, and other stakeholders in India & the US. You will report to the Engineering Manager of the Roblox OS team in your local location and collaborate with Roblox internal teams globally.
Work Model: This role is based in Gurugram and follows a hybrid structure: 3 days from the office (Tuesday, Wednesday & Thursday) and 2 days working from home.
Shift Time: 2:00pm - 10:30pm IST (cabs will be provided)

You Will
- Design and build scalable data pipelines: Architect, develop, and maintain robust, scalable data pipelines using orchestration frameworks like Airflow to synchronize data between internal systems.
- Implement and optimize ETL processes: Apply a strong understanding of ETL (Extract, Transform, Load) processes and best practices for seamless data integration and transformation.
- Develop data solutions with SQL: Utilize your proficiency in SQL and relational databases (e.g., PostgreSQL) for advanced querying, data modeling, and optimizing data solutions.
- Contribute to data architecture: Actively participate in data architecture and implementation discussions, ensuring data integrity and efficient data transposition. Manage and optimize data infrastructure, including databases, cloud storage solutions, and API endpoints.
- Write high-quality code: Focus on developing clear, readable, testable, modular, and well-monitored code for data manipulation, automation, and software development, with a strong emphasis on data integrity.
- Troubleshoot and optimize performance: Apply excellent analytical and problem-solving skills to diagnose data issues and optimize pipeline performance.
- Collaborate cross-functionally: Work effectively with cross-functional teams, including data scientists, analysts, and business stakeholders, to translate business needs into technical data solutions.
- Ensure data governance and security: Implement data anonymization and pseudonymization techniques to protect sensitive data, and contribute to master data management (MDM) concepts including data quality, lineage, and governance frameworks.
You Have:
Data Engineering Expertise: At least 6 years of proven experience designing, building, and maintaining scalable data pipelines, coupled with a strong understanding of ETL processes and best practices for data integration.
Database and Data Warehousing Proficiency: Deep proficiency in SQL and relational databases (e.g., PostgreSQL), and familiarity with at least one cloud-based data warehouse solution (e.g., Snowflake, Redshift, BigQuery).
Technical Acumen: Strong scripting skills for data manipulation and automation, familiarity with data streaming platforms (e.g., Kafka, Kinesis), and knowledge of containerization (e.g., Docker) and cloud infrastructure (e.g., AWS, Azure, GCP) for deploying and managing data solutions.
Data & Cloud Infrastructure Management: Experience managing and optimizing data infrastructure, including databases, cloud storage solutions, and configuring API endpoints.
Software Development Experience: Experience in software development with a focus on data integrity and transposition, and a commitment to writing clear, readable, testable, modular, and well-monitored code.
Problem-Solving & Collaboration Skills: Excellent analytical and problem-solving abilities to troubleshoot complex data issues, combined with strong communication and collaboration skills to work effectively across teams.
Passion for Data: A genuine passion for working with large amounts of data from various sources, and an understanding of the critical impact of data quality on company strategy at an executive level.
Adaptability: Ability to thrive and deliver results in a fast-paced environment with competing priorities.

Roles that are based in an office are onsite Tuesday, Wednesday, and Thursday, with optional presence on Monday and Friday (unless otherwise noted).
Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
Posted 18 hours ago
10.0 years
30 - 34 Lacs
India
Remote
***Need Databricks SME***
Location: Offshore (anywhere in India, remote); must work in EST time (US shift). Need 10+ years of experience.

5 Must-Haves:
1. Data expertise: worked with Azure Databricks pipelines and cluster shutdown; 2 or more years' experience.
2. Unity Catalog migration: well versed; has done Terraform scripting in DevOps; can write and understand the code, understand the logic behind the scenes, and automate functionality.
3. Terraform expertise: code building; 3 or more years.
4. Understanding of data mesh architecture on the Microsoft Azure cloud platform: decoupling applications and the ability to have things run in parallel; a clear understanding, with 2+ years of experience.
5. Great problem solver.

Key Responsibilities:
Architect, configure, and optimize Databricks pipelines for large-scale data processing within an Azure Data Lakehouse environment.
Set up and manage Azure infrastructure components including Databricks Workspaces, Azure containers (AKS/ACI), storage accounts, and networking.
Design and implement a monitoring and observability framework using tools like Azure Monitor, Log Analytics, and Prometheus/Grafana.
Collaborate with platform and data engineering teams to enable a microservices-based architecture for scalable and modular data solutions.
Drive automation and CI/CD practices using Terraform, ARM templates, and GitHub Actions/Azure DevOps.

Required Skills & Experience:
Strong hands-on experience with Azure Databricks, Delta Lake, and Apache Spark.
Deep understanding of Azure services: Resource Manager, AKS, ACR, Key Vault, and networking.
Proven experience in microservices architecture and container orchestration.
Expertise in infrastructure-as-code, scripting (Python, Bash), and DevOps tooling.
Familiarity with data governance, security, and cost optimization in cloud environments.

Bonus:
Experience with event-driven architectures (Kafka/Event Grid).
Knowledge of data mesh principles and distributed data ownership.
Interview: Two rounds of interviews (1st with manager & 2nd with the team) Job Type: Full-time Pay: ₹3,000,000.00 - ₹3,400,000.00 per year Schedule: US shift
Posted 18 hours ago
3.0 years
3 - 6 Lacs
Cochin
On-site
Job Title: Software Engineer - Java
Job Code: CUB/2025/SE/011
Experience: 3-5 years
Location: Infopark, Kochi

Why Join Us?
Innovative Environment: Join a forward-thinking company that encourages creativity and problem-solving.
Career Growth: Opportunities for professional development and career advancement.
Collaborative Culture: Work in a team-oriented environment where your contributions are valued.
Competitive Compensation: Attractive salary package and performance-based incentives.

Job Requirements
2-4 years of professional experience in Java development.
Ability to contribute to scalable application development using Java and Spring Boot.
Exposure to containerized deployments and modern development workflows.

Key Responsibilities
Develop and maintain Java applications using Java 1.8+, Flux/Reactive programming, lambdas, and Streams.
Contribute to solutions using Spring Boot (Spring Data, Spring Stream, Spring Task/Batch, WebClient).
Work with PostgreSQL and integrate with NoSQL databases.
Assist in implementing Kafka-based streaming solutions.
Support containerized deployments with Docker and Kubernetes.
Collaborate with senior developers and participate in code reviews.
Ensure code quality through best practices and testing.
Learn and adapt to evolving tools and technologies.

Required Skills
Programming: Java 1.8+ with basic Flux/Reactive programming concepts.
Frameworks: Spring Boot (Spring Data, Stream, Task/Batch, WebClient).
Databases: PostgreSQL and one NoSQL database (MongoDB, Cassandra, or Neo4j).
Streaming: Kafka (basic hands-on experience).
Containerization: Docker and Kubernetes (working knowledge).

Preferred Skills
Familiarity with ArangoDB.
Experience in CI/CD processes.
Exposure to Agile development environments.

Soft Skills
Strong communication skills.
Good problem-solving ability.
Eagerness to learn and grow in a team environment.

Qualifications
Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
Posted 18 hours ago
6.0 years
20 - 40 Lacs
Cochin
On-site
An exciting opportunity to join an established UK-based company with a 20% year-on-year growth rate. A rapidly growing UK-based software (SaaS) company dedicated to providing cutting-edge solutions for the logistics and transportation industry. With ongoing investment in new products, we offer the excitement and innovation of a start-up coupled with the stability and benefits of an established business.

Knowledge, Skills and Experience Required:
Able to communicate clearly and accurately on technical topics in English (verbal and written).
Can write performant, testable, and maintainable Java code, with 6+ years of proven commercial Java experience.
Knowledge of best practice and patterns across the implementation, build, and deployment of Java services.
Proven extensive experience of the Java ecosystem and related technologies and frameworks: Spring Boot and other Spring libraries and frameworks, Hibernate, Maven.
Fluent in TDD and familiar with BDD.
Knowledge of Git, JIRA, Confluence, Maven, Docker, and using Jenkins.
Solid experience working with RESTful services in microservices-oriented architectures.
Solid knowledge of working within a cloud-based infrastructure, ideally AWS.
Knowledge of NoSQL and relational database management systems, especially PostgreSQL.
Experience building services within event- or stream-based systems using SQS, Kafka, or Pulsar, and CQRS.
Thorough understanding of Computer Science fundamentals and software patterns.

Nice to have:
Experience with AWS services such as Lambda, SQS, S3, Rekognition Face Liveness.
Experience with Camunda BPMN.

Job Types: Full-time, Permanent
Pay: ₹2,000,000.00 - ₹4,000,000.00 per year
Location Type: In-person
Schedule: Day shift, Evening shift, Morning shift, Monday to Friday
Ability to commute/relocate: Ernakulam, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience: Total: 10 years (Required); Java: 10 years (Required)
Work Location: In person
Expected Start Date: 30/08/2025
Posted 18 hours ago
3.0 years
3 - 7 Lacs
Cochin
On-site
Minimum Required Experience: 3 years
Full Time
Skills: Azure Cloud, Kubernetes, Helm Charts, Git, Docker

Description
Job Title: Software DevOps Engineer (3-5 Years Experience) or Senior Software DevOps Engineer (5-10 Years Experience)

Responsibilities:
Design, implement, and maintain CI/CD pipelines to ensure efficient and reliable software delivery.
Collaborate with Development, QA, and Operations teams to streamline the deployment and operation of applications.
Monitor system performance, identify bottlenecks, and troubleshoot issues to ensure high availability and reliability.
Automate repetitive tasks and processes to improve efficiency and reduce manual intervention.
Participate in code reviews and contribute to the improvement of best practices and standards.
Implement and manage infrastructure as code (IaC) using Terraform.
Document processes, configurations, and procedures for future reference.
Stay updated with the latest industry trends and technologies to continuously improve DevOps processes.
Create POCs for the latest tools and technologies.

Requirements:
Bachelor's degree in Computer Science, Information Technology, or a related field.
1-3 years of experience in a DevOps or related role.
Proficiency with version control systems (e.g., Git).
Experience with scripting languages (e.g., Python, Bash).
Strong understanding of CI/CD concepts and tools (e.g., Azure DevOps, Jenkins, GitLab CI).
Experience with cloud platforms (e.g., AWS, Azure, GCP).
Familiarity with containerization technologies (e.g., Docker, Kubernetes).
Basic understanding of networking and security principles.
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
Ability to learn and adapt to new technologies and methodologies.
Readiness to work with clients directly.
Mandatory Skills: Azure Cloud, Azure DevOps, CI/CD pipelines, version control (Git), Linux commands, Bash scripting, Docker, Kubernetes, Helm charts; any monitoring tools such as Grafana, Prometheus, ELK Stack, or Azure Monitor; Azure, AKS, Azure Storage, Virtual Machines; understanding of microservices architecture, orchestration, and SQL Server.
Optional Skills: Ansible scripting, Kafka, MongoDB, Key Vault, Azure CLI.
Posted 18 hours ago
7.0 years
0 Lacs
Gurgaon
On-site
Position: Data Engineer
Budget: 1.8 LPM
Experience: 7 yrs
Location: Gurgaon

Minimum of 7+ years of experience in the data analytics field.
Proven experience with Azure/AWS Databricks in building and optimizing data pipelines, architectures, and datasets.
Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
Ability to troubleshoot and optimize complex queries on the Spark platform.
Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
Hands-on experience in performance tuning and optimizing code running in Databricks environments.
Strong analytical and problem-solving skills, particularly within Big Data environments.
Experience with Big Data management tools and technologies including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, Azure.

Technical and Professional Skills (Must Have):
Excellent communication skills with the ability to interact directly with customers.
Azure/AWS Databricks.
Python / Scala / Spark / PySpark.
Strong SQL and RDBMS expertise.
Hive / HBase / Impala / Parquet.
Sqoop, Kafka, Flume.
Airflow.

Job Type: Full-time
Pay: ₹100,000.00 - ₹1,300,000.00 per year
Schedule: Day shift
Work Location: In person
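The posting above asks for the ability to troubleshoot and optimize complex queries. The core workflow, inspecting the query plan before and after a change, is engine-agnostic; the sketch below uses SQLite purely because it ships with Python (Spark's `df.explain()` plays the analogous role), and the table and index names are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [(i % 100, "click") for i in range(1000)],
)

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN reports whether the engine scans or uses an index.
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r[-1]) for r in rows)

query = "SELECT * FROM events WHERE user_id = 42"
before = plan(query)  # full table scan: no index exists yet

conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
after = plan(query)   # the same query now probes the index

print(before)  # a SCAN of the whole table
print(after)   # a SEARCH ... USING INDEX idx_events_user
```

The same habit, measure the plan, change one thing, measure again, carries over to tuning Spark jobs, where the levers are instead partitioning, predicate pushdown, and join strategy.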
Posted 18 hours ago
2.0 years
0 Lacs
Gurgaon
On-site
Job Description
Overview: This role will play a pivotal part in software development activities and collaboration across the Strategy & Transformation (S&T) organization. Software engineering is the cornerstone of scalable digital transformation across PepsiCo's value chain. You will work across the full stack, building highly scalable distributed solutions that enable positive user experiences. The role requires delivering the best possible software solutions, being customer-obsessed, and ensuring they generate incremental value. The engineer is expected to work closely with the user experience, product, IT, and process engineering teams to develop new products and prioritize and deliver solutions across S&T core priorities. The ideal candidate should have foundational knowledge of both front-end and back-end technologies, a passion for learning, and the ability to work in a collaborative environment.

Responsibilities
Assist in designing, developing, and maintaining scalable web applications.
Collaborate with senior developers and designers to implement features from concept to deployment.
Work on both front-end (React, Angular, Vue.js, etc.) and back-end (Node.js, Python, Java, etc.) development tasks.
Develop and consume RESTful APIs and integrate third-party services.
Participate in code reviews, testing, and bug fixing.
Write clean, maintainable, and well-documented code.
Stay updated on emerging technologies and industry best practices.

Qualifications
Minimum Qualifications:
A Bachelor's Degree in Computer Science or a related field.
2+ years of relevant software development experience.
Commanding knowledge of data structures, algorithms, and object-oriented design.
Strong system design fundamentals and experience building distributed scalable systems.
Expertise in Java and its related technologies.
RESTful or GraphQL API (preferred) experience.
Expertise in Java and the Spring/Spring Boot ecosystem, JUnit, back-end microservices, and serverless computing.
Experience with JavaScript/TypeScript, Node.js, React, React Native, or related frameworks.
Experience with large-scale messaging systems such as Kafka is a bonus.
Experience with NoSQL databases is good to have.
Hands-on experience with any cloud platform such as AWS, GCP, or Azure (preferred).

Qualities
Strong attention to detail and extremely well organized.
Ability to work cross-functionally with product, service design, and operations across the organization.
Demonstrated passion for excellence with respect to engineering services, education, and support.
Strong interpersonal skills and the ability to navigate a complex, matrixed internal environment.
Ability to work collaboratively with regional and global partners in other functional units.
Posted 18 hours ago
1.0 years
3 - 6 Lacs
Chennai
On-site
In this role as an Integration Engineer, you will:
Analyze integration requirements and understand the design specifications shared by stakeholders or architects.
Develop scalable and efficient integration processes using Boomi AtomSphere, adhering to best practices and defined standards.
Perform unit testing and integration testing of the developed Boomi processes to ensure functionality and data accuracy.
Prepare simulators/test harnesses to mimic external systems for testing purposes and validate integration flows.
Maintain proper documentation of integration processes, mappings, and test cases for reusability and compliance.

Skills Required:
1+ years of hands-on experience with Boomi AtomSphere for developing and managing integrations and workflows.
Familiarity with creating, consuming, and managing REST/SOAP APIs within integration flows.
Basic understanding of and experience with EDI transactions and standards for data exchange.
Knowledge of using Data Hub components for master data management and synchronization across systems.
Experience working with queuing systems such as JMS, Kafka, or similar for asynchronous integrations.
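The "simulators/test harnesses to mimic external systems" responsibility above is tool-agnostic. As a minimal sketch (plain Python rather than Boomi, and with an invented partner endpoint and payload), the idea is to stand up a throwaway stub that answers like the external system would, so an integration flow can be validated offline:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class PartnerStub(BaseHTTPRequestHandler):
    """Answers like a hypothetical partner order-status API would."""

    def do_GET(self):
        body = json.dumps({"order_id": "PO-123", "status": "SHIPPED"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PartnerStub)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The integration flow under test would be pointed at this URL instead
# of the real partner system.
url = f"http://127.0.0.1:{server.server_port}/orders/PO-123"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
print(payload["status"])  # SHIPPED
```

In Boomi terms, the stub plays the role a mocked connector endpoint plays during process testing: the flow logic is exercised end to end while the real external dependency stays untouched.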
Posted 18 hours ago
2.0 years
8 - 9 Lacs
Chennai
On-site
We are seeking a passionate and experienced Full Stack AI/ML Engineer with a strong background in machine learning and a drive for building intelligent systems. As a Full-Stack AI/ML Engineer on the Ford Pro Charging team, you will design, build, and ship intelligent services that power our global EV-charging platform. If you love turning data into real-world impact and thrive on end-to-end ownership—from research notebooks to production APIs—this is your playground. Required Skills & Qualifications: Experience: 2+ years of professional experience in Artificial Intelligence, Machine Learning, or Data Science roles, with a proven track record of delivering production-grade AI/ML solutions (or equivalent demonstrable expertise). Technical Expertise: Proficiency in Python and strong experience with core AI/ML libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn, Hugging Face Transformers). Solid grasp of various machine learning algorithms (supervised, unsupervised, reinforcement learning) and deep learning architectures. Demonstrated experience applying machine learning to complex datasets, including structured and unstructured data. Proficient in API design (REST, GraphQL), microservices, and database design (SQL/NoSQL); production experience on at least one major cloud (AWS, Azure, or GCP). Practical knowledge of Docker, Kubernetes, and CI/CD pipelines (GitHub Actions, Argo, or similar). Problem-Solving: Excellent analytical and problem-solving skills, with proven ability to break down complex problems into iterative experiments and devise effective, scalable AI/ML solutions Enthusiasm & Learning: A genuine passion for technology, coupled with a self-driven commitment to continuous learning and mastery of new techniques. 
We value individuals who proactively identify challenges, conceptualize solutions, and lead ideation and innovation, beyond mere task execution.
Communication: Strong communication skills to articulate complex technical concepts to both technical and non-technical stakeholders.
Education: Bachelor's or master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.

Bonus Points:
Domain expertise in EV charging, smart-grid, or energy-management systems.
Experience with distributed data technologies (Spark, Flink, Kafka Streams).
Contributions to open-source ML projects or peer-reviewed publications.
Knowledge of ethical and responsible AI frameworks, including bias detection and model explainability.

Responsibilities:
Design & Develop AI Solutions: Lead the design, development, training, and evaluation of machine learning models and AI solutions across various domains to enhance our products and services.
Identify AI Opportunities: Proactively identify and explore opportunities to apply data-driven solutions to improve existing products, optimize internal processes, and create new value propositions.
Model Implementation & Optimization: Implement, optimize, and deploy various machine learning algorithms and deep learning architectures to solve complex problems.
Data Management & Engineering: Collaborate with data engineers to ensure robust data collection, preprocessing, feature engineering, and pipeline development for effective model training and performance.
Backend Integration: Design and implement robust APIs and services to integrate AI/ML models and solutions seamlessly into our existing backend infrastructure, ensuring scalability, reliability, and maintainability.
Performance Monitoring & Improvement: Continuously monitor, evaluate, and fine-tune the performance, accuracy, and efficiency of deployed AI/ML models and systems.
Research & Innovation: Stay abreast of the latest advancements in AI, ML, and relevant technologies, and propose innovative solutions to push the boundaries of our product capabilities. Testing & Deployment: Participate in the rigorous testing, deployment, and ongoing maintenance of AI/ML solutions in production environments.
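The supervised-learning fundamentals this posting asks for reduce to a fit-predict loop. Real work here would use TensorFlow, PyTorch, or scikit-learn as the posting says; the dependency-free sketch below (with entirely synthetic numbers) just shows the loop itself, fitting a one-feature linear model by gradient descent:

```python
# Fit y ≈ w*x + b by minimizing mean squared error with gradient descent.
data = [(x, 2.0 * x + 1.0) for x in range(10)]  # synthetic truth: w=2, b=1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y             # prediction error on one example
        grad_w += 2 * err * x / len(data)  # d(MSE)/dw contribution
        grad_b += 2 * err / len(data)      # d(MSE)/db contribution
    w -= lr * grad_w                       # step against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

Swapping the model for a deep network and the hand-written gradients for autodiff gives the training loops used in the frameworks listed above; the structure (forward pass, loss, gradient, update) is unchanged.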
Posted 18 hours ago
4.0 - 6.0 years
0 Lacs
Chennai
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
Strong exposure to Java (Java 17 good to have), Spring Framework, Spring Boot, Kafka, microservices, MongoDB, OpenShift, REST, Maven, Git, the cloud 12 factors, JUnit, TDD, and Agile; strong knowledge of CI/CD pipelines and exposure to system design with resiliency backed by observability. Knowledge of Gen AI tools is a plus.
Exposure to Angular and UI libraries, i.e. ag-grid, ngrx, karma, jasmine, sonar, TypeScript, exceljs, rxjs, lodash, rx-stomp, HTML, CSS, JS.
Serve as advisor or coach to new or lower-level analysts.
Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions.
Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and/or other team members.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications: 4-6 years of relevant experience in the Financial Service industry Intermediate level experience in Applications Development role Consistently demonstrates clear and concise written and verbal communication Demonstrated problem-solving and decision-making skills Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements Education: Bachelor’s degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. - Job Family Group: Technology - Job Family: Applications Development - Time Type: Full time - Most Relevant Skills Please see the requirements listed above. - Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. - Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi . View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 18 hours ago
10.0 - 12.0 years
6 - 8 Lacs
Chennai
On-site
The Applications Development Senior Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities: Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas Monitor and control all phases of development process and analysis, design, construction, testing, and implementation as well as provide user and operational support on applications to business users Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, provide evaluation of business process, system process, and industry standards, and make evaluative judgement Recommend and develop security measures in post implementation analysis of business usage to ensure successful system design and functionality Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems Ensure essential procedures are followed and help define operating standards and processes Serve as advisor or coach to new or lower level analysts Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy. Acts as SME to senior stakeholders and /or other team members. 
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
10-12 years of relevant experience
Experience in systems analysis and programming of software applications
Experience in managing and implementing successful projects
Working knowledge of consulting/project management techniques/methods
Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Required Skills:
Experience designing and developing robust backend applications using Spring Boot, Spring Batch, and other Spring ecosystem modules.
Experience architecting, developing, and deploying microservices solutions on cloud platforms using containerization and orchestration tools.
Experience in Lightspeed.
Experience in Kafka or other messaging tools.
Experience with Java-RDBMS (Oracle) development.
Experience in client reporting, such as Advices and Statements, is a plus.
Knowledge of operating systems: Linux/Unix (SUN/IBM), Windows.
Working experience with application servers: WebLogic, WebSphere.
Any experience with ISIS Papyrus and ETL tools would be a plus.

Education: Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
- Job Family Group: Technology
- Job Family: Applications Development
- Time Type: Full time
- Most Relevant Skills: Please see the requirements listed above.
- Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
- Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 18 hours ago
6.0 - 10.0 years
4 - 8 Lacs
Chennai
On-site
Job ID: 28554 Location: Chennai, IN Area of interest: Technology Job type: Regular Employee Work style: Office Working Opening date: 8 Jul 2025 Key Responsibilities The Engineer will be a key member of Trade Engineering team and responsible for delivering the product as per the business requirements to the application components. The role holder will form a key part of a fast-paced, ambitious, delivery programme. Software Delivery Work closely with the Engineering Analysts and Product Owners to ensure a healthy, refined Product Backlog for the team and code as per the Stories Ensure coding standard followed for all the stories developed Ensure adherence to delivery schedules and attendance at Daily Stand-up meetings Remove roadblocks and obstacles where the team is not able to do so Enable close cooperation across all roles and functions Facilitate team member specialization and generalization of tasks Schedule Sprint Reviews at the end of each sprint with the Product Owner Primarily a facilitative, servant leader and a Scrum process coach. 
Regularly and physically meets with the team members.

Deliver Functional Design
Participate in business process re-engineering.
Assess value, develop cases, and prioritize stories, epics, and themes with acceptance criteria to ensure focus on those with maximum value that are aligned with product strategy.
Provide support during User Acceptance/Integration Testing.
Walk through the User Manual/Production User Verification scripts with users.

Adherence to Risk & Data Quality Management Requirements
Proactively identify issues and actions.
Monitor and remediate issues and actions from audits.
Awareness of the regulatory world and knowledge of AML, Fraud, Screening, and Data Quality needs, ensuring these are catered for in the functional design.

Stakeholder Management
Coordinate with resources and teams across different locations.

Key Stakeholders
Operations (Group, Regional, Country and Hubs)
Business - Product/Sales/Segments (Group, Regional, Country)
Technology (Delivery Teams and Technical Architecture)
Hive Leads
Chapter Leads
Trade Technology Team
Interface IT Teams
Technology Services Teams

Skills and Experience
Mandatory Skills:
Technical Stack: Java, Spring and its extensions (Boot, Cloud, Data, Security, Streams), PostgreSQL/Oracle, Kafka, Redis, Elasticsearch, AWS modules (S3, SMS, OpenSearch), Hibernate, REST API (development, contract), Maven, Azure DevOps (ADO)
Technical Tools: Confluence/ADO/Bitbucket or Git, CI/CD (Maven, Git, Jenkins), Java IDEs (Eclipse, IntelliJ, VS Code)
Agile development experience
Strong presentation and communication skills
Ability to understand business requirements and convert them into solution designs
Knowledge of web-based systems architecture, microservice-based architecture, and enterprise application architecture, as well as experience managing expectations when balancing alternatives against business and financial constraints
Experience with design and development of a REST API platform using Apigee/APIM, converting web services from SOAP to REST
Experience with security frameworks (e.g., JWT, OAuth2)
Experience in API-layer concerns such as security, custom analytics, throttling, caching, logging, monetization, and request and response modifications
Experience creating REST API documentation using Swagger and YAML or similar tools desirable
Experience with Unix and Linux operating systems

Prior Experience:
Relevant experience of 6-10 years
Banking/Fintech experience mandatory
Trade Finance domain experience preferred

Qualifications
Education: Bachelor's Degree in Computer Science, Software Engineering or equivalent degree
Certifications: CBAP-certified preferred

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together we: Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term What we offer In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations. Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holiday, which is combined to 30 days minimum. Flexible working options based around home and office locations, with flexible working patterns. Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential. www.sc.com/careers
Posted 18 hours ago
3.0 years
5 - 6 Lacs
Chennai
On-site
We are looking for a skilled Software Engineer with 3+ years of hands-on experience in backend and frontend technologies. The role involves building scalable applications using Node.js/NestJS, Python, TypeScript, MongoDB, and front-end frameworks like React and Angular. A solid grasp of computer science fundamentals and system design, along with experience with Kafka, GitHub/GitLab, CI/CD pipelines, and Agile teams, is expected. You'll be part of a cross-functional team, contributing to the development of scalable, high-quality applications in an Agile environment, and collaborating with a US-based team when needed.

Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
Minimum 3 years of professional development experience.
Proficient in Node.js, NestJS, Python, TypeScript, and MongoDB.
Experience with React, Angular, and RESTful API development.
Solid understanding of data structures, algorithms, and design patterns.
Experience with Kafka, event-driven systems, GitHub/GitLab, CI/CD pipelines, and cloud platforms (AWS/GCP/Azure).
Familiar with Agile methodology and tools like Jira.
Strong communication skills and the ability to work with globally distributed teams.

Responsibilities:
Develop and maintain scalable backend services using Node.js/NestJS, Python, and MongoDB.
Build user interfaces using React and Angular.
Design efficient APIs and implement complex business logic.
Work with Kafka and event-driven microservices.
Manage version control with GitHub/GitLab and work with CI/CD pipelines such as GitHub Actions.
Follow best practices in code quality, testing, and deployment.
Collaborate with cross-functional teams and participate in Agile processes via Jira.
Align with US-based teams on collaboration and delivery schedules.
Posted 18 hours ago
5.0 years
3 - 6 Lacs
Chennai
Remote
Education
Degree or postgraduate qualification in Computer Science or a related field (or equivalent industry experience).

Experience
Minimum 5 years of coding experience in ReactJS (TypeScript), HTML, and CSS pre-processors or CSS-in-JS, creating high-performance, responsive enterprise web applications.
Minimum 5 years of coding experience in NodeJS, JavaScript, TypeScript, and NoSQL databases.
Developing and implementing highly responsive user interface components using React concepts (self-contained, reusable, and testable modules and components).
Architecting and automating the build process for production, using task runners or scripts.
Knowledge of data structures for TypeScript.
Monitoring and improving front-end performance.
Banking or retail domain knowledge is good to have.
Hands-on experience in performance tuning, debugging, and monitoring.

Technical Skills
Excellent knowledge of developing scalable and highly available RESTful APIs using NodeJS technologies.
Well versed in CI/CD principles, and actively involved in troubleshooting issues in a distributed services ecosystem.
Understanding of containerization; experienced in Docker and Kubernetes.
Exposure to API gateway integrations like 3scale.
Understanding of single sign-on and token-based authentication (REST, JWT, OAuth).
Expert knowledge of task/message queues including, but not limited to, AWS, Microsoft Azure, Pushpin, and Kafka.
Practical experience with GraphQL is good to have.
Writing tested, idiomatic, and documented JavaScript, HTML, and CSS.
Experience in developing responsive web-based UIs.
Experience with Styled Components, Tailwind CSS, Material UI, and other CSS-in-JS techniques.
Thorough understanding of the responsibilities of the platform, database, API, caching layer, proxies, and other web services used in the system.
Writing non-blocking code, and resorting to advanced techniques such as multi-threading when needed.
Strong proficiency in JavaScript, including DOM manipulation and the JavaScript object model.
Documenting code inline using JSDoc or other conventions.
Thorough understanding of React.js and its core principles.
Familiarity with modern front-end build pipelines and tools.
Experience with popular React.js workflows (such as Flux, Redux, the Context API, or data structures).
A knack for benchmarking and optimization.
Proficiency with the latest versions of ECMAScript (JavaScript or TypeScript).
Knowledge of React and common tools used in the wider React ecosystem, such as npm, yarn, etc.
Familiarity with common programming tools such as RESTful APIs, TypeScript, version control software, remote deployment tools, and CI/CD tools.
An understanding of common programming paradigms and fundamental React principles, such as React components, hooks, and the React lifecycle.
Unit testing using Jest, Enzyme, Jasmine, or an equivalent framework.
Understanding of linter libraries (TSLint, Prettier, etc.).

Functional Skills
Experience following best coding, security, unit testing, and documentation standards and practices.
Experience in Agile methodology.
Ensure quality of technical and application architecture and design of systems across the organization.
Effectively research and benchmark technology against other best-in-class technologies.

Soft Skills
Able to influence multiple teams on technical considerations, increasing their productivity and effectiveness by sharing deep knowledge and experience.
About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 18 hours ago
7.0 - 10.0 years
0 Lacs
Noida
Remote
Job Title: Senior Software Development Engineer (Sr. SDE)
Location: Noida

About Us:
At Clearwater Analytics, we are on a mission to become the world's most trusted and comprehensive technology platform for investment management, reporting, accounting, and analytics. We partner with sophisticated institutional investors worldwide and are seeking a Software Development Engineer who shares our passion for innovation and client commitment.

Role Overview:
We are seeking a skilled Software Development Engineer with strong coding and design skills, as well as hands-on experience in cloud technologies and distributed architecture. This role focuses on delivering high-quality software solutions within the FinTech sector, particularly in the Front Office, OEMS, PMS, and Asset Management domains.

Key Responsibilities:
Design and develop scalable, high-performance software solutions in a distributed architecture environment.
Collaborate with cross-functional teams to ensure engineering strategies align with business objectives and client needs.
Implement real-time and asynchronous systems with a focus on event-driven architecture.
Ensure operational excellence by adhering to best practices in software development and engineering.
Present technical concepts and project updates clearly to stakeholders, fostering effective communication.

Requirements:
7-10 years of hands-on experience in software development, ideally within the FinTech sector.
Strong coding and design skills, with a solid understanding of software development principles.
Deep expertise in cloud platforms (AWS/GCP/Azure) and distributed architecture.
Experience with real-time systems, event-driven architecture, and engineering excellence in a large-scale environment.
Proficiency in Java and familiarity with messaging systems (JMS/Kafka/MQ).
Excellent verbal and written communication skills.

Desired Qualifications:
Experience in the FinTech sector, particularly in Front Office, OEMS, PMS, and Asset Management at scale.
Bonus: Experience with BigTech, Groovy, Bash, Python, and knowledge of GenAI/AI technologies.

What we offer:
Business casual atmosphere in a flexible working environment
Team-focused culture that promotes innovation and ownership
Access to cutting-edge investment reporting technology and expertise
Defined and undefined career pathways, allowing you to grow your way
Competitive medical, dental, vision, and life insurance benefits
Maternity and paternity leave
Personal Time Off and Volunteer Time Off to give back to the community
RSUs, as well as an employee stock purchase plan and a 401(k) with a match
Work from anywhere 3 weeks out of the year
Work-from-home Fridays

Why Join Us?
This is an incredible opportunity to be part of a dynamic engineering team that is shaping the future of investment management technology. If you're ready to make a significant impact and advance your career, apply now!
Posted 18 hours ago
3.0 years
8 - 10 Lacs
Noida
On-site
Job Summary:
We are seeking a skilled ASP.NET MVC Developer to join our team and contribute to the development and maintenance of a web application focused on media and content management. The ideal candidate will have strong experience with ASP.NET MVC, C#, Razor views, and RESTful APIs, and should also be comfortable working with video files, images, and PowerPoint content in a backend system.

Key Responsibilities:
Design, develop, and maintain scalable ASP.NET MVC web applications using C# and Razor views
Develop and integrate RESTful APIs for frontend-backend communication
Handle video file upload, conversion, storage (e.g., to Azure/AWS), and playback integration
Process and manage PowerPoint (PPT) content, including upload, conversion (to PDF/images), preview, and rendering in the UI
Ensure the application is responsive, secure, and performant
Collaborate with UI/UX designers and frontend developers for seamless integration
Write clean, modular, well-documented, and reusable code
Implement unit testing and participate in code reviews
Troubleshoot and debug production issues as needed

Required Skills:
Strong programming skills in C#, ASP.NET MVC, and .NET Framework/Core
Hands-on experience with the Razor view engine and HTML/CSS/JavaScript
Experience building and consuming RESTful APIs
Experience with video processing libraries or platforms (e.g., FFmpeg, Azure Media Services, AWS Elemental, etc.)
Experience handling PowerPoint file processing (.ppt/.pptx) - conversion to images/PDF, preview rendering
Familiarity with Entity Framework, LINQ, and SQL Server
Understanding of authentication and authorization mechanisms (e.g., JWT, OAuth)
Experience with Git or other version control systems

Good to Have:
Experience with Azure/AWS for media storage and content delivery
Knowledge of SignalR, Kafka, RabbitMQ, or ActiveMQ for real-time communication (optional)
Familiarity with Blazor, .NET Core, or transitioning legacy apps to .NET Core
Exposure to Agile/Scrum methodologies

Job Types: Full-time, Permanent
Pay: ₹800,000.00 - ₹1,000,000.00 per year
Benefits: Provident Fund
Application Question(s):
Are you currently serving a notice period?
Can you join within 10-15 days?
Can you join immediately?
Experience:
.NET: 3 years (Required)
ASP.NET MVC: 3 years (Required)
AWS: 1 year (Required)
Azure: 1 year (Required)
Work Location: In person
Posted 18 hours ago
6.0 years
0 Lacs
India
Remote
Experience: 6.00+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent Position
(*Note: This is a requirement for one of Uplers' clients - Netskope)

What do you need for this opportunity?
Must-have skills required: Airflow, LLMs, MLOps, Generative AI, Python

Netskope is looking for:

About The Role
Please note, this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based upon their skills and experience.
The Data Engineering team builds and optimizes systems spanning data ingestion, processing, storage optimization, and more. We work closely with engineers and the product team to build highly scalable systems that tackle real-world data problems and provide our customers with accurate, real-time, fault-tolerant solutions to their ever-growing data needs. We support various OLTP and analytics environments, including our Advanced Analytics and Digital Experience Management products.
We are looking for skilled engineers experienced with building and optimizing cloud-scale distributed systems to develop our next-generation ingestion, processing, and storage solutions. You will work closely with other engineers and the product team to build highly scalable systems that tackle real-world data problems. Our customers depend on us to provide accurate, real-time, and fault-tolerant solutions to their ever-growing data needs. This is a hands-on, impactful role that will help lead the development, validation, publishing, and maintenance of logical and physical data models that support various OLTP and analytics environments.

What's In It For You
You will be part of a growing team of renowned industry experts in the exciting space of data and cloud analytics.
Your contributions will have a major impact on our global customer base and across the industry through our market-leading products.
You will solve complex, interesting challenges, and improve the depth and breadth of your technical and business skills.

What You Will Be Doing
Lead the design, development, and deployment of AI/ML models for threat detection, anomaly detection, and predictive analytics in cloud and network security.
Architect and implement scalable data pipelines for processing large-scale datasets from logs, network traffic, and cloud environments.
Apply MLOps best practices to deploy and monitor machine learning models in production.
Collaborate with cloud architects and security analysts to develop cloud-native security solutions leveraging platforms like AWS, Azure, or GCP.
Build and optimize Retrieval-Augmented Generation (RAG) systems by integrating large language models (LLMs) with vector databases for real-time, context-aware applications.
Analyze network traffic, log data, and other telemetry to identify and mitigate cybersecurity threats.
Ensure data quality, integrity, and compliance with GDPR, HIPAA, or SOC 2 standards.
Drive innovation by integrating the latest AI/ML techniques into security products and services.
Mentor junior engineers and provide technical leadership across projects.

Required Skills And Experience
AI/ML Expertise
Proficiency in advanced machine learning techniques, including neural networks (e.g., CNNs, Transformers) and anomaly detection.
Experience with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
Strong understanding of MLOps practices and tools (e.g., MLflow, Kubeflow).
Experience building and deploying Retrieval-Augmented Generation (RAG) systems, including integration with LLMs and vector databases.

Data Engineering
Expertise designing and optimizing ETL/ELT pipelines for large-scale data processing.
Hands-on experience with big data technologies (e.g., Apache Spark, Kafka, Flink).
Proficiency in working with relational and non-relational databases, including ClickHouse and BigQuery.
Familiarity with vector databases such as Pinecone and pgvector, and their application in RAG systems.
Experience with cloud-native data tools like AWS Glue, BigQuery, or Snowflake.

Cloud and Security Knowledge
Strong understanding of cloud platforms (AWS, Azure, GCP) and their services.
Experience with network security concepts, extended detection and response, and threat modeling.

Software Engineering
Proficiency in Python, Java, or Scala for data and ML solution development.
Expertise in scalable system design and performance optimization for high-throughput applications.

Leadership and Collaboration
Proven ability to lead cross-functional teams and mentor engineers.
Strong communication skills to present complex technical concepts to stakeholders.

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 18 hours ago
50.0 years
0 Lacs
Noida
On-site
Who we are:
Irdeto is the world leader in digital platform cybersecurity, empowering businesses to innovate for a secure, connected future. Building on over 50 years of expertise in security, Irdeto's services and solutions protect revenue, enable growth, and fight cybercrime in video entertainment, video games, and connected industries including transport, health, and infrastructure. Irdeto is the security partner dedicated to empowering a secure world where people can connect with confidence. With teams and offices around the world, Irdeto's greatest asset is its people - our diversity is celebrated through an inclusive workplace, where everyone has an equal opportunity to drive innovation and contribute to Irdeto's success.

The Role:
As a Software Engineer, you will join our Video Entertainment team and play a pivotal role in developing and enhancing our solutions and products. You'll work as part of a dynamic, cross-functional team to ensure the seamless delivery of high-quality deliverables. You will work on the latest technologies in the streaming industry, and your expertise will contribute to the innovation and enhancement of our solutions, ensuring our global customers have the best possible experience.

Your mission at Irdeto:
Develop and maintain software applications and services for our OTT platform, ensuring high performance, scalability, and reliability.
Debug, troubleshoot, and resolve software defects and performance issues, ensuring a seamless user experience.
Write clean, efficient, and maintainable code, following coding standards and software development processes.
Stay up to date with industry trends and best practices, and contribute to the continuous improvement of our software development processes.

How you can add value to the team:
Bachelor's degree in Computer Science, Software Engineering, or a related field.
3+ years of experience in backend development with modern frameworks (Node.js, Go, TypeScript, or Java preferred)
Deep understanding of REST APIs, microservices, asynchronous processing, and scalable architectures
Experience with cloud platforms (AWS, GCP, or Azure) and container orchestration (Docker, Kubernetes)
Familiarity with AI/ML pipelines - either integrating ML models into the backend or building services to serve AI functionality
Hands-on experience with databases (SQL and NoSQL), caching, and pub/sub messaging systems (Kafka, RabbitMQ)
Strong grasp of security, performance, and reliability considerations in streaming systems
Excellent communication skills and a passion for collaborative problem-solving

What you can expect from us:
We invest in our talented employees and promote collaboration, creativity, and innovation while supporting health and well-being across our global workforce. In addition to competitive remuneration, we offer:
A multicultural and international environment where diversity is celebrated
Professional education opportunities and training programs
Innovation sabbaticals
Volunteer Day
State-of-the-art office spaces
Additional perks tailored to local offices (e.g., on-site gyms, fresh fruit, parking, yoga rooms, etc.)

Equal Opportunity at Irdeto
Irdeto is proud to be an equal opportunity employer. All decisions are based on qualifications and business needs, and we do not tolerate discrimination or harassment. We welcome applications from individuals with diverse abilities and provide accommodation during the hiring process upon request. If you're excited about this role but don't meet every qualification, we encourage you to apply. We believe diverse perspectives and experiences make our teams stronger. Welcome to Irdeto!
Posted 18 hours ago
3.0 years
15 - 22 Lacs
India
Remote
Experience: 3.00+ years
Salary: INR 1,500,000-2,200,000 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time Permanent position (payroll and compliance to be managed by LINEN.Cloud)
(*Note: This is a requirement for one of Uplers' clients - LINEN.Cloud)

What do you need for this opportunity?
Must-have skills required: Cloud Foundry, Java Web Services, Kafka, RabbitMQ, Spring Boot, Docker, JavaScript, Kubernetes

LINEN.Cloud is looking for: Java Developer
Function: Technical Management → Engineering Management, Software Engineering → Backend Development, Full-Stack Development
Java, Angular, Microservices, React.js, SQL

We are looking for highly skilled developers with experience building web applications and REST APIs in Java. You will collaborate with cross-functional teams to translate business requirements into high-quality, scalable, and maintainable code. The ideal candidate should have a strong foundation in Java development, along with excellent problem-solving skills and a passion for building innovative solutions.

Responsibilities:
Designing, implementing, and unit testing Java applications.
Aligning application design with business goals.
Debugging and resolving technical problems that arise.
Recommending changes to the existing Java infrastructure.
Ensuring continuous professional self-development.

Requirements:
Experience developing and testing Java web services - RESTful (primary), XML, JSON - and supporting integration and enabling access via API calls.
Experience with Tomcat, Apache, and similar web server technologies.
Hands-on experience working with RabbitMQ and Kafka.
Experience with the Spring Boot framework.
Hands-on experience with Angular/Node.js is preferred.
Working knowledge of ELK (Elasticsearch, Logstash, and Kibana) or Solr is a big plus.
Experience with virtualization like Cloud Foundry (PCF), Kubernetes (PKS), Docker, etc., is a big plus.
Agile/Scrum expertise.
Experience establishing and enforcing branching and software development processes and deployment via CI/CD.

Competencies:
Team spirit and strong communication skills
Customer- and service-oriented, with a confident appearance in an international environment
Very high proficiency in English

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 19 hours ago
Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.
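A core idea behind Kafka's real-time processing model is worth understanding before interviews: each keyed record is routed deterministically to a topic partition, which is what gives Kafka per-key ordering and lets consumer groups scale out. The sketch below illustrates that routing logic in plain Python; it is a simplified stand-in, not the official Kafka client (real Kafka producers use murmur2 hashing, while `hashlib.md5` is used here purely for demonstration).

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """Map a record key to a partition index, as a keyed Kafka producer would."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    # Take the first 4 bytes of the hash and reduce modulo the partition count.
    return int.from_bytes(digest[:4], "big") % num_partitions

# Records sharing a key always land on the same partition, which is what
# gives Kafka its per-key ordering guarantee.
print(partition_for("user-42", 6) == partition_for("user-42", 6))  # True
```

Because the mapping depends only on the key and the partition count, all events for one entity (say, one user) arrive in order on one partition, while different keys spread load across the cluster.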
India's major tech hubs are known for their thriving technology industries and have a high demand for Kafka professionals.
The average salary range for Kafka professionals in India varies based on experience levels. Entry-level positions may start at around INR 6-8 lakhs per annum, while experienced professionals can earn between INR 12-20 lakhs per annum.
Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.
In addition to Kafka expertise, employers often look for professionals with skills in: - Apache Spark - Apache Flink - Hadoop - Java/Scala programming - Data engineering and data architecture
As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!