8.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
You will be joining Salesforce, the Customer Company, known for inspiring the future of business by combining AI, data, and CRM technologies. As part of the Marketing AI/ML Algorithms and Applications team, you will play a crucial role in enhancing Salesforce's marketing initiatives by implementing cutting-edge machine learning solutions. Your work will directly impact the effectiveness of marketing efforts, contributing to Salesforce's growth and innovation in the CRM and agentic enterprise space.

As a Lead / Staff Machine Learning Engineer, you will develop and deploy ML model pipelines that drive marketing performance and deliver customer value. Working closely with cross-functional teams, you will lead the design, implementation, and operation of end-to-end ML solutions at scale. Your role will involve establishing best practices, mentoring junior engineers, and ensuring the team remains at the forefront of ML innovation.

Key Responsibilities:
- Define and drive the technical ML strategy, emphasizing robust model architectures and MLOps practices
- Lead end-to-end ML pipeline development, focusing on automated retraining workflows and model optimization
- Implement infrastructure-as-code, CI/CD pipelines, and MLOps automation for model monitoring and drift detection
- Own the MLOps lifecycle, including model governance, testing standards, and incident response for production ML systems
- Establish engineering standards for model deployment, testing, version control, and code quality
- Design and implement monitoring solutions for model performance, data quality, and system health
- Collaborate with cross-functional teams to deliver scalable ML solutions with measurable impact
- Provide technical leadership in ML engineering best practices and mentor junior engineers in MLOps principles

Position Requirements:
- 8+ years of experience in building and deploying ML model pipelines, with a focus on marketing
- Expertise in AWS services, particularly SageMaker, and in MLflow for ML experiment tracking and lifecycle management
- Proficiency in containerization, workflow orchestration, Python programming, ML frameworks, and software engineering best practices
- Experience with MLOps practices, feature engineering, feature store implementations, and big data technologies
- Track record of leading ML initiatives with measurable marketing impact, and strong collaboration skills

Join us at Salesforce to drive transformative business impact and shape the future of customer engagement through innovative AI solutions.
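Model drift detection, one of the MLOps responsibilities listed above, is often implemented as a population stability index (PSI) check between a baseline feature sample and live traffic. A minimal pure-Python sketch follows; the ten-bin layout and the common 0.2 alert threshold are illustrative conventions, not anything the posting specifies.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample.

    Values above roughly 0.2 are commonly treated as significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(xs, i):
        lo_i, hi_i = lo + i * width, lo + (i + 1) * width
        n = sum(1 for x in xs if lo_i <= x < hi_i or (i == bins - 1 and x == hi))
        return max(n / len(xs), 1e-6)  # floor avoids log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.1 * i for i in range(100)]   # stands in for a training-time feature sample
same = list(baseline)                      # no drift
shifted = [x + 5.0 for x in baseline]      # a clear distribution shift
```

In a real pipeline the PSI per feature would be logged to the monitoring stack and a retraining workflow triggered when the threshold is crossed.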
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
haryana
On-site
You should have at least 8 years of experience in backend development using Java. Your responsibilities will include designing, developing, and maintaining Java-based microservices using the Spring Boot framework. You should be proficient in Java 17 or 21, able to design and present in architecture forums, and have an expert-level understanding of event-driven architecture.

You will also be responsible for building RESTful APIs, integrating with external and internal services, and deploying and managing services on the AWS cloud using tools such as EC2, ECS/EKS, Lambda, S3, RDS, and API Gateway. Collaboration with front-end developers, DevOps, and QA teams is essential to deliver high-quality software. You must ensure best practices in code quality, performance, security, and scalability. Participation in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, is expected. Writing unit, integration, and performance tests to ensure code reliability, as well as monitoring, troubleshooting, and optimizing existing services in production, are also part of the role.

Required skills and experience include strong expertise in Spring Boot, Spring Cloud, and building microservices, as well as experience with REST APIs, JSON, and API integration. Good knowledge of AWS services for deployment, storage, and compute is necessary. Familiarity with CI/CD pipelines and tools like Jenkins, Git, and Maven/Gradle is a plus, and an understanding of containerization using Docker and orchestration with Kubernetes is nice to have. Experience with relational and NoSQL databases (e.g., MySQL, PostgreSQL, DynamoDB, MongoDB) and a solid understanding of application performance monitoring and logging tools are also required.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Siemens empowers people to stay resilient and relevant in a constantly changing world, and is seeking individuals who continuously explore creative ways to grow and learn, and who aspire to make a tangible impact, both now and in the future.

As a Software Developer (Java-based) with 4 to 6 years of experience in Core Java, Spring Boot, AWS Lambda, and Node.js, you will play a crucial role in designing software solutions based on requirements and within the constraints of architectural and design guidelines. Your responsibilities include deriving software requirements and software functional specifications, validating software requirements, providing software feasibility analysis, and estimating software effort.

Your role involves accurately translating software architecture into design and code, guiding Scrum team members on design topics, and ensuring implementation consistency against the design and architecture. You will actively participate in coding features and bug-fixing, delivering solutions that adhere to coding and quality guidelines for the components you own. Additionally, you will guide the team in test automation design and support its implementation.

Key Requirements:
- Proficiency in testing frameworks
- Strong knowledge of SQL, Git, and cloud computing
- Familiarity with various AWS services, the Spring Framework, and REST services
- Experience working with Git/Bitbucket

Good-to-have skills: serverless development.

This position is based in Bangalore, with opportunities to travel to other locations in India and beyond. Joining the Smart Grids and Infrastructure team, you will contribute to creating technology that will revolutionize entire industries, cities, and countries. Siemens, with over 379,000 minds in more than 200 countries, is dedicated to equality and welcomes diverse applications that reflect the communities it serves. Employment decisions at Siemens are based on qualifications, merit, and business needs.

Embrace the opportunity to work with teams that are shaping the future and be a part of crafting tomorrow. Discover more about Siemens careers at www.siemens.com/careers.

Benefits:
- Hybrid working opportunities
- An inclusive and diverse culture
- An array of learning and development opportunities
- A competitive compensation package
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Zeta Global is seeking an experienced Senior Product UI Developer to play a crucial role in the development and maintenance of its Data Cloud applications, which transform vast amounts of data signals into impactful marketing and advertising outcomes. As a Senior Product UI Developer at Zeta, you will take the lead in front-end development, focusing in particular on advanced data visualization tools that will help shape the future of Zeta's Data Cloud UI.

Your responsibilities will include leading UI development for essential data visualization applications within the Zeta Data Cloud ecosystem. You will collaborate closely with UI/UX designers, product managers, and other stakeholders to ensure thorough requirements gathering and alignment with project objectives. You should be able to independently research and solve complex challenges, and be proactive in troubleshooting and problem-solving to deliver efficient and effective solutions for new and evolving requirements.

Implementing responsive, cross-device layouts with a strong emphasis on usability and performance will be a key aspect of your role. You will write clean, concise, efficient, and reusable code, primarily using React. You will also partner with backend and other application developers to ensure smooth integration and alignment across all dependencies. As a subject matter expert in React-based UI development, you will stay informed about design trends, emerging technologies, and best practices to drive continuous improvement. Your role will involve leading thorough documentation during and after project completion to guarantee clarity and reusability for future projects, and collaborating with QA teams to maintain high standards for each release.

The ideal candidate should possess strong proficiency in React.js and TypeScript, with a solid understanding of their core principles. In-depth knowledge of JavaScript, CSS, SCSS, HTML, and other front-end technologies is essential. Experience with data visualization libraries such as amCharts, Chart.js, and D3.js, as well as a practical understanding of MySQL databases, RESTful APIs, AWS services, modern authorization methods, and version control systems like Git, is required. Knowledge of the MVC development model is crucial, and familiarity with testing frameworks like Mocha, Jest, and Robot would be beneficial. Experience with Vue.js is optional. A Bachelor's degree in Computer Science, Software Engineering, or a related field, along with a minimum of 4 years of hands-on experience in React development, is necessary.

Zeta Global offers an exciting opportunity to work on a product that has been recognized as a Leader in the Forrester Wave™. You will be part of a dynamic work environment that fosters high-velocity professional growth and encourages decision-making at all levels. Zeta has a strong history of innovation in the marketing and advertising industry, giving you the chance to work on cutting-edge innovations and industry challenges.

Founded in 2007 by David A. Steinberg and John Sculley, Zeta Global is a prominent data-driven marketing technology company. Its SaaS-based marketing cloud, the Zeta Marketing Platform (ZMP), empowers over 500 Fortune 1000 and mid-market brands to acquire, retain, and grow customer relationships through actionable data, advanced analytics, artificial intelligence, and machine learning. To learn more about Zeta Global, visit https://zetaglobal.com/about/our-story/.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
kolkata, west bengal
On-site
As a Sales Manager for the assigned zone, your primary responsibility will be to develop and implement effective sales strategies to achieve set targets. This includes aligning regional sales strategies with company and AWS goals, identifying market trends, and setting clear sales targets for your team. Additionally, you will be responsible for leading and developing a team of sales managers and executives, conducting performance reviews, and ensuring team alignment with company values and sales objectives. In terms of sales operations management, you will oversee day-to-day operations within the zone, monitor sales pipelines, forecasts, and budget allocations. Collaboration with other departments such as marketing and finance will be essential to ensure the smooth execution of sales activities. Furthermore, working closely with AWS teams to align on marketing strategies, promotional initiatives, and sales targets will be crucial for success in this role. Building and maintaining relationships with key clients, handling high-level negotiations, and ensuring customer satisfaction and retention will be vital aspects of customer relationship management. You will also be responsible for tracking and reporting on sales metrics, challenges, pipeline status, and market feedback to senior management. Collaboration with marketing teams to develop localized campaigns and promotions will also be part of your role. Ensuring compliance with company policies, industry regulations, and legal requirements, as well as preparing and presenting regular sales performance reports to senior management, will be key responsibilities. Maintaining accurate records of sales performance, customer feedback, and market data will also be essential in this role. 
Qualifications:
- Bachelor's degree in Business Administration, Marketing, or IT (MBA preferred)
- 8+ years of proven experience in IT cloud sales
- Experience managing a large team of sales professionals across multiple locations
- AWS Certified Solutions Architect (Associate or Professional)
- AWS Certified Cloud Practitioner

Skills Required:
- Excellent leadership and team management skills
- Strong negotiation, communication, and interpersonal skills
- Ability to think strategically and drive operational excellence
- Analytical mindset with the ability to interpret sales data and make data-driven decisions
- Proficiency in CRM software and other sales management tools
- Advanced knowledge of AWS services, cloud deployment models, and cloud cost management
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
You will be responsible for implementing, managing, and supporting web-based applications that support the Dental Network Administration, Quality Management Program, Clinical AI Review DB, and Cactus. Your role will involve leading the offshore technical team, developing applications and enhancements, and collaborating with onshore technology teams. Your expertise in Microsoft technologies will be crucial, including ASP.NET, .NET Core, RESTful APIs, WCF, MVC, MSSQL, Oracle SQL Server, jQuery, JavaScript, CI/CD, Jenkins, ServiceNow, JIRA, Confluence, React and Angular (basics), Launchpad, AWS services (basics), Swagger, PostgreSQL, and Hive and Presto. You must have hands-on experience in building and consuming RESTful API services and in creating queries, stored procedures, and packages for relational database management systems.

In addition, you should possess detailed knowledge of design patterns and of architectures connecting multiple applications through RESTful API services. Experience working in an onshore/offshore model and a proven track record of successful project delivery are required. You will be the go-to expert for web and mobile application implementations, actively leading technical teams, participating in PI planning events, managing risks and dependencies, and fostering Communities of Practice. Your responsibilities will also include managing existing and new software vendor relationships in compliance with Guardian security policies, and providing technical guidance and mentoring to less experienced team members.

This position can be based in Chennai or Gurgaon. If you are a current Guardian colleague, please apply through the internal Jobs Hub in Workday.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Full Stack Developer at our company in Noida, you will be responsible for designing, developing, and maintaining scalable backend systems, crafting responsive user interfaces, and managing robust cloud infrastructure. With a focus on Django (Python), ReactJS, and AWS deployments, you will play a key role in optimizing the performance, scalability, and security of our applications.

On the backend, you will design RESTful APIs using Django and Django REST Framework, write modular and testable code, and optimize backend services. On the frontend, you will build interactive UIs using ReactJS, translate design wireframes into high-quality code, and manage state using Redux or the Context API. Additionally, you will deploy and manage applications on AWS, implement CI/CD pipelines, monitor application performance, design and manage databases, implement caching mechanisms for performance, follow security best practices, and collaborate with cross-functional teams to deliver features.

To be successful in this role, you should have at least 4 years of hands-on development experience; proficiency in Python, Django/Django REST Framework, ReactJS, JavaScript, HTML5, and CSS3; and a strong understanding of AWS services and cloud architecture. Experience with Git, RESTful APIs, microservices, Docker, and container orchestration is desired. Exposure to Agile/Scrum development environments and familiarity with Terraform, AWS CloudFormation, GraphQL, analytics dashboards, and testing frameworks are considered a plus.

Joining our team will give you the opportunity to work on impactful projects, contribute to architecture and decision-making, and be part of a friendly, collaborative, and high-performance work culture. We offer flexibility in work location and hours where applicable, making this an attractive opportunity for professionals seeking a dynamic work environment.
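The "caching mechanisms for performance" mentioned above would normally be Redis or Memcached behind Django's cache framework; the underlying idea can be sketched with a small in-process TTL cache decorator. This is a minimal pure-Python illustration, and the function and parameter names are made up for the example.

```python
import time
from functools import wraps

def ttl_cache(seconds):
    """Cache a function's results for `seconds`, keyed by its positional arguments."""
    def decorator(fn):
        store = {}  # key -> (expires_at, value)

        @wraps(fn)
        def wrapper(*args):
            hit = store.get(args)
            now = time.monotonic()
            if hit and hit[0] > now:
                return hit[1]          # fresh cache hit: skip the expensive call
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cache(seconds=60)
def expensive_lookup(user_id):
    calls.append(user_id)              # stands in for a slow DB or API query
    return {"user_id": user_id}
```

Calling `expensive_lookup(1)` twice within the TTL performs the underlying lookup only once; a production Django view would achieve the same with `django.core.cache` so the cache is shared across processes.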
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
hyderabad, telangana
On-site
We are looking for an experienced and dedicated Senior Manager of Business Intelligence & Data Engineering to lead a team of engineers. In this role, you will oversee the Business Intelligence (BI) ecosystem, including designing and maintaining data pipelines, enabling advanced analytics, and providing actionable insights through BI tools and data visualization.

Your responsibilities will include leading the design and development of scalable data architectures on AWS, managing data lakes, implementing data modeling and productization, collaborating with business stakeholders to create actionable insights, ensuring thorough documentation of data pipelines and systems, promoting knowledge sharing within the team, and staying current with industry trends in data engineering and BI.

You should have at least 10 years of experience in data engineering or a related field, with a strong track record of designing and implementing large-scale distributed data systems. You should also possess expertise in BI, data visualization, people management, CI/CD tools, cloud-based data warehousing, AWS services, data lake architectures, Apache Spark, SQL, enterprise BI platforms, and microservices-based architectures. Strong communication skills, a collaborative mindset, and the ability to deliver insights to technical and executive audiences are essential for this role.

Bonus points if you have knowledge of data science and machine learning concepts, experience with Infrastructure as Code practices, familiarity with data governance and security in cloud environments, and domain understanding of apparel, retail, manufacturing, supply chain, or logistics.

If you are passionate about leading a high-performing team, driving innovation in data engineering and BI, and contributing to the success of a global sports platform like Fanatics Commerce, we welcome you to apply for this exciting opportunity.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
kochi, kerala
On-site
As a Senior Java Developer, you will be required to communicate technical concepts effectively in English, both verbally and in writing. With over 6 years of commercial Java experience, you should demonstrate the ability to write efficient, testable, and maintainable Java code, following best practices and patterns for the implementation, build, and deployment of Java services.

Your expertise should extend across the Java ecosystem and related technologies, including Spring Boot, the Spring frameworks, Hibernate, and Maven. Proficiency in Test-Driven Development (TDD) and exposure to Behavior-Driven Development (BDD) are essential. Familiarity with version control tools like Git, project management tools such as JIRA and Confluence, and continuous integration tools like Jenkins is expected. You should have a solid background in building RESTful services within microservices architectures and working in cloud-based environments, preferably AWS. Knowledge of both NoSQL and relational databases, especially PostgreSQL, is crucial. Experience in developing services using event- or stream-based systems like SQS, Kafka, or Pulsar, and knowledge of CQRS principles, is desirable. A strong foundation in Computer Science fundamentals and software patterns is necessary for this role. Additionally, experience with AWS services such as Lambda, SQS, S3, and Rekognition Face Liveness, as well as familiarity with Camunda BPMN, would be advantageous.

This is a senior-level position offering a competitive salary ranging from 25 to 40 LPA. If you meet these qualifications and are eager to contribute your skills to a dynamic team, we encourage you to apply for this Senior Java Developer role.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We are seeking an experienced Databricks on AWS and PySpark engineer to join our team. Your role will involve designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS, and optimizing data processing workflows with PySpark. Collaborating with data scientists and analysts to develop data models, and ensuring data quality, security, and compliance with industry standards, will also be key responsibilities.

Your main tasks will include troubleshooting data pipeline issues, optimizing performance, and staying current with industry trends and emerging data engineering technologies. You should have at least 3 years of experience in data engineering with a focus on Databricks on AWS and PySpark, strong expertise in PySpark and Databricks for data processing, modeling, and warehousing, and hands-on experience with AWS services such as S3, Glue, and IAM. Proficiency in data engineering principles, data governance, and data security is essential, along with experience managing data processing workflows and pipelines. Strong problem-solving skills, attention to detail, and effective communication and collaboration abilities are key soft skills for this role, as is the ability to work in a fast-paced, dynamic environment while adapting to changing requirements and priorities.
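A typical data-quality task on such a pipeline is a gate that computes per-column null rates and fails the batch above a threshold. The pattern is shown here in plain Python for brevity; on Databricks it would be the equivalent PySpark aggregation (noted in the docstring). The column names and the 10% threshold are illustrative assumptions.

```python
def null_rates(rows):
    """Per-column fraction of None values across a batch of records.

    Mirrors the PySpark pattern of counting F.when(F.col(c).isNull(), c)
    per column and dividing by the row count.
    """
    if not rows:
        return {}
    columns = rows[0].keys()
    return {
        c: sum(1 for r in rows if r.get(c) is None) / len(rows)
        for c in columns
    }

def quality_gate(rows, max_null_rate=0.10):
    """Return the columns whose null rate exceeds the threshold (empty list = pass)."""
    return sorted(c for c, rate in null_rates(rows).items() if rate > max_null_rate)

batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": None},
    {"order_id": None, "amount": 3.5},
    {"order_id": 4, "amount": 8.0},
]
```

In a production job, a non-empty result from the gate would abort the write and alert the on-call engineer rather than let bad data propagate downstream.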
Posted 1 week ago
2.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
This role requires you to lead the design and development of Global Supply Chain Analytics applications and to support applications from other domains that use supply chain data. You will be responsible for hands-on management of applications in Supply Chain Analytics and the wider Operations domain. As a senior specialist in Supply Chain Data & Analytics, you will drive the deliverables for important digital initiatives contributing to strategic priorities. Your role will involve leading multiple projects and digital products, collaborating with team members both internally and externally, and interacting with global business and IT stakeholders to ensure successful solution delivery with standard designs in line with industry best practices.

Your responsibilities will include designing and managing the development of modular, reusable, elegantly designed, and maintainable software solutions that support the Supply Chain organization and other cross-functional strategic initiatives. You will participate in fit-gap workshops with business stakeholders, provide effort estimates and solution proposals, and develop and maintain code repositories while responding rapidly to bug reports and security vulnerability issues. Collaboration with colleagues across departments such as Security, Compliance, Engineering, Project Management, and Product Management will be essential. You will also drive data enablement and build digital products, delivering solutions aligned with business priorities and in coordination with technology architects. Contributing to AI/ML initiatives, data quality improvement, business process simplification, and other strategic pillars will be part of your role. Ensuring that delivered solutions adhere to architectural and development standards and best practices, and meet requirements as recommended in the architecture handbook, will be crucial. You will also be responsible for aligning designed solutions with Data and Analytics strategy standards and roadmap, and for providing status reporting to product owners and IT management.

To be successful in this role, you should have a minimum of 8 years of data and analytics experience in a professional environment, with expertise in building applications across platforms. You should also have experience in delivery management, customer-facing IT roles, machine learning, SAP BW on HANA and/or S/4HANA, and cloud platforms. Strong data engineering fundamentals in data management, data analysis, and back-end system design are required, along with hands-on exposure to Data & Analytics solutions, including predictive and prescriptive analytics.

Key skills for this role include collecting and interpreting requirements; understanding Supply Chain business processes and KPIs; domain expertise in the pharma and/or healthcare industry; excellent communication and problem-solving skills; knowledge of machine learning and analytical tools; familiarity with Agile and Waterfall delivery concepts; proficiency with tools such as Jira, Confluence, GitHub, and SAP Solution Manager; and hands-on experience with technologies such as AWS services, Python, Power BI, SAP Analytics, and more. The ability to learn new technologies and functional topics quickly is essential.

Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities it serves. If you are passionate about making a difference in the lives of others and are ready to collaborate, support, and inspire breakthroughs, this role offers an opportunity to create a brighter future together.
Posted 1 week ago
7.0 - 9.0 years
27 - 37 Lacs
Pune
Hybrid
Responsibilities may include the following, and other duties may be assigned:
- Develop and maintain robust, scalable data pipelines and infrastructure automation workflows using GitHub, AWS, and Databricks.
- Implement and manage CI/CD pipelines using GitHub Actions and GitLab CI/CD for automated infrastructure deployment, testing, and validation.
- Deploy and manage Databricks LLM Runtime or custom Hugging Face models within Databricks notebooks and model serving endpoints.
- Manage and optimize cloud infrastructure costs, usage, and performance through tagging policies, right-sizing EC2 instances, storage tiering strategies, and auto-scaling.
- Set up infrastructure observability and performance dashboards using AWS CloudWatch for real-time insights into cloud resources and data pipelines.
- Develop and manage Terraform or CloudFormation modules to automate infrastructure provisioning across AWS accounts and environments.
- Implement and enforce cloud security policies, IAM roles, encryption mechanisms (KMS), and compliance configurations.
- Administer Databricks workspaces, clusters, access controls, and integrations with cloud storage and identity providers.
- Enforce DevSecOps practices for infrastructure-as-code, ensuring all changes are peer-reviewed, tested, and compliant with internal security policies.
- Coordinate cloud software releases, patching schedules, and vulnerability remediation using Systems Manager Patch Manager.
- Automate AWS housekeeping and operational tasks such as:
  - Cleanup of unused EBS volumes, snapshots, and old AMIs
  - Rotation of secrets and credentials using Secrets Manager
  - Log retention enforcement using S3 lifecycle policies and CloudWatch log groups
- Perform incident response, disaster recovery planning, and post-mortem analysis for operational outages.
- Collaborate with cross-functional teams, including data scientists, data engineers, and other stakeholders, to gather and implement infrastructure and data requirements.
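The EBS cleanup item above usually starts with a selection step: decide which volumes are safe to delete. A sketch of that logic follows, operating on records shaped like the entries boto3's `ec2.describe_volumes()` returns; the 30-day age cutoff and the `keep=true` tag escape hatch are illustrative assumptions, and the real script would pass the resulting IDs to `delete_volume`.

```python
from datetime import datetime, timedelta, timezone

def stale_volume_ids(volumes, min_age_days=30, now=None):
    """IDs of unattached EBS volumes older than the cutoff and not tagged keep=true.

    `volumes` mirrors boto3's ec2.describe_volumes()['Volumes'] records.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=min_age_days)
    stale = []
    for v in volumes:
        tags = {t["Key"]: t["Value"] for t in v.get("Tags", [])}
        if (
            v["State"] == "available"          # "available" = not attached to any instance
            and v["CreateTime"] < cutoff
            and tags.get("keep", "").lower() != "true"
        ):
            stale.append(v["VolumeId"])
    return stale

fixed_now = datetime(2024, 6, 1, tzinfo=timezone.utc)
volumes = [
    {"VolumeId": "vol-old", "State": "available",
     "CreateTime": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"VolumeId": "vol-new", "State": "available",
     "CreateTime": datetime(2024, 5, 30, tzinfo=timezone.utc)},
    {"VolumeId": "vol-attached", "State": "in-use",
     "CreateTime": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"VolumeId": "vol-kept", "State": "available",
     "CreateTime": datetime(2023, 1, 1, tzinfo=timezone.utc),
     "Tags": [{"Key": "keep", "Value": "true"}]},
]
```

Keeping the selection pure and side-effect free makes it easy to unit-test and to run in a dry-run mode before wiring in the actual delete calls.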
Required Knowledge and Experience:
- 8+ years of experience in DataOps / CloudOps / DevOps roles, with a strong focus on infrastructure automation, data pipeline operations, observability, and cloud administration.
- Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation) for building automation scripts for AWS resource cleanup, tagging enforcement, monitoring, and backups.
- Hands-on experience integrating and operationalizing LLMs in production pipelines, including prompt management, caching, token tracking, and post-processing.
- Deep hands-on experience with AWS services, including:
  - Core: EC2, S3, RDS, CloudWatch, IAM, Lambda, VPC
  - Data services: Athena, Glue, MSK, Redshift
  - Security: KMS, IAM, Config, CloudTrail, Secrets Manager
  - Operational: Auto Scaling, Systems Manager, CloudFormation/Terraform
  - Machine learning/AI: Bedrock, SageMaker, OpenSearch Serverless
- Working knowledge of Databricks, including cluster and workspace management, job orchestration, and integration with AWS storage and identity (IAM passthrough).
- Experience deploying and managing CI/CD workflows using GitHub Actions, GitLab CI, or AWS CodePipeline.
- Strong understanding of cloud networking, including VPC peering, Transit Gateway, security groups, and PrivateLink setup.
- Familiarity with container orchestration platforms (e.g., Kubernetes, ECS) for deploying platform tools and services.
- Strong understanding of data modeling, data warehousing concepts, and AI/ML lifecycle management.
- Knowledge of cost optimization strategies across compute, storage, and network layers.
- Experience with data governance, logging, and compliance practices in cloud environments (e.g., SOC 2, HIPAA, GDPR).
- Bonus: exposure to LangChain, prompt engineering frameworks, Retrieval-Augmented Generation (RAG), and vector database integration (AWS OpenSearch, Pinecone, Milvus, etc.).
Preferred Qualifications:
- AWS Certified Solutions Architect, DevOps Engineer, or SysOps Administrator certifications.
- Hands-on experience with multi-cloud environments, particularly Azure or GCP, in addition to AWS.
- Experience with infrastructure cost management tools such as AWS Cost Explorer or FinOps dashboards.
- Ability to write clean, production-grade Python code for automation scripts, operational tooling, and custom CloudOps utilities.
- Prior experience supporting high-availability production environments with disaster recovery and failover architectures.
- Understanding of Zero Trust architecture and security best practices in cloud-native environments.
- Experience with automated cloud resource cleanup, tagging enforcement, and compliance-as-code using tools like Terraform Sentinel.
- Familiarity with Databricks Unity Catalog, access control frameworks, and workspace governance.
- Strong communication skills and experience working in agile cross-functional teams, ideally with Data Product or Platform Engineering teams.

If interested, please share the details below at ashwini.ukekar@medtronic.com:
Name:
Total Experience:
Relevant Experience:
Current CTC:
Expected CTC:
Notice Period:
Current Company:
Current Designation:
Current Location:

Regards,
Ashwini Ukekar
Sourcing Specialist
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
A career at HARMAN Automotive offers you the opportunity to be part of a global, multi-disciplinary team dedicated to harnessing the power of technology to shape the future. We empower you to accelerate your professional growth and make a difference by: - Engineering cutting-edge audio systems and integrated technology platforms that enhance the driving experience. - Fostering innovation through collaborative efforts that combine in-depth research, design excellence, and engineering prowess. - Driving advancements in in-vehicle infotainment, safety, efficiency, and overall enjoyment for users. About The Role: We are looking for a skilled Python Backend Developer with 3 to 6 years of experience in building scalable and secure backend systems using AWS services. In this role, you will be instrumental in: - Designing and implementing microservices architecture and cloud-native solutions. - Integrating diverse data sources into a unified system to ensure data consistency and security. What You Will Do: Your responsibilities will include: - Backend Development: Creating scalable backend systems using Python and frameworks like Flask or Django. - Microservices Architecture: Developing and deploying microservices-based systems with AWS services like SQS, Step Functions, and API Gateway. - Cloud-Native Solutions: Building cloud-native solutions utilizing AWS services such as Lambda, CloudFront, and IAM. - Data Integration: Integrating multiple data sources into a single system while maintaining data integrity. - API Development: Designing and implementing RESTful/SOAP APIs using API Gateway and AWS Lambda. What You Need To Be Successful: To excel in this role, you should possess: - Technical Skills: Proficiency in Python backend development, JSON data handling, and familiarity with AWS services. - AWS Services: Knowledge of various AWS services including SQS, Step Functions, IAM, CloudFront, and API Gateway. 
- Security and Authentication: Understanding of identity management, authentication protocols like OAuth 2.0 and OIDC. - Data Management: Experience with ORM frameworks like SQLAlchemy or Django ORM. - Collaboration and Testing: Ability to collaborate effectively and work independently when needed, along with familiarity with testing tools. Bonus Points if You Have: Additional experience with AWS ECS, VPC, serverless computing, and DevOps practices would be advantageous. What Makes You Eligible: We are looking for individuals with relevant experience in backend development, strong technical expertise, problem-solving abilities, and effective collaboration and communication skills. What We Offer: Join us for a competitive salary and benefits package, opportunities for professional growth, a dynamic work environment, access to cutting-edge technologies, recognition for outstanding performance, and the chance to collaborate with a renowned German OEM. You Belong Here: At HARMAN, we value diversity, inclusivity, and empowerment. We encourage you to share your ideas, voice your perspective, and be yourself in a supportive culture that celebrates uniqueness. We are committed to your ongoing learning and development, providing training and education opportunities for you to thrive in your career. About HARMAN: With a legacy of innovation dating back to the 1920s, HARMAN continues to redefine technology across automotive, lifestyle, and digital transformation solutions. Our portfolio of iconic brands delivers exceptional experiences, setting new standards in engineering and design for our customers and partners worldwide. If you are ready to drive innovation and create lasting impact, we invite you to join our talent community at HARMAN Automotive.,
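The OAuth 2.0 / OIDC requirement above centers on token handling. A minimal sketch of decoding a JWT's claims segment with only the standard library (the token contents are fabricated for illustration; a real service must also verify the signature against the issuer's keys, e.g. with a JOSE library):

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the claims segment of a JWT (header.payload.signature).
    NOTE: this does NOT verify the signature - never trust unverified claims."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a toy token to demonstrate decoding; values are illustrative.
claims = {"sub": "user-123", "iss": "https://example-idp/"}
seg = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = "eyJhbGciOiJSUzI1NiJ9." + seg + ".sig"
print(decode_jwt_payload(token)["sub"])  # user-123
```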
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
The role involves providing production support for trading applications and requires the candidate to be comfortable with working in a rotational shift (7 AM - 4 PM / 11 AM - 8 PM / 1 PM - 10 PM). The applications have transitioned from on-premises to AWS cloud, necessitating strong experience in AWS services such as EC2, S3, and Kubernetes. Monitoring overnight batch jobs is also a key responsibility. Key Requirements: - Proficiency in AWS services like EC2, S3, Kubernetes, CloudWatch, etc. - Familiarity with monitoring tools like Datadog, Grafana, Prometheus. Good to have: - Basic understanding of SQL. - Experience in utilizing Control-M/Autosys schedulers.,
Posted 1 week ago
5.0 - 13.0 years
0 Lacs
pune, maharashtra
On-site
We are seeking talented and experienced individuals to join our engineering team in the roles of Staff Development Engineer and Senior Software Development Engineer (SDE 3). As a member of our team, you will be responsible for taking ownership of complex projects, designing and constructing high-performance, scalable systems. In the role of SDE 3, you will play a crucial part in ensuring that the solutions we develop are not only robust but also efficient. This is a hands-on position that requires you to lead projects from concept to deployment, ensuring the delivery of top-notch, production-ready code. Given the fast-paced environment, strong problem-solving skills and a dedication to crafting exceptional software are indispensable. Your responsibilities will include: - Developing high-quality, secure, and scalable enterprise-grade backend components in alignment with technical requirements specifications and design artifacts within the expected time and budget. - Demonstrating a proficient understanding of the choice of technology and its application, supported by thorough research. - Identifying, troubleshooting, and ensuring the timely resolution of software defects. - Participating in functional specification, design, and code reviews. - Adhering to established practices for the development and upkeep of application code. - Taking an active role in reducing technical debt across our various codebases. We are looking for candidates with the following qualifications: - Proficiency in Python programming and frameworks such as Flask/FastAPI. - Prior experience in constructing REST API-based microservices. - Excellent knowledge and hands-on experience with RDBMS (e.g., MySQL, PostgreSQL), message brokers, caching, and queueing systems. - Experience with NoSQL databases is preferred. - An aptitude for research and development, exploring new topics and use cases. - Hands-on experience with AWS services like EC2, SQS, Fargate, Lambda, and S3. 
- Knowledge of Docker for application containerization. - Cybersecurity knowledge is considered advantageous. - Strong technical background with the ability to swiftly adapt to emerging technologies. - Desired experience: 5-13 years in Software Engineering for Staff or SDE 3 roles. Working Conditions: This role necessitates full-time office-based work; remote work arrangements are not available. Company Culture: At Fortinet, we uphold a culture of innovation, collaboration, and continuous learning. We are dedicated to fostering an inclusive environment where every employee is valued and respected. We encourage applications from individuals of all backgrounds and identities. Our competitive Total Rewards package is designed to assist you in managing your overall health and financial well-being. We also offer flexible work arrangements and a supportive work environment. If you are looking for a challenging, fulfilling, and rewarding career journey, we invite you to contemplate joining us and contributing solutions that have a meaningful and enduring impact on our 660,000+ global customers.,
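The caching experience this posting asks for often starts with a per-call cache in front of an expensive lookup. A minimal in-process TTL cache sketch (a stand-in for an external cache such as Redis; the `lookup` function and its return shape are invented for illustration):

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=60.0, clock=time.monotonic):
    """Cache results per argument tuple for ttl_seconds."""
    def decorator(fn):
        store = {}  # args -> (value, timestamp)
        @wraps(fn)
        def wrapper(*args):
            now = clock()
            hit = store.get(args)
            if hit and now - hit[1] < ttl_seconds:
                return hit[0]          # fresh: serve from cache
            value = fn(*args)          # stale or missing: recompute
            store[args] = (value, now)
            return value
        return wrapper
    return decorator

calls = []  # track how often the real backend is hit

@ttl_cache(ttl_seconds=30)
def lookup(order_id):
    calls.append(order_id)
    return {"order": order_id, "status": "filled"}

lookup("A1"); lookup("A1")
print(len(calls))  # 1 -- second call was served from cache
```

Injecting the clock makes expiry behavior easy to unit-test without sleeping.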
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
The Senior Software Developer (MERN) position in Noida requires 5-8 years of experience; only candidates who can join immediately or within 15 days will be considered. As a Senior Software Developer, you will be responsible for building the eCourt application using ReactJS, Node.js, MongoDB, and a microservices-based architecture. Your role will involve collaborating within a high-performing team to deliver quality code and contribute to building scalable, secure applications. Your focus will be on performance, code quality, and customer satisfaction. You should have 5+ years of hands-on experience in MERN Stack development and hold a B.Tech / MCA degree (M.Tech preferred). Essential skills include expertise in Node.js, MongoDB, ReactJS, React Native, RESTful APIs, microservices architecture, API design, and server-side technologies. Proficiency in containerization (e.g., Docker), orchestration (e.g., Kubernetes), Git, and strong communication skills are essential. Additionally, experience with AWS services, international coding standards, DevOps tools, problem-solving, and analytical capabilities are preferred. In this role, you will collaborate with product management and development teams, engage with customers, and drive project execution. Your routine responsibilities will include developing features, writing clean and testable code, supporting team members, ensuring best practices, maintaining backend services, and participating in all phases of the software development lifecycle. If you are passionate about MERN Stack development and possess the required skills and experience, please send your resume to sunidhi.manhas@portraypeople.com.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Data Platform developer at Barclays, you will play a crucial role in shaping the digital landscape and enhancing customer experiences. Leveraging cutting-edge technology, you will work alongside a team of engineers, business analysts, and stakeholders to deliver high-quality solutions that meet business requirements. Your responsibilities will include tackling complex technical challenges, building efficient data pipelines, and staying updated on the latest technologies to continuously enhance your skills. To excel in this role, you should have hands-on coding experience in Python, along with a strong understanding and practical experience in AWS development. Experience with tools such as Lambda, Glue, Step Functions, IAM roles, and various AWS services will be essential. Additionally, your expertise in building data pipelines using Apache Spark and AWS services will be highly valued. Strong analytical skills, troubleshooting abilities, and a proactive approach to learning new technologies are key attributes for success in this role. Furthermore, experience in designing and developing enterprise-level software solutions, knowledge of different file formats like JSON, Iceberg, Avro, and familiarity with streaming services such as Kafka, MSK, Kinesis, and Glue Streaming will be advantageous. Effective communication and collaboration skills are essential to interact with cross-functional teams and document best practices. Your role will involve developing and delivering high-quality software solutions, collaborating with various stakeholders to define requirements, promoting a culture of code quality, and staying updated on industry trends. Adherence to secure coding practices, implementation of effective unit testing, and continuous improvement are integral parts of your responsibilities. 
As a Data Platform developer, you will be expected to lead and supervise a team, guide professional development, and ensure the delivery of work to a consistently high standard. Your impact will extend to related teams within the organization, and you will be responsible for managing risks, strengthening controls, and contributing to the achievement of organizational objectives. Ultimately, you will be part of a team that upholds Barclays' values of Respect, Integrity, Service, Excellence, and Stewardship, while embodying the Barclays Mindset of Empower, Challenge, and Drive in your daily interactions and work ethic.
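Pipelines over JSON data of the kind described above usually include a normalization stage. A toy sketch with only the standard library (field names and the snake_case/null-dropping rules are illustrative assumptions, not a real Barclays schema):

```python
import json

def normalize(record):
    """One pipeline stage: coerce field names to snake_case and drop nulls."""
    return {k.strip().lower().replace(" ", "_"): v
            for k, v in record.items() if v is not None}

# Input as JSON Lines, a common landing format for file-based pipelines.
raw_lines = [
    '{"Account ID": "X1", "Balance": 250.0, "Branch": null}',
    '{"Account ID": "X2", "Balance": 99.5, "Branch": "LDN"}',
]
cleaned = [normalize(json.loads(line)) for line in raw_lines]
print(cleaned[0])  # {'account_id': 'X1', 'balance': 250.0}
```

In a Spark or Glue job the same per-record function would typically run inside a `map` over a distributed dataset.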
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
jaipur, rajasthan
On-site
This position is responsible for implementing and supporting AWS cloud infrastructure. As a seasoned individual with hands-on experience in IT infrastructure management on AWS, Infrastructure as Code, and Container Orchestration, you will be supporting critical business applications. You must possess the following skills: - Proficiency in AWS Services such as EC2, Load Balancers, S3, KMS, SNS, SES, CloudWatch, CloudFront, Backup, RDS, DynamoDB, DocumentDB, NACL, Security Groups, VPC Peering, Transit Gateway, GuardDuty, and Athena. - Knowledge of Linux, including Debian/Ubuntu OS, Installation of packages, Cron Jobs, Troubleshooting, and User Management. - Understanding of Security principles, including cryptography. - Proficiency in scripting with Bash or Python. - Experience with Infrastructure as Code using Terraform and CloudFormation. - Knowledge of Containers & Orchestration with Kubernetes and Docker. Additionally, the following skills are considered beneficial: - Excellent interpersonal and communication skills, both written and verbal. - Proven ability to work in a team-oriented, collaborative environment. - Strong analytical and problem-solving abilities. - Strategic thinking and the ability to influence and build consensus. - Action-oriented with the ability to drive results. - Effective facilitation of team and stakeholder meetings. - Timely resolution of issues and escalation when necessary. - Tactful communication of difficult/sensitive information. Job Responsibilities include: - Setting up and building AWS infrastructure for VPC, EC2, S3, EBS, ELB, Security Group, and RDS. - Working with IAM to create new users, roles, groups, and policies. - Designing and implementing using VPC and managing security with IAM Policies, Security Groups, and NACL. - Creating backups and restoring from snapshots using backup plans. - Encrypting EBS Volumes and Snapshots using KMS. - Ensuring network security through security group and NACL hardening. 
- Managing S3 bucket storage requirements, security, and replication. - Encrypting S3 buckets and EFS using KMS. - Working with CloudTrail and AWS Config for compliance. - Conducting vulnerability assessments and security auditing. - Providing client-specific reports. - Hands-on experience with infrastructure, networks, and servers. - Container management, Dockerfile creation, and optimization. - Requirements gathering, planning, and designing AWS services. - Operating and maintaining critical AWS applications and databases. - Resolving debugging issues with microservices and distributed systems. - Creation, management, and review of Change Requests. - System performance monitoring and optimization. - Implementing architecture designs for high availability, scalability, and disaster recovery. - Configuring, managing, and troubleshooting Kubernetes clusters and containers. - Writing and maintaining Infrastructure as Code using Terraform and CloudFormation.
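The security-group hardening mentioned above boils down to scanning ingress rules for risky exposure. A sketch of that check over plain dictionaries (the rule shape loosely mirrors, but is not, the EC2 `describe_security_groups` response; the port list is an assumption):

```python
# Flag ingress rules that open sensitive ports to the whole internet.
SENSITIVE_PORTS = {22, 3389}  # SSH, RDP -- commonly restricted ports

def open_to_world(rules):
    """Return (group, port) pairs exposed to 0.0.0.0/0 on sensitive ports."""
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in SENSITIVE_PORTS:
            findings.append((rule["group"], rule["port"]))
    return findings

rules = [
    {"group": "sg-web", "port": 443, "cidr": "0.0.0.0/0"},    # fine: public HTTPS
    {"group": "sg-admin", "port": 22, "cidr": "0.0.0.0/0"},   # risky: open SSH
    {"group": "sg-db", "port": 5432, "cidr": "10.0.0.0/16"},  # fine: internal only
]
print(open_to_world(rules))  # [('sg-admin', 22)]
```

In practice the input would come from the AWS API or from AWS Config rules rather than a hand-built list.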
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Zeta Global is currently seeking an experienced Senior Product UI Developer to take charge of the development and upkeep of our Data Cloud applications. These applications play a pivotal role in converting vast amounts of data signals into impactful outcomes in marketing and advertising. As a Senior Product UI Developer at Zeta Global, you will spearhead the front-end development of advanced data visualization tools and contribute to shaping the future of Zeta's Data Cloud UI. Your responsibilities will include leading the UI development for crucial data visualization applications within the Zeta Data Cloud ecosystem, collaborating closely with UI/UX designers, product managers, and other stakeholders to ensure thorough requirements gathering and alignment with project objectives. You will be expected to demonstrate the ability to independently research and find solutions to complex challenges, while also exhibiting a proactive mindset for troubleshooting and problem-solving. Additionally, you will be responsible for implementing responsive, cross-device layouts that prioritize usability and seamless performance, as well as writing clean, efficient, and reusable code using React as the primary framework. You will work in close partnership with backend and other application developers to ensure smooth integration and alignment across all dependencies. As a subject matter expert in React-based UI development, you will stay abreast of design trends, emerging technologies, and best practices to drive continuous improvement. Furthermore, you will lead thorough documentation processes during and after project completion to ensure clarity and reusability for future projects, while also collaborating with QA teams to maintain high standards for each release. To be successful in this role, you should possess strong proficiency in React.js and TypeScript, along with a solid understanding of core principles. 
You should also have in-depth knowledge of JavaScript, CSS, SCSS, HTML, and other front-end technologies, as well as experience with data visualization libraries such as AmCharts, Chart.js, and D3.js. A practical understanding of MySQL databases, familiarity with RESTful APIs, foundational knowledge of AWS services, and proficiency with version control systems like Git are also essential. Additionally, you should be knowledgeable in the MVC development model and modern authorization methods, including JSON Web Tokens. Ideally, you should hold a Bachelor's Degree in Computer Science, Software Engineering, or a related field, and have at least 4 years of hands-on experience in React development. Experience with testing frameworks such as Mocha, Jest, and Robot, as well as other JavaScript frameworks like Vue.js, is considered advantageous. At Zeta, you will have the opportunity to work on a product that has been recognized as a Leader by the Forrester Wave™. Our dynamic work environment fosters high-velocity professional growth, encourages decision-making at all levels, and allows you to work on the latest innovations and challenges in the marketing and advertising industry. Founded in 2007 by David A. Steinberg and John Sculley, Zeta Global is a leading data-driven marketing technology company. Our SaaS-based marketing cloud, the Zeta Marketing Platform (ZMP), empowers over 500 Fortune 1000 and mid-market brands to acquire, retain, and grow customer relationships through actionable data, advanced analytics, artificial intelligence, and machine learning. Learn more about us at [ZetaGlobal](https://zetaglobal.com/about/our-story/).
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
vijayawada, andhra pradesh
On-site
As a qualified candidate for this role, you should possess proficiency in HTML5/CSS. Additionally, you should be familiar with GIT or any Source Code Management (SCM) system and FTP Clients. An added advantage would be to have knowledge of Google Cloud and AWS Services, along with experience in Google developer API integration. If you meet these requirements and are looking for an exciting opportunity, we encourage you to apply for this position.,
Posted 1 week ago
2.0 - 10.0 years
0 Lacs
karnataka
On-site
As an MLOps Engineer at our Bangalore location, you will play a pivotal role in designing, developing, and maintaining robust MLOps pipelines for generative AI models on AWS. With a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, you should have at least 2 years of proven experience in building and managing MLOps pipelines, preferably in a cloud environment like AWS. Your responsibilities will include implementing CI/CD pipelines to automate model training, testing, and deployment workflows. You should have a strong grasp of containerization technologies such as Docker, container orchestration platforms, and AWS services like SageMaker, Bedrock, EC2, S3, Lambda, and CloudWatch. Practical knowledge of CI/CD principles and tools, along with experience working with large language models, will be essential for success in this role. Additionally, your role will involve driving technical discussions, explaining options to both technical and non-technical audiences, and ensuring software product cost monitoring and optimization. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow, familiarity with generative AI models, and experience with infrastructure-as-code tools like Terraform or CloudFormation will be advantageous. Moreover, knowledge of model monitoring and explainability techniques, various data storage and processing technologies, and experience with other cloud platforms like GCP will further enhance your capabilities. Any contributions to open-source projects related to MLOps or machine learning will be a valuable asset. At CGI, we believe in ownership, teamwork, respect, and belonging. Your work as an MLOps Engineer will focus on turning meaningful insights into action, with opportunities to develop innovative solutions, build relationships with teammates and clients, and access global capabilities to scale your ideas. 
Join us in shaping your career at one of the largest IT and business consulting services firms globally.,
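Model monitoring and drift detection, called out in the posting above, can be illustrated with a crude statistical check. This sketch flags drift when the live feature mean moves too many baseline standard deviations; production systems typically use richer tests (PSI, Kolmogorov-Smirnov), but the monitoring loop looks the same. The data and threshold here are invented:

```python
import statistics

def drift_score(baseline, live):
    """Shift of the live mean, measured in baseline standard deviations."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.fmean(live) - mu) / sigma

baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]  # feature values at training time
stable   = [1.0, 0.98, 1.02]                 # recent values, no drift
shifted  = [1.8, 1.9, 2.1]                   # recent values, clear drift

THRESHOLD = 3.0  # alert when the live mean drifts more than 3 sigma
print(drift_score(baseline, stable) > THRESHOLD)   # False
print(drift_score(baseline, shifted) > THRESHOLD)  # True
```

In an MLOps pipeline this check would run on a schedule (e.g. a CloudWatch-triggered job) and, on alert, kick off an automated retraining workflow.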
Posted 1 week ago
8.0 - 14.0 years
0 Lacs
karnataka
On-site
NTT DATA is looking for a Lead Java Developer - Backend (Engineering Manager) to join the team in Bengaluru, Karnataka, India. The ideal candidate should have 8 to 10 years of experience and possess expertise in Java 11+, Spring Boot, Rest API, and AWS services like DynamoDB, EKS, SQS, and Lambda. Key Responsibilities: - Good to have knowledge and working experience with high volume systems. - Expertise in Java, Spring Boot, Rest API, and AWS services. - Should be able to work independently with minimal guidance. - Expert problem-debugging skills; well-versed in functional design patterns. - Collaborate with stakeholders including QA, Product Owner, Engineering Manager, and peer teams. - Support existing services and products that are live. - Self-motivated and proactive in participation. - Conduct peer reviews and provide assistance to junior team members. - Maintain code quality and create functional/technical improvements in new or existing services. Desired Skills: - Experience with eCommerce domain is an added advantage. - Knowledge of Jenkins and caching technologies. - Ability to design, implement, and integrate solutions effectively. Education Qualification: - Engineering Discipline About NTT DATA: NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. With experts in more than 50 countries and a robust partner ecosystem, we offer services in business and technology consulting, data and artificial intelligence, industry solutions, and application development. As a part of the NTT Group, we invest significantly in R&D to facilitate the digital transformation of organizations and society. Learn more about us at us.nttdata.com.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
You are an experienced Databricks on AWS and PySpark Engineer looking to join our team. Your role will involve designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS and PySpark. You will also be responsible for developing and optimizing data processing workflows, collaborating with data scientists and analysts, ensuring data quality, security, and compliance, troubleshooting data pipeline issues, and staying updated with industry trends in data engineering and big data. Your responsibilities will include: - Designing, building, and maintaining large-scale data pipelines and architectures using Databricks on AWS and PySpark - Developing and optimizing data processing workflows using PySpark and Databricks - Collaborating with data scientists and analysts to design and implement data models and architectures - Ensuring data quality, security, and compliance with industry standards and regulations - Troubleshooting and resolving data pipeline issues and optimizing performance - Staying up-to-date with industry trends and emerging technologies in data engineering and big data Requirements: - 3+ years of experience in data engineering, with a focus on Databricks on AWS and PySpark - Strong expertise in PySpark and Databricks, including data processing, data modeling, and data warehousing - Experience with AWS services such as S3, Glue, and IAM - Strong understanding of data engineering principles, including data pipelines, data governance, and data security - Experience with data processing workflows and data pipeline management Soft Skills: - Excellent problem-solving skills and attention to detail - Strong communication and collaboration skills - Ability to work in a fast-paced, dynamic environment - Ability to adapt to changing requirements and priorities If you are a proactive and skilled professional with a passion for data engineering and a strong background in Databricks on AWS and PySpark, we encourage you to 
apply for this opportunity.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Senior IT Systems & DevOps Engineer, you will oversee incident, change, and release management for IT systems in a regulated environment. Your responsibilities will include ensuring compliance with regulatory standards, managing Azure AD for identity and access management, implementing DevOps best practices, collaborating with cross-functional teams to align infrastructure with business goals, designing and documenting business requirements, managing cloud infrastructure, deploying containerized applications, and automating workflow and integration. You will also be involved in compliance activities, providing technical support for IT systems, and participating in audits and cybersecurity initiatives. Key Responsibilities: - Oversee incident, change, and release management for IT systems, ensuring compliance with regulatory standards. - Manage Azure AD for identity and access management, including authentication flows. - Implement DevOps best practices, including CI/CD, automation, and observability. - Collaborate with cross-functional teams to align infrastructure with business goals. - Ensure compliance and security within a regulated environment, implementing RBAC, secrets management, and monitoring frameworks. - Design, develop, test, and document business requirements related to IT systems. - Coordinate system management tasks, ensuring alignment with quality and compliance standards. - Review business requirements, create technical designs, and align with stakeholders. - Manage and optimize cloud infrastructure, including cost management and performance tuning. - Deploy and manage containerized applications using Docker, Kubernetes, and ArgoCD. - Implement Infrastructure as Code using Terraform and AWS CloudFormation. - Automate workflow and integration using Python and Ansible. - Ensure observability with logging, monitoring, and tracing tools. - Participate in compliance activities, audits, patch management, and cybersecurity initiatives. 
- Provide technical guidance and support for IT systems and users. Your Skills & Experience: Must-Have: - 5+ years of experience in IT systems management, DevOps, cloud infrastructure, and automation. - Strong expertise in Change, Release, Incident, and Problem Management. - Hands-on experience with Azure DevOps and AWS services. - Experience with Linux, Windows servers, and Oracle PLSQL. - Proficiency in containerization, Kubernetes, and Python scripting. - Strong understanding of observability, security, and compliance frameworks. - Excellent English communication skills. Good to Have: - Experience in a regulated industry. - Familiarity with Agile methodologies. - Knowledge of front-end development tools and data management technologies. Why Join Us - Contribute to cutting-edge R&D in drug discovery and development. - Work in a multicultural, agile team with high autonomy. - Grow your skills in cloud computing, security, and automation. - Join a diverse team committed to inclusion and belonging. Apply now and be part of our innovative environment!,
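The RBAC requirement above reduces to mapping roles to permissions and checking membership. A minimal sketch (the role and permission names are illustrative, not a real product's access model):

```python
# Minimal RBAC sketch: roles grant permission sets, users hold roles.
ROLE_PERMISSIONS = {
    "viewer":   {"read"},
    "operator": {"read", "deploy"},
    "admin":    {"read", "deploy", "manage_secrets"},
}

def is_allowed(user_roles, permission):
    """A user is allowed if any of their roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in user_roles)

print(is_allowed(["operator"], "deploy"))        # True
print(is_allowed(["viewer"], "manage_secrets"))  # False
```

Real systems (Azure AD roles, Kubernetes RBAC) layer scoping and inheritance on top, but the allow-check has this shape.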
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. As a Software Engineer III, your job responsibilities include executing software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. You will create secure and high-quality production code, maintain algorithms that run synchronously with appropriate systems, and produce architecture and design artifacts for complex applications, ensuring design constraints are met by software code development. Additionally, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifying hidden problems and patterns in data and using these insights to drive improvements to coding hygiene and system architecture is also part of your responsibilities. You will contribute to software engineering communities of practice and events that explore new and emerging technologies, while adding to the team culture of diversity, opportunity, inclusion, and respect. To qualify for this role, you need formal training or certification on software engineering concepts and at least 3 years of applied experience. Hands-on practical experience in system design, application development, testing, and operational stability is required. 
Proficiency in Java/J2EE and REST APIs, Python, Web Services, and experience in building event-driven Micro Services and Kafka streaming is essential. Experience in RDBMS and NOSQL database, working proficiency in developmental toolset like GIT/Bitbucket, Jira, and maven, as well as experience with AWS services are necessary. You should also have experience in Spring Framework Services in public cloud infrastructure, proficiency in automation and continuous delivery methods, and be proficient in all aspects of the Software Development Life Cycle. Demonstrated knowledge of software applications and technical processes within a technical discipline, solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security, and overall knowledge of the Software Development Life Cycle are also required. Additionally, you should have experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages, and in-depth knowledge of the financial services industry and their IT systems. Preferred qualifications, capabilities, and skills for this role include AWS certification, experience on cloud engineering including Pivotal Cloud Foundry, AWS, experience in PERF testing and tuning as well as shift left practices, and DDD (domain-driven design). Experience with MongoDB is also preferred for this position.,
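Event-driven microservices with Kafka-style streaming, as required above, must tolerate duplicate deliveries, so consumers are written idempotently. A language-agnostic sketch in Python (the event shape and the in-memory dedupe store are illustrative assumptions; real services persist processed IDs or use transactional offsets):

```python
# Idempotent event consumer: skip side effects on redelivered messages.
processed_ids = set()  # stand-in for a durable dedupe store
ledger = []            # the side effect we must not apply twice

def handle(event):
    """Process an event once; return False for duplicate deliveries."""
    if event["id"] in processed_ids:
        return False
    processed_ids.add(event["id"])
    ledger.append(event["amount"])
    return True

events = [
    {"id": "evt-1", "amount": 100},
    {"id": "evt-2", "amount": 40},
    {"id": "evt-1", "amount": 100},  # redelivered by the broker
]
for e in events:
    handle(e)
print(sum(ledger))  # 140 -- the duplicate did not double-count
```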
Posted 1 week ago