5.0 - 8.0 years
25 - 30 Lacs
Indore, Hyderabad, Pune
Work from Office
Responsibilities:
- Design, develop, and deploy end-to-end machine learning pipelines in cloud-native environments, ensuring scalability and reliability.
- Collaborate with data scientists to productionize ML models, transitioning from prototype to enterprise-ready solutions.
- Optimize cloud-based data architectures and ML systems for high performance and cost efficiency (AWS, Azure, GCP).
- Integrate ML models into existing and new system architectures, designing robust APIs for seamless model consumption.
- Implement MLOps and LLMOps best practices, including CI/CD for ML models, monitoring, and automated retraining workflows.
- Continuously assess and improve ML system performance, ensuring high availability and minimal downtime.
- Stay ahead of AI and cloud trends, collaborating with cloud architects to leverage cutting-edge technologies.

Qualifications:
- 4+ years of experience in cloud-native ML engineering, deploying and maintaining AI models at scale.
- Hands-on experience with AI cloud platforms (Azure ML, Google AI Platform, AWS SageMaker) and cloud-native services.
- Strong programming skills in Python and SQL, with expertise in cloud-native tools like Kubernetes and Docker.
- Experience building automated ML pipelines, including data preprocessing, model deployment, and monitoring (a deployment sketch follows this posting).
- Proficiency in Linux environments and cloud infrastructure management.
- Experience operationalizing GenAI applications or AI assistants is a plus.
- Strong problem-solving, organizational, and communication skills.
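For context on what the model-deployment step of such a pipeline can look like, here is a minimal sketch using the SageMaker Python SDK. The bucket, role ARN, entry-point script, and instance type are placeholders, not details from the posting.

```python
# Hypothetical deployment of a trained scikit-learn model to a real-time
# SageMaker endpoint; bucket, role ARN, and script name are placeholders.
from sagemaker.sklearn import SKLearnModel

model = SKLearnModel(
    model_data="s3://example-bucket/models/churn/model.tar.gz",  # trained artifact
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    entry_point="inference.py",      # custom input/output handling
    framework_version="1.2-1",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",     # sized for cost efficiency, per the posting
)
print(predictor.endpoint_name)
```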
Posted 3 days ago
5.0 - 8.0 years
25 - 30 Lacs
Pune
Work from Office
Responsibilities:
- Design, develop, and deploy end-to-end machine learning pipelines in cloud-native environments, ensuring scalability and reliability.
- Collaborate with data scientists to productionize ML models, transitioning from prototype to enterprise-ready solutions.
- Optimize cloud-based data architectures and ML systems for high performance and cost efficiency (AWS, Azure, GCP).
- Integrate ML models into existing and new system architectures, designing robust APIs for seamless model consumption.
- Implement MLOps and LLMOps best practices, including CI/CD for ML models, monitoring, and automated retraining workflows.
- Continuously assess and improve ML system performance, ensuring high availability and minimal downtime.
- Stay ahead of AI and cloud trends, collaborating with cloud architects to leverage cutting-edge technologies.

Qualifications:
- 4+ years of experience in cloud-native ML engineering, deploying and maintaining AI models at scale.
- Hands-on experience with AI cloud platforms (Azure ML, Google AI Platform, AWS SageMaker) and cloud-native services.
- Strong programming skills in Python and SQL, with expertise in cloud-native tools like Kubernetes and Docker.
- Experience building automated ML pipelines, including data preprocessing, model deployment, and monitoring.
- Proficiency in Linux environments and cloud infrastructure management.
- Experience operationalizing GenAI applications or AI assistants is a plus.
- Strong problem-solving, organizational, and communication skills.

Location: Indore, Pune
Posted 3 days ago
8.0 - 12.0 years
10 - 14 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a Senior Generative AI Developer with over 8 years of experience, including at least 2 years specializing in NLP and Generative AI. The ideal candidate will be proficient in Python, LangChain, AWS services such as SageMaker and Bedrock, vector stores, LLM fine-tuning, prompt engineering, and agent frameworks. Experience in CI/CD, Textract, and designing secure, scalable AI architectures is essential.

Roles & Responsibilities:
- Develop and fine-tune large language models (LLMs) to deliver cutting-edge generative AI solutions.
- Build and optimize NLP pipelines using tools like LangChain and vector stores.
- Implement and maintain AI services on AWS, leveraging SageMaker, Bedrock, and Textract (a Bedrock invocation sketch follows this posting).
- Design and develop secure, scalable architectures for AI applications.
- Create and refine prompt engineering strategies to improve model output quality.
- Develop agent frameworks and integrate AI models into production workflows.
- Establish CI/CD pipelines for seamless AI model deployment and updates.
- Collaborate with cross-functional teams to translate business needs into AI-driven solutions.
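A hedged illustration of the Bedrock piece of this stack: invoking a hosted model through boto3's bedrock-runtime client. The region, model ID, and prompt are assumptions for illustration only.

```python
# Sketch: calling a foundation model on AWS Bedrock with boto3.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Summarize this contract clause: ..."}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    body=body,
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```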
Posted 3 days ago
5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Backend Developer Responsibilities & Skills

Position Title: Backend Developer
Position Type: Full-time, permanent
Location: Bengaluru, India

Company Description:
Privaini is the pioneer of privacy risk management for companies and their entire business networks. Privaini offers a unique "outside-in" approach, empowering companies to gain a comprehensive understanding of both internal and external privacy risks. It provides actionable insights using a data-driven, systematic, and automated approach to proactively address reputation and legal risks related to data privacy. Privaini generates an AI-powered privacy profile and privacy score for a company from externally observable privacy, corporate, regulatory, historical event, and security data. Without the need for time-consuming questionnaires or installing any software, Privaini creates standardized privacy views of companies from externally observable information. Privaini then builds a privacy risk posture for every business partner in the company's business network and continuously monitors each one. Our platform provides actionable insights that privacy and risk teams can readily implement. Be part of an exciting team of researchers, developers, and data scientists focused on the mission of building transparency in data privacy risks for companies and their business networks.

Key Responsibilities:
- Strong Python, Flask, REST API, and NoSQL skills. Familiarity with Docker is a plus.
- AWS Developer Associate certification is required; AWS Professional certification is preferred.
- Architect, build, and maintain secure, scalable backend services on AWS platforms.
- Utilize core AWS services like Lambda, DynamoDB, API Gateway, and serverless technologies.
- Design and deliver RESTful APIs using the Python Flask framework (a minimal sketch follows this posting).
- Leverage NoSQL databases and design efficient data models for large user bases.
- Integrate with web service APIs and external systems.
- Apply AWS SageMaker for machine learning and analytics (optional but preferred).
- Collaborate effectively with diverse teams (business analysts, data scientists, etc.).
- Troubleshoot and resolve technical issues within distributed systems.
- Employ Agile methodologies (JIRA, Git) and adhere to best practices.
- Continuously learn and adapt to new technologies and industry standards.

Qualifications:
- A bachelor's degree in computer science, information technology, or a relevant discipline is required; a master's degree is preferred.
- At least 6 years of development experience, with 5+ years of experience in AWS.
- Demonstrated skills in planning, designing, developing, architecting, and implementing applications.

Additional Information:
At Privaini Software India Private Limited, we value diversity and always treat all employees and job applicants based on merit, qualifications, competence, and talent. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
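As a rough sketch of the Flask + DynamoDB pattern this posting centers on; the table name, key schema, and routes are hypothetical, not Privaini's actual API.

```python
# Minimal Flask REST API backed by DynamoDB; names are illustrative only.
import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
table = boto3.resource("dynamodb").Table("CompanyPrivacyProfiles")  # placeholder table

@app.route("/companies/<company_id>", methods=["GET"])
def get_profile(company_id):
    item = table.get_item(Key={"company_id": company_id}).get("Item")
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

@app.route("/companies", methods=["POST"])
def create_profile():
    payload = request.get_json()
    table.put_item(Item=payload)   # assumes payload carries the partition key
    return jsonify(payload), 201

if __name__ == "__main__":
    app.run(debug=True)
```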
Posted 3 days ago
5.0 - 10.0 years
17 - 22 Lacs
Bengaluru
Work from Office
Dear Aspirant! We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn, people who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant international team. We are looking for a Senior Software Engineer (AI/ML - NLP & Generative AI).

You'll make an impact by:
- Designing, developing, and optimizing NLP-driven AI solutions using state-of-the-art models and techniques (NER, embeddings, summarization, etc.).
- Building and productionizing RAG pipelines and agentic workflows to support intelligent, context-aware applications (a toy retrieval sketch follows this posting).
- Fine-tuning, prompt-engineering, and deploying LLMs (OpenAI, Anthropic, Falcon, LLaMA, etc.) for domain-specific use cases.
- Collaborating with data scientists, backend developers, and cloud architects to build scalable AI-first systems.
- Evaluating and integrating third-party models/APIs and open-source libraries for generative use cases.
- Continuously monitoring and improving model performance, latency, and accuracy in production settings.
- Implementing observability, performance monitoring, and explainability features in deployed models.
- Ensuring solutions meet enterprise-level requirements for reliability, traceability, and maintainability.

Use your skills to move the world forward!
- Master's or Bachelor's degree in Computer Science, Machine Learning, AI, or a related field.
- 5+ years of overall experience in AI/ML, with at least 2 years in NLP and 1-2 years in Generative AI.
- Strong understanding of LLM architectures, fine-tuning methods (LoRA, PEFT), embeddings, and vector search.
- Experience in designing and deploying RAG pipelines and working with multi-step agent architectures.
- Proficiency in Python and frameworks like LangChain, Transformers (Hugging Face), LlamaIndex, SmolAgents, etc.
- Familiarity with ML observability and explainability tools (e.g., TruEra, Arize, WhyLabs).
- Knowledge of cloud-based ML services like AWS SageMaker, AWS Bedrock, Azure OpenAI Service, Azure ML Studio, and Azure AI Foundry.
- Experience in integrating LLM-based agents in production environments.
- Understanding of real-time NLP challenges (streaming, latency optimization, multi-turn dialogues).
- Familiarity with LangGraph, function calling, and tools for orchestration in agent-based systems.
- Exposure to infrastructure-as-code (Terraform/CDK) and DevOps for AI pipelines.
- Domain knowledge in Electrification, Energy, or Industrial AI is a strong plus.

Create a better #TomorrowWithUs! This role is based in Bangalore, where you'll get the chance to work with teams impacting entire cities, countries - and the shape of things to come. We're Siemens, a collection of over 312,000 minds building the future, one day at a time, in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at www.siemens.com/careers and about the digital world of Siemens at www.siemens.com/careers/digitalminds
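To make the RAG-pipeline requirement concrete, here is a toy retrieval step under assumed tooling (sentence-transformers with an arbitrary public embedding model). In practice this would sit in front of a vector store and an LLM call rather than an in-memory list.

```python
# Illustrative RAG retrieval: embed documents, embed the query, and pull the
# closest chunk to ground the LLM prompt. Model name and texts are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["Turbine vibration exceeded threshold on unit 7.",
        "Grid frequency remained within normal bounds all week."]
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

query = "Which assets showed abnormal vibration?"
q_vec = encoder.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec                 # cosine similarity (unit vectors)
best = int(np.argmax(scores))
context = docs[best]                      # would be injected into the LLM prompt
print(context)
```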
Posted 4 days ago
14.0 - 16.0 years
40 - 50 Lacs
Pune
Work from Office
Role Description:

Advanced AI and GenAI for Problem Discovery, RCA, and Action Recommendation: Lead the data scientists, hands-on, in developing and deploying advanced machine learning models, including anomaly detection, predictive analytics, causal inference, and action-recommendation models, to autonomously discover problems, identify root causes, and recommend actions in real-time and batch processing scenarios (a toy anomaly-detection example follows this posting).

GenAI-Powered Analysis and Insights Generation: Lead the data scientists, hands-on, in creating and fine-tuning Generative AI models to assist in analysis creation for problem discovery, translating complex data patterns and root-cause findings into actionable, natural-language insights for business stakeholders. Implement prompt engineering and fine-tuning techniques to enhance the relevance and accuracy of insights.

Reference Architectures: Work with Technical Architects to evolve frameworks for augmenting batch-processing data architectures with streaming architectures, to support connected data and ML pipeline executions in real time.

Client Deployments of the Platform: Consult and provide guidance for creating automated data pipelines for raw data and engineered features, ensuring data quality, integrity, and accessibility for model training and inference.

Development of Use Cases: Lead and support development of use cases on the platform for various vertical-specific problem statements.

Leadership and Collaboration: Lead and mentor a team of data scientists and machine learning engineers.

Pre-sales: Support pre-sales, marketing, hyperscaler partnerships, and other sales activities such as RFPs by providing subject-matter expertise during scoping and design of engagements.

Technical Skills:
- In-depth conceptual understanding of statistics, classical machine learning, deep learning, and GenAI.
- Able to understand the nuances of business problems in various domains, quickly grasp the problem-discovery analysis and modelling imperatives, and translate those into requirements for AI models.
- Significant exposure to various proprietary and open-source cloud-based machine learning platforms such as Amazon SageMaker, Azure Machine Learning Studio, Google Datalab, ML Engine, AutoML, H2O, etc.
- Experience in leveraging open-source LLMs, prompt engineering, and RAG-based SLM model development.
- Hands-on, with excellent ML programming skills in Python using open-source libraries.

Qualifications:
- Experience in working with RDBMS and NoSQL databases; exposure to Big Data technologies.
- 14-16 years of machine learning model development and application experience.
- Experience in building cloud-based products that implement ML modelling as a service would be ideal.
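One small, hedged example of the anomaly-detection modelling described above, using scikit-learn's IsolationForest on invented operational metrics; feature names and thresholds are made up for illustration.

```python
# Unsupervised problem discovery: flag outlying service metrics.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
metrics = pd.DataFrame({
    "latency_ms": rng.normal(120, 15, 1000),
    "error_rate": rng.normal(0.02, 0.005, 1000),
})
metrics.iloc[::97] *= 4                        # inject a few synthetic anomalies

model = IsolationForest(contamination=0.01, random_state=42).fit(metrics)
metrics["anomaly"] = model.predict(metrics)    # -1 marks an outlier
print(metrics[metrics["anomaly"] == -1].head())
```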
Posted 4 days ago
7.0 - 10.0 years
22 - 37 Lacs
Bengaluru
Work from Office
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

REQUIREMENTS:
- Total experience of 7+ years.
- Hands-on working experience in Python.
- Experience with data visualization libraries (e.g., matplotlib, seaborn, plotly).
- Strong grasp of DS stack packages: SciPy, scikit-learn, TensorFlow, PyTorch, NumPy, Pandas.
- Hands-on working experience in AWS SageMaker.
- Basic to intermediate SQL skills.
- Experience working with GCP.
- Hands-on experience deploying ML solutions using Kubeflow, Vertex AI, Airflow, or PySpark (a minimal Airflow sketch follows this posting).
- Excellent communication and collaboration skills for working across global teams.

RESPONSIBILITIES:
- Writing and reviewing great quality code.
- Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements.
- Mapping decisions with requirements and translating the same to developers.
- Identifying different solutions and narrowing down the best option that meets the client's requirements.
- Defining guidelines and benchmarks for NFR considerations during project implementation.
- Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
- Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
- Understanding and relating technology integration scenarios and applying these learnings in projects.
- Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken.
- Carrying out POCs to make sure that the suggested design/technologies meet the requirements.
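As referenced in the requirements list, a minimal Airflow sketch of how a two-step training pipeline might be orchestrated; the DAG ID, schedule, and task bodies are placeholders.

```python
# Toy Airflow DAG: build features daily, then train.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def build_features():
    print("extract and transform features")    # stub

def train_model():
    print("fit and register the model")        # stub

with DAG(
    dag_id="ml_training_pipeline",             # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    features = PythonOperator(task_id="build_features", python_callable=build_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    features >> train
```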
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
At PwC, the focus in data and analytics engineering is on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. You play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will concentrate on designing and building data infrastructure and systems to enable efficient data processing and analysis. Your responsibilities include developing and implementing data pipelines, data integration, and data transformation solutions.

As an AWS Architect / Manager at PwC - AC, you will interact with the Offshore Manager/Onsite Business Analyst to understand the requirements and will be responsible for end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake and Data Hub in AWS. Strong experience in AWS cloud technology is required, along with planning and organization skills. You will work as a cloud architect/lead on an agile team and provide automated cloud solutions, monitoring the systems routinely to ensure that all business goals are met as per the business requirements.

**Position Requirements:**

**Must Have:**
- Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions
- Strong expertise in the end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake, Data Hub in AWS
- Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and Big Data modeling techniques using Python/Java
- Design scalable data architectures with Snowflake, integrating cloud technologies (AWS, Azure, GCP) and ETL/ELT tools such as DBT
- Guide teams in proper data modeling (star, snowflake schemas), transformation, security, and performance optimization
- Experience in loading from disparate data sets and translating complex functional and technical requirements into detailed design
- Deploying Snowflake features such as data sharing, events, and lake-house patterns
- Experience with data security and data access controls and design
- Understanding of relational as well as NoSQL data stores, methods, and approaches (star and snowflake, dimensional modeling)
- Good knowledge of AWS, Azure, or GCP data storage and management technologies such as S3, Blob/ADLS, and Google Cloud Storage
- Proficient in Lambda and Kappa architectures
- Strong AWS hands-on expertise with a programming background, preferably Python/Scala
- Knowledge of Big Data frameworks and related technologies, with experience in Hadoop and Spark (a small Spark ingestion sketch follows this posting)
- Strong experience in AWS compute services like AWS EMR, Glue, and SageMaker, and storage services like S3, Redshift, and DynamoDB
- Experience with AWS streaming services like AWS Kinesis, AWS SQS, and AWS MSK
- Troubleshooting and performance-tuning experience in the Spark framework - Spark Core, SQL, and Spark Streaming
- Experience in flow tools like Airflow, Nifi, or Luigi
- Knowledge of application DevOps tools (Git, CI/CD frameworks) - experience in Jenkins or GitLab, with rich experience in source code management like CodePipeline, CodeBuild, and CodeCommit
- Experience with AWS CloudWatch, AWS CloudTrail, AWS Account Config, and AWS Config Rules
- Understanding of cloud data migration processes, methods, and project lifecycle
- Business/domain knowledge in Financial Services/Healthcare/Consumer Markets/Industrial Products/Telecommunication, Media and Technology/Deal advisory, along with technical expertise
- Experience in leading technical teams, guiding and mentoring team members
- Analytical and problem-solving skills
- Communication and presentation skills
- Understanding of Data Modeling and Data Architecture

**Desired Knowledge/Skills:**
- Experience in building stream-processing systems using solutions such as Storm or Spark Streaming
- Experience in Big Data ML toolkits like Mahout, SparkML, or H2O
- Knowledge of Python
- Certification on AWS Architecture desirable
- Worked in Offshore/Onsite engagements
- Experience in AWS services like STEP and Lambda
- Project management skills with consulting experience in complex program delivery

**Professional And Educational Background:** BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA

**Minimum Years Experience Required:** Candidates with 8-12 years of hands-on experience
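To ground the Spark bullet above, a hedged sketch of a routine lake-ingestion job: read raw CSVs from S3, de-duplicate, and land partitioned Parquet in a curated zone. The bucket paths and key column are invented.

```python
# Toy Spark job for an Enterprise Data Lake ingestion step.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-lake-ingest").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-raw-zone/sales/")
curated = (
    raw.withColumn("ingest_date", F.current_date())
       .dropDuplicates(["order_id"])          # hypothetical business key
)
(
    curated.write.mode("overwrite")
           .partitionBy("ingest_date")
           .parquet("s3://example-curated-zone/sales/")
)
```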
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You should have an understanding of the ML model lifecycle, including training, evaluation, deployment, and monitoring. Experience in operationalizing ML models in production is required. You should possess knowledge of model drift and data drift detection techniques (a minimal example follows this posting), as well as experience in model versioning and monitoring. Proficiency in AWS SageMaker for training, deploying, and hosting ML models, as well as AWS ECS (Elastic Container Service) for running containerized services, is essential. Additionally, Python programming skills are necessary for ML, pipelines, and scripting.

As part of your job responsibilities, you will apply your understanding of the ML model lifecycle, operationalize ML models in production, utilize model drift and data drift detection techniques, manage model versioning and monitoring, work with AWS SageMaker and AWS ECS, and employ Python programming for ML, pipelines, and scripting.

At GlobalLogic, we prioritize a culture of caring where people come first. You will experience an inclusive culture of acceptance and belonging, build meaningful connections with collaborative teammates, receive support from managers, and learn and grow daily. We are committed to your continuous learning and development, offering various opportunities to enhance your skills and advance your career. You will have the chance to work on interesting and meaningful projects that have a positive impact on clients around the world. GlobalLogic provides a balanced and flexible work environment, allowing you to achieve a healthy work-life balance. As a high-trust organization, integrity is a key value, and you can trust GlobalLogic to provide a safe, reliable, and ethical work environment.

GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to leading companies worldwide. Join us in transforming businesses and industries through intelligent products, platforms, and services, and be part of creating innovative digital products and experiences that shape the future.
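A minimal illustration of the data-drift check this role calls for: compare a feature's live distribution against its training baseline with a two-sample Kolmogorov-Smirnov test. The data and the alert threshold are invented.

```python
# Simple data-drift detector on one feature.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 5000)   # baseline from training data
live_feature = rng.normal(0.4, 1.0, 1000)       # recent production values

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:                              # threshold is a judgment call
    print(f"Drift detected (KS={stat:.3f}); consider retraining")
```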
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As an AWS Developer at PwC's Advisory Acceleration Center, you will collaborate with the Offshore Manager and Onsite Business Analyst to comprehend requirements and take charge of implementing Cloud data engineering solutions on AWS, such as Enterprise Data Lake and Data hub. With a focus on architecting and delivering scalable cloud-based enterprise data solutions, you will bring your expertise in end-to-end implementation of Cloud data engineering solutions using tools like Snowflake utilities, SnowSQL, SnowPipe, ETL data Pipelines, and Big Data model techniques using Python/Java. Your responsibilities will include loading disparate data sets, translating complex requirements into detailed designs, and deploying Snowflake features like data sharing, events, and lake-house patterns. You are expected to possess a deep understanding of relational and NoSQL data stores, including star and snowflake dimensional modeling, and demonstrate strong hands-on expertise in AWS services such as EMR, Glue, Sagemaker, S3, Redshift, Dynamodb, and AWS Streaming Services like Kinesis, SQS, and MSK. Troubleshooting and performance tuning experience in Spark framework, familiarity with flow tools like Airflow, Nifi, or Luigi, and proficiency in Application DevOps tools like Git, CI/CD frameworks, Jenkins, and Gitlab are essential for this role. Desired skills include experience in building stream-processing systems using solutions like Storm or Spark-Streaming, knowledge in Big Data ML toolkits such as Mahout, SparkML, or H2O, proficiency in Python, and exposure to Offshore/Onsite Engagements and AWS services like STEP & Lambda. Candidates with 2-4 years of hands-on experience in Cloud data engineering solutions, a professional background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA, and a passion for problem-solving and effective communication are encouraged to apply to be part of PwC's dynamic and inclusive work culture, where learning, growth, and excellence are at the core of our values. Join us at PwC, where you can make a difference today and shape the future tomorrow!,
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
As an AWS Developer at PwC's Acceleration Center in Bangalore, you will be responsible for the end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake and Data Hub in AWS. You will collaborate with the Offshore Manager/Onsite Business Analyst to understand requirements and architect scalable, distributed, cloud-based enterprise data solutions. Your role will involve hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, ETL data pipelines, and Big Data modeling techniques using Python/Java.

You must have a deep understanding of relational and NoSQL data stores, methods, and approaches such as star and snowflake dimensional modeling. Strong expertise in AWS services like EMR, Glue, SageMaker, S3, Redshift, DynamoDB, and streaming services like Kinesis, SQS, and MSK is essential. Troubleshooting and performance-tuning experience in the Spark framework, along with knowledge of flow tools like Airflow, Nifi, or Luigi, is required. Experience with application DevOps tools like Git, CI/CD frameworks, Jenkins, or GitLab is preferred. Familiarity with AWS CloudWatch, CloudTrail, Account Config, Config Rules, and cloud data migration processes is expected. Good analytical, problem-solving, communication, and presentation skills are essential for this role.

Desired skills include building stream-processing systems using Storm or Spark Streaming, experience in Big Data ML toolkits like Mahout, SparkML, or H2O, and knowledge of Python. Exposure to Offshore/Onsite engagements and AWS services like STEP and Lambda would be a plus.

Candidates with 2-4 years of hands-on experience in cloud data engineering solutions and a background in BE/B.Tech/MCA/M.Sc/M.E/M.Tech/MBA are encouraged to apply. Travel to client locations may be required based on project needs. This position falls under the Advisory line of service and the Technology Consulting horizontal, with the designation of Associate, based in Bangalore, India. If you are passionate about working in a high-performance culture that values diversity, inclusion, and professional development, PwC could be the ideal place for you to grow and excel in your career. Apply now to be part of a global team dedicated to solving important problems and making a positive impact on the world.
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Karnataka
On-site
We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, individuals who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team.

As a Graduate Trainee Engineer, you will have the opportunity to contribute significantly by:
- Designing, developing, and optimizing NLP-driven AI solutions using cutting-edge models and techniques such as NER, embeddings, and summarization.
- Building and operationalizing RAG pipelines and agentic workflows to facilitate intelligent, context-aware applications.
- Fine-tuning, prompt-engineering, and deploying LLMs (such as OpenAI, Anthropic, Falcon, LLaMA, etc.) for specific domain use cases.
- Collaborating with data scientists, backend developers, and cloud architects to construct scalable AI-first systems.
- Evaluating and integrating third-party models/APIs and open-source libraries for generative use cases.
- Continuously monitoring and enhancing model performance, latency, and accuracy in production environments.
- Implementing observability, performance monitoring, and explainability features in deployed models.
- Ensuring that solutions meet enterprise-level criteria for reliability, traceability, and maintainability.

To excel in this role, you should possess:
- A Master's or Bachelor's degree in Computer Science, Machine Learning, AI, or a related field.
- Exposure to AI/ML, with expertise in NLP and Generative AI.
- A solid understanding of LLM architectures, fine-tuning methods (such as LoRA, PEFT), embeddings, and vector search.
- Previous experience in designing and deploying RAG pipelines and collaborating with multi-step agent architectures.
- Proficiency in Python and frameworks like LangChain, Transformers (Hugging Face), LlamaIndex, SmolAgents, etc.
- Familiarity with ML observability and explainability tools (e.g., TruEra, Arize, WhyLabs).
- Knowledge of cloud-based ML services like AWS SageMaker, AWS Bedrock, Azure OpenAI Service, Azure ML Studio, and Azure AI Foundry.
- Hands-on experience in integrating LLM-based agents in production settings.
- An understanding of real-time NLP challenges (streaming, latency optimization, multi-turn dialogues).
- Familiarity with LangGraph, function calling, and tools for orchestration in agent-based systems.
- Exposure to infrastructure-as-code (Terraform/CDK) and DevOps for AI pipelines.
- Domain knowledge in Electrification, Energy, or Industrial AI would be advantageous.

Join us in Bangalore and be part of a team that is shaping the future of entire cities, countries, and beyond. At Siemens, we are a diverse community of over 312,000 minds working together to build a better tomorrow. We value equality and encourage applications from individuals who reflect the diversity of the communities we serve. Our employment decisions are based on qualifications, merit, and business requirements. Bring your curiosity and creativity to Siemens and be a part of shaping tomorrow with us. Explore more about Siemens careers at www.siemens.com/careers and discover the digital world of Siemens at www.siemens.com/careers/digitalminds.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Kochi, Kerala
On-site
We are seeking a highly skilled Senior Data Scientist to join our India-based team in a remote capacity. Your primary responsibility will involve building and deploying advanced predictive models to influence key business decisions. To excel in this role, you should possess a strong background in machine learning, data engineering, and cloud environments, with a particular emphasis on AWS.

Your main tasks will include collaborating with cross-functional teams to design, develop, and deploy cutting-edge ML models using tools like SageMaker, Bedrock, PyTorch, TensorFlow, Jupyter Notebooks, and AWS Glue. This position offers an excellent opportunity to work on impactful AI/ML solutions within a dynamic and innovative team environment. Your key responsibilities will encompass predictive modeling and machine learning, data engineering and cloud computing, Python programming, as well as collaboration and communication with various teams. Additionally, having experience in the utility industry, generative AI technologies, and geospatial data and GIS tools would be advantageous.

To qualify for this position, you should hold a Master's degree in Computer Science, Statistics, Mathematics, or a related field, along with at least 5 years of relevant experience in data science, predictive modeling, and machine learning. Previous experience working in cloud-based data science environments, preferably AWS, would be beneficial.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a Data Engineer with 5-8 years of experience in AWS Data Engineering. Your role involves implementing data lake and warehousing strategies to support analytics, AI, and machine learning initiatives. You will be responsible for developing and maintaining scalable data pipelines, managing data storage solutions using S3 buckets, and optimizing data retrieval and query performance. Your expertise in Python, proficiency in Snowflake and AWS services, and strong understanding of data warehousing, ETL processes, and cloud data storage will be crucial in this role.

You will collaborate with cross-functional teams to deliver solutions that align with business goals, ensure compliance with data governance and security policies, and define strategies to leverage existing large datasets. Additionally, you will be tasked with identifying relevant data sources for client business needs, mining big-data stores, cleaning and validating data, and building cloud-based solutions with AWS SageMaker and Snowflake. Your problem-solving skills, ability to work in a dynamic environment, and strong communication skills will be essential for effective collaboration and documentation.

In this role, you will play a key part in building scalable data solutions for AI and Machine Learning initiatives, leveraging AWS and Snowflake technologies to support data infrastructure needs in the Fintech capital markets space. Being part of a team at FIS, a leading Fintech provider, you will have the opportunity to work on challenging and relevant issues in financial services and technology.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
We are seeking a skilled and experienced AI/ML Engineer to join our Bangalore-based team. The ideal candidate will have a strong background in machine learning, data preprocessing, and deploying AI solutions in cloud environments. This role requires a hands-on professional with a consulting mindset who can translate business needs into actionable AI/ML solutions. The candidate will lead and support the design, implementation, and optimization of AI-powered tools across various business and public sector environments. The role also includes client interaction, mentoring junior staff, and contributing to end-to-end project lifecycles, from data ingestion to model deployment and monitoring.

Responsibilities:
- Lead the design and implementation of AI/ML solutions for clients.
- Translate business problems into technical requirements and model-driven solutions.
- Perform data preparation tasks, including cleansing, preprocessing, and handling missing or inconsistent data.
- Work with AI frameworks like TensorFlow, PyTorch, and Scikit-learn.
- Leverage cloud-based AI platforms such as AWS SageMaker, Azure ML, or GCP Vertex AI.
- Provide functional support and guidance during implementation and post-deployment phases.
- Analyze and redesign business processes as required by AI implementations.
- Mentor junior team members and support business stakeholders in AI adoption.
- Engage directly with clients and senior stakeholders to drive solution outcomes.
- Collaborate across global virtual teams and contribute to best practices in AI solution delivery.

To be successful in this role, you must have a Master's degree in Computer Science, Data Science, AI/ML, or a related technical discipline. Certifications in AI/ML or cloud platforms (e.g., AWS Certified Machine Learning, Azure AI Engineer Associate, TensorFlow Developer Certificate) are required. Experience with MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines for ML models) is essential; a small MLflow sketch follows at the end of this posting. Exposure to Natural Language Processing (NLP), Computer Vision, or Deep Learning projects is preferred. Understanding of data privacy, model interpretability, and responsible AI principles is necessary. Experience working in cross-functional teams and global delivery models is an advantage. Prior experience in client-facing roles within consulting or IT services companies is beneficial. Strong business acumen and the ability to communicate AI/ML concepts to non-technical stakeholders are crucial.

Must-have skills:
- 3+ years of hands-on experience in AI/ML development or consulting.
- Proficiency in data preparation, including cleansing, feature engineering, and data validation.
- Strong knowledge of AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn).
- Experience with cloud-based ML platforms (AWS SageMaker, Azure ML, or GCP Vertex AI).
- Excellent problem-solving and communication skills.
- Ability to work in dynamic and globally distributed teams.

Good-to-have skills:
- Prior consulting experience with public or private sector clients.
- Familiarity with DevOps and MLOps practices for model deployment and monitoring.
- Exposure to business process mapping and improvement initiatives.
- Experience leading or supporting client-facing workshops or strategy sessions.

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect, and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
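As referenced above, a small, hedged MLflow sketch of the run-tracking practice the posting names; the experiment name is arbitrary and the dataset/model are stand-ins.

```python
# Log a run's parameters, metric, and model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
mlflow.set_experiment("client-poc")            # placeholder experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X, y)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for deployment
```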
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra
On-site
You have a total of 8+ years of experience, with at least 4 years in AI, ML, and GenAI technologies. You have successfully led and expanded AI/ML teams and projects. Your expertise includes a deep understanding and practical experience in AI, ML, Deep Learning, and Generative AI concepts. You are proficient in ML frameworks like PyTorch and/or TensorFlow and have worked with ONNX Runtime, model optimization, and hyperparameter tuning.

You possess solid experience in DevOps, SDLC, CI/CD, and MLOps practices, with a tech stack that includes Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, and the ELK stack. You have deployed AI models at enterprise scale and are skilled in data preprocessing, feature engineering, and handling large-scale data. Your expertise extends to image and video processing, object detection, image segmentation, and other computer vision tasks. In addition, you have proficiency in text analysis, sentiment analysis, language modeling, and various NLP applications, along with experience in speech recognition, audio classification, and signal processing techniques. Your knowledge includes RAG, VectorDB, GraphDB, and Knowledge Graphs. You have extensive experience working with major cloud platforms such as AWS, Azure, and GCP for AI/ML deployments, and integrating cloud-based AI services and tools like AWS SageMaker, Azure ML, and Google Cloud AI.

As for soft skills, you exhibit strong leadership and team management abilities, excellent verbal and written communication skills, strategic thinking, problem-solving capabilities, adaptability to the evolving AI/ML landscape, collaboration skills, and the capacity to translate market requirements into technological solutions. Moreover, you have a deep understanding of industry dynamics and a demonstrated ability to foster innovation and creative problem-solving within a team.
Posted 2 weeks ago
8.0 - 10.0 years
8 - 10 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a highly skilled Senior Data Science Consultant with a strong background in data science, operations research, and mathematical optimization to lead an internal optimization initiative. You will be responsible for developing and implementing mathematical models to optimize resource allocation and process performance, and building robust data pipelines. This role requires a blend of technical depth, business acumen, and collaborative communication to solve complex business problems.

Roles & Responsibilities:
- Lead and contribute to internal optimization-focused data science projects from design to deployment.
- Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making (a toy PuLP formulation follows this posting).
- Utilize techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms.
- Collaborate with business stakeholders to gather requirements and translate them into data science use cases.
- Build robust data pipelines and use statistical and machine learning methods to drive insights.
- Communicate complex technical findings in a clear, concise manner to both technical and non-technical audiences.
- Mentor junior team members and contribute to knowledge sharing and best practices within the team.

Skills Required:
- Strong background in data science, operations research, and mathematical optimization.
- Expertise in Python (NumPy, Pandas, SciPy, Scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX.
- Experience with the end-to-end lifecycle of internal optimization projects.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
- Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction is preferred.
- Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker is a plus.
- Familiarity with dashboards and visualization tools like Power BI or Tableau is a plus.
- Prior experience in consulting or internal centers of excellence (CoE) is a plus.

QUALIFICATION: Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field.
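As referenced in the responsibilities above, a toy PuLP formulation of the resource-allocation modelling this role describes: assign hours across two projects to maximize value under a capacity limit. Every coefficient and constraint is invented for illustration.

```python
# Toy linear program: allocate analyst hours for maximum value.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum

prob = LpProblem("resource_allocation", LpMaximize)
hours_a = LpVariable("hours_project_a", lowBound=0)
hours_b = LpVariable("hours_project_b", lowBound=0)

prob += lpSum([30 * hours_a, 45 * hours_b])           # objective: value per hour
prob += hours_a + hours_b <= 160, "monthly_capacity"  # one analyst-month
prob += hours_b <= 100, "project_b_cap"               # demand ceiling on B

prob.solve()
print(hours_a.value(), hours_b.value(), prob.objective.value())
```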
Posted 2 weeks ago
8.0 - 10.0 years
8 - 10 Lacs
Gurgaon, Haryana, India
On-site
We are seeking a highly skilled Senior Data Science Consultant with a strong background in data science, operations research, and mathematical optimization to lead an internal optimization initiative. You will be responsible for developing and implementing mathematical models to optimize resource allocation and process performance, and building robust data pipelines. This role requires a blend of technical depth, business acumen, and collaborative communication to solve complex business problems.

Roles & Responsibilities:
- Lead and contribute to internal optimization-focused data science projects from design to deployment.
- Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making.
- Utilize techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms.
- Collaborate with business stakeholders to gather requirements and translate them into data science use cases.
- Build robust data pipelines and use statistical and machine learning methods to drive insights.
- Communicate complex technical findings in a clear, concise manner to both technical and non-technical audiences.
- Mentor junior team members and contribute to knowledge sharing and best practices within the team.

Skills Required:
- Strong background in data science, operations research, and mathematical optimization.
- Expertise in Python (NumPy, Pandas, SciPy, Scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX.
- Experience with the end-to-end lifecycle of internal optimization projects.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
- Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction is preferred.
- Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker is a plus.
- Familiarity with dashboards and visualization tools like Power BI or Tableau is a plus.
- Prior experience in consulting or internal centers of excellence (CoE) is a plus.

QUALIFICATION: Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field.
Posted 2 weeks ago
8.0 - 10.0 years
7 - 10 Lacs
Delhi, India
On-site
Key Responsibilities:
- Lead and execute data science projects focused on operational optimization, from concept to deployment
- Design and implement mathematical models to improve process performance, decision-making, and resource utilization
- Apply linear programming, mixed-integer programming, and heuristic/metaheuristic techniques to real-world problems
- Collaborate with cross-functional stakeholders to define project objectives and translate them into data-driven solutions
- Develop data pipelines and apply statistical and machine learning methods to derive insights
- Clearly communicate technical solutions and findings to both technical and business audiences
- Mentor junior team members and contribute to the growth of internal best practices and knowledge sharing

Required Skills and Qualifications:
- Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field
- Minimum 8 years of hands-on experience in data science with a strong focus on optimization
- Proficient in Python (NumPy, Pandas, SciPy, Scikit-learn) and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX
- Strong SQL skills and experience building and working with robust data pipelines
- Proven experience delivering end-to-end internal optimization solutions
- Excellent problem-solving and analytical skills
- Strong communication and stakeholder engagement skills

Preferred Qualifications:
- Experience with internal optimization projects in logistics, cost reduction, and workforce/resource planning
- Familiarity with Databricks, Azure ML, or AWS SageMaker
- Working knowledge of dashboarding tools such as Power BI or Tableau
- Prior experience in consulting environments or internal Centers of Excellence (CoEs)
Posted 2 weeks ago
8.0 - 10.0 years
7 - 10 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities:
- Lead and execute data science projects focused on operational optimization, from concept to deployment
- Design and implement mathematical models to improve process performance, decision-making, and resource utilization
- Apply linear programming, mixed-integer programming, and heuristic/metaheuristic techniques to real-world problems
- Collaborate with cross-functional stakeholders to define project objectives and translate them into data-driven solutions
- Develop data pipelines and apply statistical and machine learning methods to derive insights
- Clearly communicate technical solutions and findings to both technical and business audiences
- Mentor junior team members and contribute to the growth of internal best practices and knowledge sharing

Required Skills and Qualifications:
- Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field
- Minimum 8 years of hands-on experience in data science with a strong focus on optimization
- Proficient in Python (NumPy, Pandas, SciPy, Scikit-learn) and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX
- Strong SQL skills and experience building and working with robust data pipelines
- Proven experience delivering end-to-end internal optimization solutions
- Excellent problem-solving and analytical skills
- Strong communication and stakeholder engagement skills

Preferred Qualifications:
- Experience with internal optimization projects in logistics, cost reduction, and workforce/resource planning
- Familiarity with Databricks, Azure ML, or AWS SageMaker
- Working knowledge of dashboarding tools such as Power BI or Tableau
- Prior experience in consulting environments or internal Centers of Excellence (CoEs)
Posted 2 weeks ago
8.0 - 10.0 years
7 - 10 Lacs
Mumbai, Maharashtra, India
On-site
Key Responsibilities:
- Lead and execute data science projects focused on operational optimization, from concept to deployment
- Design and implement mathematical models to improve process performance, decision-making, and resource utilization
- Apply linear programming, mixed-integer programming, and heuristic/metaheuristic techniques to real-world problems
- Collaborate with cross-functional stakeholders to define project objectives and translate them into data-driven solutions
- Develop data pipelines and apply statistical and machine learning methods to derive insights
- Clearly communicate technical solutions and findings to both technical and business audiences
- Mentor junior team members and contribute to the growth of internal best practices and knowledge sharing

Required Skills and Qualifications:
- Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or a related field
- Minimum 8 years of hands-on experience in data science with a strong focus on optimization
- Proficient in Python (NumPy, Pandas, SciPy, Scikit-learn) and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX
- Strong SQL skills and experience building and working with robust data pipelines
- Proven experience delivering end-to-end internal optimization solutions
- Excellent problem-solving and analytical skills
- Strong communication and stakeholder engagement skills

Preferred Qualifications:
- Experience with internal optimization projects in logistics, cost reduction, and workforce/resource planning
- Familiarity with Databricks, Azure ML, or AWS SageMaker
- Working knowledge of dashboarding tools such as Power BI or Tableau
- Prior experience in consulting environments or internal Centers of Excellence (CoEs)
Posted 2 weeks ago
7.0 - 12.0 years
8 - 12 Lacs
Bengaluru, Karnataka, India
On-site
Job description

In this role you will help to develop, build, design, continuously improve, and support the AI Platform at Thomson Reuters. You will participate in all stages of development, use various software and tools, and enable self-service tools for data science teams. The role also involves designing, developing, integrating, and deploying tools for Data Science, GenAI, and Machine Learning (ML) research. The successful applicant will design and operate a framework for Machine Learning Operations (MLOps), advise on software engineering for ML, and ensure consistency with cloud architectural principles. The role requires continuous evolution of the platform and keeping up to date with new technologies.

Applicants should have 7 years of software engineering experience, hands-on experience with a public cloud technology (AWS, Azure, or GCP), and experience with service-oriented programming and design, microservices architecture, and the serverless application model. A good candidate should have solid experience with LLMs and GenAI, AWS SageMaker, Azure ML, or similar cloud AI capabilities. Proficiency in modern programming languages such as Python, JavaScript, and TypeScript is required, while also being comfortable developing in other languages such as C/C++, Java, Rust, and Go. Candidates should have experience with relational and non-relational databases, integration, and Agile development methodologies. DevOps experience and an understanding of frontend development are also important. The role requires technical leadership and a positive attitude.

About the role:
- Build and evolve the AI Platform, which includes being able to assess the current architecture and map it to the target state.
- Participate in all aspects of the development lifecycle: ideation, design, build, test, and operate.
- Utilize a variety of software and tools, both commercial and open source.
- Enable self-service tooling for data science teams to create and maintain models.
- Design, develop, integrate, and deploy tools for Data Science, GenAI, and ML research, such as model monitoring, model governance, model deployment, data management, and feature engineering, among others.
- Design, implement, and operate a framework for Machine Learning Operations (MLOps).
- Design, evaluate, and advise on the day-to-day work of software engineering for machine learning.
- Ensure consistency with cloud architectural guiding principles.
- Continuously challenge and evolve the existing platform capabilities and keep up to date with new offerings.

About you:
- 7 years of experience in software engineering.
- Hands-on experience working with at least one public cloud technology (AWS, Azure).
- Experience with service-oriented and/or microservices architecture.
- Experience with LLMs and GenAI, AWS SageMaker, Azure ML, or similar cloud AI capabilities.
- Proficiency in modern programming languages such as Python, JavaScript, TypeScript, C/C++, and Java.
- Experience with relational and non-relational databases.
- Experience with Agile development and delivery - Scrum, Lean, XP, and Kanban methodologies.
- Hands-on DevOps experience - CI/CD in AWS, Git, GitHub, monitoring, Log Analytics, and others.
- A good understanding of frontend development (UI/UX concepts and frameworks, HTML, CSS, and JavaScript) is an asset and a strong differentiator for this role.
- Technical leadership, including the ability to break down high-level requirements into features.
- A can-do attitude, with a great sense of ownership and teamwork.
- Ability to develop prototypes, showcase new functionalities to the rest of the team, demonstrate implementation patterns, and advise and guide other team members.
- Ability to find problems and chase them down, then lead the team to solve them end to end.
- Ability to thrive in global teams with peers in different time zones.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a part of this role, you will be responsible for setting up a Center of Excellence (CoE) and business practice for Enterprise AI solutions. You will be tasked with establishing Data Science & Analytics as a Service (DaaS) and building a practice with capabilities to create solution blueprints and roadmaps, orchestrate data acquisition, management, and engineering, as well as develop AI/ML models to solve analytical problems. It will be essential to stay updated with industry trends and developments to ensure the practice remains cutting-edge. In addition, you will promote collaboration and engagement within the team and with stakeholders.

Your responsibilities will include creating technical Proof of Concepts (POCs) and solution demos, driving discussions to evaluate and design AI/ML solutions for various engagements, and implementing a framework across the solution development lifecycle phases. You will also be required to develop a value-tracking mechanism to monitor the Return on Investment (ROI) for AI/ML projects. Furthermore, a key aspect of this role will involve hiring, training, and mentoring a team of AI/ML experts to deliver technical solutions based on AI/ML products and platforms.

It will be crucial to possess technical skills such as expertise in AI/ML, statistical modeling, ML techniques, and concepts, as well as experience with agile software development methodology and big-data projects. You should also have a fair understanding of verticals like Insurance, Manufacturing, and Education. Moreover, familiarity with tools and techniques related to data preparation, movement, analysis, visualization, machine learning, and Natural Language Processing (NLP) is essential. Experience in AI/ML-enabled solution development leveraging high-volume, high-velocity data across disparate systems and supporting large-scale Enterprise analytical solutions will be advantageous. Additionally, having a vision and the ability to set up solution practice groups will be beneficial.

In terms of technical skills, experience with Spark/Spark MLlib, visualization tools like Tableau/Power BI/QlikView, cloud platforms such as Azure ML/AWS SageMaker/Snowflake, MLOps, Automated ML, NLP, and Image/Video Analytics is required. Non-technical skills like a master's degree in relevant fields, excellent interpersonal and people skills, an analytical mindset, good debugging skills, and strong verbal and written English communication skills are also mandatory.

Overall, this role offers a challenging opportunity to lead and drive AI/ML initiatives within the organization, requiring a combination of technical expertise, leadership skills, and a passion for innovation in the field of data science and analytics.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Telangana
On-site
You will be responsible for designing and building backend components of our MLOps platform in Python on AWS. This includes collaborating with geographically distributed cross-functional teams and participating in an on-call rotation with the rest of the team to handle production incidents.

To be successful in this role, you should have 3+ years of professional backend development experience with Python. You should also have experience with web development frameworks such as Flask or FastAPI, as well as with WSGI and ASGI web servers like Gunicorn and Uvicorn. Experience with concurrent programming designs such as AsyncIO (a short sketch follows this posting), containers (Docker), AWS ECS or AWS EKS, unit and functional testing frameworks, and public cloud platforms like AWS is also required.

Nice-to-have skills include experience with Apache Kafka and developing Kafka client applications in Python; MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow; big data processing frameworks like Apache Spark; DevOps and IaC tools such as Terraform and Jenkins; various Python packaging options like Wheel, PEX, or Conda; and metaprogramming techniques in Python.

You should hold a Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related field. In addition to competitive salaries and benefits packages, Nisum India offers its employees continuous learning opportunities, parental medical insurance, various team-building activities, and free meals, including snacks, dinner, and subsidized lunch.
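A hedged sketch of the FastAPI + AsyncIO pattern the posting lists, fanning out two concurrent HTTP calls with httpx; the route and internal URLs are hypothetical.

```python
# Async FastAPI endpoint aggregating two backend calls concurrently.
import asyncio
import httpx
from fastapi import FastAPI

app = FastAPI()

async def fetch(client: httpx.AsyncClient, url: str) -> dict:
    resp = await client.get(url)
    resp.raise_for_status()
    return resp.json()

@app.get("/model/{model_id}/summary")
async def model_summary(model_id: str):
    async with httpx.AsyncClient() as client:
        metadata, metrics = await asyncio.gather(
            fetch(client, f"http://registry.internal/models/{model_id}"),
            fetch(client, f"http://metrics.internal/models/{model_id}"),
        )
    return {"metadata": metadata, "metrics": metrics}
```

Served with an ASGI server such as Uvicorn (`uvicorn main:app`), which matches the Gunicorn/Uvicorn requirement above.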
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Software Engineer - Backend (Python) with over 7 years of experience, you will be based in Hyderabad and play a crucial role in developing the backend components of the GenAI Platform. Your responsibilities will include designing and constructing backend features for the platform on AWS, collaborating with cross-functional teams spread across different locations, and participating in an on-call rotation for managing production incidents.

To excel in this role, you must possess the following skills:
- A minimum of 7 years of professional experience in backend web development using Python.
- Proficiency in AI, RAG, DevOps, and Infrastructure as Code (IaC) tools like Terraform and Jenkins.
- Familiarity with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Expertise in web development frameworks like Flask, Django, or FastAPI.
- Knowledge of concurrent programming concepts like AsyncIO.
- Experience with public cloud platforms such as AWS, Azure, or GCP, preferably AWS.
- Understanding of CI/CD practices, tools, and frameworks.

Additionally, the following skills would be advantageous:
- Experience with Apache Kafka and developing Kafka client applications using Python.
- Familiarity with big data processing frameworks, particularly Apache Spark.
- Proficiency in containers (Docker) and container platforms like AWS ECS or AWS EKS.
- Expertise in unit and functional testing frameworks.
- Knowledge of various Python packaging options such as Wheel, PEX, or Conda.
- Understanding of metaprogramming techniques in Python.

Join our team and contribute to creating a safe, compliant, and efficient access platform for LLMs, leveraging both open-source and commercial resources while adhering to Experian standards and policies. Be a part of a dynamic environment where you can utilize your expertise to build innovative solutions and drive the growth of the GenAI Platform.
Posted 2 weeks ago