2.0 - 6.0 years
0 Lacs
Sadar, Uttar Pradesh, India
On-site
Profile: Machine Learning Engineer
Experience: 2 to 6 years

Requirements:
- Python, pandas, NumPy, MySQL
- Data visualization: Matplotlib, Seaborn; data cleaning
- Deep learning: ANN, CNN, DNN, backpropagation, TensorFlow 2.x, Keras
- Web scraping with various libraries
- Natural language processing: understanding, representation, classification and clustering; NLTK, BoW, TF-IDF, word2vec
- Machine learning: supervised and unsupervised (all major algorithms)

Location: Noida Sector 63 (work from office)
Working Days: 5

Job Description
The Machine Learning Lead will oversee the full lifecycle of machine learning projects, from concept to production deployment. The role requires strong technical expertise and leadership to guide teams in delivering impactful AI-driven solutions.

Key Responsibilities
- Design and implement scalable ML models for various business applications.
- Manage end-to-end ML pipelines, including data preprocessing, model training, and deployment.
- Fine-tune foundation models and create small language models for deployment on AWS Inferentia and Trainium.
- Deploy ML models into production environments using platforms such as AWS.
- Lead the implementation of custom model development, ML pipelines, fine-tuning, and performance monitoring.
- Collaborate with cross-functional teams to identify and prioritize AI/ML use cases.

Required Skills
- Expertise in ML frameworks such as TensorFlow, PyTorch, and scikit-learn.
- Strong experience deploying ML models on cloud platforms, especially AWS.
- In-depth knowledge of SageMaker and SageMaker Pipelines.
- Familiarity with RAG-based architectures, agentic AI solutions, Inferentia, and Trainium.
- Advanced programming skills in Python with experience in APIs and microservices.
- Exceptional problem-solving abilities and a passion for innovation.

(ref: hirist.tech)
Posted 1 month ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: AI/ML Developer
Duration: 6 months (expected to be longer)
Work Location & Requirement: Chennai, onsite at least 3-4 days a week

Position Summary:
We are seeking a highly skilled and motivated Development Lead with deep expertise in ReactJS, Python, and AI/ML DevOps, along with working familiarity in AWS cloud services. This is a hands-on individual contributor role focused on developing and deploying a full-stack AI/ML-powered web application. The ideal candidate should be passionate about building intelligent, user-centric applications and capable of owning the end-to-end development process.

Position Description:
- Design and develop intuitive and responsive web interfaces using ReactJS.
- Build scalable backend services and RESTful APIs using Python frameworks (e.g., Flask, FastAPI, or Django).
- Integrate AI/ML models into the application pipeline and support inferencing, monitoring, and retraining flows.
- Automate development workflows and model deployments using DevOps best practices and tools (Docker, CI/CD, etc.).
- Deploy applications and ML services on AWS infrastructure, leveraging services such as EC2, S3, Lambda, SageMaker, and EKS.
- Ensure performance, security, and reliability of the application through testing, logging, and monitoring.
- Collaborate with data scientists, designers, and product stakeholders to refine and implement AI-powered features.
- Take ownership of application architecture, development lifecycle, and release management.

Minimum Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of hands-on experience in software development.
- Strong expertise in ReactJS- and NodeJS-based web application development.
- Proficient in Python for backend development and AI/ML model integration.
- Experience with at least one AI/ML framework, including work with LLMs.
- Solid understanding of DevOps concepts for ML workflows: containerization, CI/CD, testing, and monitoring.
- Experience deploying and operating applications in AWS cloud environments.
- Self-driven, with excellent problem-solving skills and attention to detail.
- Strong communication skills and ability to work independently in an agile, fast-paced environment.
Posted 1 month ago
3.0 years
0 Lacs
India
On-site
Coursera was launched in 2012 by Andrew Ng and Daphne Koller, with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. About The Role We at Coursera are seeking a highly skilled and motivated AI Specialist with expertise in developing and deploying advanced AI solutions. 
The ideal candidate will have 3+ years of experience, with a strong focus on leveraging AI technologies to derive insights, build predictive models, and enhance platform capabilities. This role offers a unique opportunity to contribute to cutting-edge projects that transform the online learning experience.

Key Responsibilities
- Deploy and customize AI/ML solutions using tools and platforms from Google AI, AWS, or other providers.
- Develop and optimize customer journey analytics to identify actionable insights and improve user experience.
- Design, implement, and optimize models for predictive analytics, information extraction, semantic parsing, and topic modelling.
- Perform comprehensive data cleaning and preprocessing to ensure high-quality inputs for model training and deployment.
- Build, maintain, and refine AI pipelines for data gathering, curation, model training, evaluation, and monitoring.
- Analyze large-scale datasets, including customer reviews, to derive insights for improving recommendation systems and platform features.
- Train and support team members in adopting and managing AI-driven tools and processes.
- Document solutions, workflows, and troubleshooting processes to ensure knowledge continuity.
- Stay informed on emerging AI/ML technologies to recommend suitable solutions for new use cases.
- Evaluate and enhance the quality of video and audio content using AI-driven techniques.

Qualifications
Education: Bachelor's degree in Computer Science, Machine Learning, or a related field (required).
Experience:
- 3+ years of experience in AI/ML development, with a focus on predictive modelling and data-driven insights.
- Proven experience in deploying AI solutions using platforms like Google AI, AWS, Microsoft Azure, or similar.
- Proficiency in programming languages such as Python, Java, or similar for AI tool customization and deployment.
- Strong understanding of APIs, cloud services, and integration of AI tools with existing systems.
- Proficiency in building and scaling AI pipelines for data engineering, model training, and monitoring.
- Experience with frameworks and libraries for building AI agents, such as LangChain and AutoGen.
- Familiarity with designing autonomous workflows using LLMs and external APIs.
Technical Skills:
- Programming: Advanced proficiency in Python, PyTorch, TensorFlow, and scikit-learn.
- Data Engineering: Expertise in data cleaning, preprocessing, and handling large-scale datasets. Preferred experience with tools like AWS Glue, PySpark, and AWS S3.
- Cloud Technologies: Experience with AWS SageMaker, Google AI, Google Vertex AI, and Databricks.
- Strong SQL skills and advanced proficiency in statistical programming languages such as Python, along with experience using data manipulation libraries (e.g., Pandas, NumPy).

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
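The data cleaning and preprocessing work the role emphasizes can be illustrated with a small pandas sketch. The review data is entirely made up (not Coursera's), chosen to show the usual defects a cleaning step handles: missing values, inconsistent casing, and duplicates:

```python
import numpy as np
import pandas as pd

# Toy review dataset with deliberate defects
raw = pd.DataFrame({
    "review": ["Great course", "great course", None, "Too fast"],
    "rating": [5, 5, np.nan, 2],
})

cleaned = (
    raw.dropna(subset=["review"])                     # drop rows with no text
       .assign(review=lambda d: d["review"].str.lower().str.strip())
       .drop_duplicates(subset=["review"])            # collapse exact duplicates
       .assign(rating=lambda d: d["rating"].fillna(d["rating"].median()))
)

print(cleaned)  # two rows survive: "great course" and "too fast"
```

At platform scale the same steps would run in PySpark or AWS Glue, as the posting's preferred tooling suggests, but the logic is identical.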
Posted 1 month ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 13
Location: Gurgaon, Hyderabad and Bangalore

Job Description
We are seeking a highly skilled and visionary Agentic AI Architect to lead the strategic design, development, and scalable implementation of autonomous AI systems within our organization. This role demands an individual with deep expertise in cutting-edge AI architectures, a strong commitment to ethical AI practices, and a proven ability to drive innovation. The ideal candidate will architect intelligent, self-directed decision-making systems that integrate seamlessly with enterprise workflows and propel our operational efficiency forward.

Key Responsibilities
As an Agentic AI Architect, you will:
- AI Architecture and System Design: Architect and design robust, scalable, and autonomous AI systems that seamlessly integrate with enterprise workflows, cloud platforms, and advanced LLM frameworks. Define blueprints for APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making.
- Strategic AI Leadership: Provide technical leadership and strategic direction for AI initiatives focused on agentic systems. Guide cross-functional teams of AI engineers, data scientists, and developers in the adoption and implementation of advanced AI architectures.
- Framework and Platform Expertise: Evaluate, recommend, and implement leading AI tools and frameworks, with a strong focus on autonomous AI solutions (e.g., multi-agent frameworks, self-optimizing systems, LLM-driven decision engines). Drive the selection and utilization of cloud platforms (AWS SageMaker preferred, Azure ML, Google Cloud Vertex AI) for scalable AI deployments.
- Customization and Optimization: Design strategies for optimizing autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Define methodologies for fine-tuning LLMs, multi-agent frameworks, and feedback loops to align with overarching business goals and architectural principles.
- Innovation and Research Integration: Spearhead the integration of R&D initiatives into production architectures, advancing agentic AI capabilities. Evaluate and prototype emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems for architectural viability.
- Documentation and Architectural Blueprinting: Develop comprehensive technical white papers, architectural diagrams, and best practices for autonomous AI system design and deployment. Serve as a thought leader, sharing architectural insights at conferences and contributing to open-source AI communities.
- System Validation and Resilience: Design and oversee rigorous architectural testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation strategies, ensuring alignment with compliance, ethical, and performance benchmarks for robust production systems.
- Stakeholder Collaboration and Advocacy: Collaborate with executives, product teams, and compliance officers to align AI architectural initiatives with strategic objectives. Advocate for AI-driven innovation and architectural best practices across the organization.

Qualifications
Technical Expertise:
- 12+ years of progressive experience in AI/ML, with a strong track record as an AI Architect, ML Architect, or AI Solutions Lead.
- 7+ years specifically focused on designing and architecting autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines).
- Expertise in Python (mandatory) and familiarity with Node.js for architectural integrations.
- Extensive hands-on experience with autonomous AI tools and frameworks: LangChain, Autogen, CrewAI, or architecting custom agentic frameworks.
- Proficiency in cloud platforms for AI architecture: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI, with a deep understanding of their AI service offerings.
- Demonstrable experience with MLOps pipelines (e.g., Kubeflow, MLflow) and designing scalable deployment strategies for AI agents in production environments.

Leadership & Strategic Acumen:
- Proven track record of leading the architectural direction of AI/ML teams, managing complex AI projects, and mentoring senior technical staff.
- Strong understanding and practical application of AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and advanced bias mitigation techniques within AI architectures.
- Exceptional ability to translate complex technical AI concepts into clear, concise architectural plans and strategies for non-technical stakeholders and executive leadership.
- Ability to envision and articulate a long-term strategy for AI within the business, aligning AI initiatives with business objectives and market trends.
- Foster collaboration across various practices, including product management, engineering, and marketing, to ensure cohesive implementation of AI strategies that meet business goals.

What’s In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316525 Posted On: 2025-06-11 Location: Gurgaon, Haryana, India
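Stripped of any particular framework, the agentic pattern this posting architects around (a planner repeatedly choosing tool calls until a goal is met) can be sketched in plain Python. Everything below is invented for illustration: the planner is a hard-coded stub standing in for an LLM, and the tool names are hypothetical. Real systems would delegate planning to an LLM via LangChain, AutoGen, CrewAI, or a custom framework:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    tool: str
    arg: str

def stub_planner(goal: str, history: list) -> Optional[Step]:
    # A real agent would ask an LLM to pick the next tool call,
    # feeding it the goal and the results gathered so far.
    if not history:
        return Step("search", goal)
    return None  # stub declares the goal met after one step

TOOLS: "dict[str, Callable[[str], str]]" = {
    "search": lambda q: f"results for {q!r}",
}

def run_agent(goal: str) -> list:
    history = []
    step = stub_planner(goal, history)
    while step is not None:           # plan -> act -> observe loop
        history.append(TOOLS[step.tool](step.arg))
        step = stub_planner(goal, history)
    return history

print(run_agent("quarterly revenue"))
```

The architectural concerns the posting lists (stress testing, adversarial simulation, audit trails) all attach to this loop: the `history` list is the natural audit log, and the planner is the component that needs guardrails.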
Posted 1 month ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role
Grade Level (for internal use): 12
Lead Agentic AI Developer
Location: Gurgaon, Hyderabad and Bangalore

Job Description
A Lead Agentic AI Developer will drive the design, development, and deployment of autonomous AI systems that enable intelligent, self-directed decision-making. Their day-to-day operations focus on advancing AI capabilities, leading teams, and ensuring ethical, scalable implementations.

Responsibilities
- AI System Design and Development: Architect and build autonomous AI systems that integrate with enterprise workflows, cloud platforms, and LLM frameworks. Develop APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making.
- Team Leadership and Mentorship: Lead cross-functional teams of AI engineers, data scientists, and developers. Mentor junior staff in agentic AI principles, reinforcement learning, and ethical AI governance.
- Customization and Advancement: Optimize autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Fine-tune LLMs, multi-agent frameworks, and feedback loops to align with business goals.
- Ethical AI Governance: Monitor AI behavior, audit decision-making processes, and implement safeguards to ensure transparency, fairness, and compliance with regulatory standards.
- Innovation and Research: Spearhead R&D initiatives to advance agentic AI capabilities. Experiment with emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems.
- Documentation and Thought Leadership: Publish technical white papers, case studies, and best practices for autonomous AI. Share insights at conferences and contribute to open-source AI communities.
- System Validation: Oversee rigorous testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation. Validate alignment with ethical and performance benchmarks.
- Stakeholder Leadership: Collaborate with executives, product teams, and compliance officers to align AI initiatives with strategic objectives. Advocate for AI-driven innovation across the organization.

Required Skills/Qualifications
What We’re Looking For:
- Technical Expertise: 8+ years as a Senior AI Engineer, ML Architect, or AI Solutions Lead, with 5+ years focused on autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines).
- Expertise in Python (mandatory) and familiarity with Node.js.
- Hands-on experience with autonomous AI tools: LangChain, Autogen, CrewAI, or custom agentic frameworks.
- Proficiency in cloud platforms: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI.
- Experience with MLOps pipelines (e.g., Kubeflow, MLflow) and scalable deployment of AI agents.
- Leadership: Proven track record of leading AI/ML teams, managing complex projects, and mentoring technical staff.
- Ethical AI: Familiarity with AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and bias mitigation techniques.
- Communication: Exceptional ability to translate technical AI concepts for non-technical stakeholders.

Nice To Have
- Contributions to AI research (published papers, patents) or open-source AI projects (e.g., TensorFlow Agents, AutoGen).
- Experience with DevOps/MLOps tools: Kubeflow, MLflow, Docker, or Terraform.
- Expertise in NLP, computer vision, or graph-based AI systems.
- Familiarity with quantum computing or neuromorphic architectures for AI.

What’s In It For You?
Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. 
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. 
Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 316524 Posted On: 2025-06-11 Location: Gurgaon, Haryana, India
Posted 1 month ago
14.0 - 16.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job Category
Software Engineering

Job Details
About Salesforce
We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too - driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good - you've come to the right place.

Role Description
Join the AI team at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a highly cross-functional team comprised of Data Scientists, Software Engineers, Machine learning engineers, UX experts, and product managers to build upon Agentforce, our innovative new AI framework. We value execution, clear communication, feedback and making learning fun.

Your impact - You will:
- Architect, design, implement, test and deliver highly scalable AI solutions: Agents, AI Copilots/assistants, Chatbots, AI Planners, RAG solutions.
- Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.)
- Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company
- Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps
- Analyze and provide feedback on product strategy and technical feasibility
- Drive long-term design strategies that span multiple sophisticated projects; deliver technical reports and performance presentations to customers and at industry events
- Actively communicate with, encourage and motivate all levels of staff
- Be a domain expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements
- Troubleshoot complex production issues and work with support and customers as needed

Required Skills:
- 14+ years of experience in building highly scalable Software-as-a-Service applications/platforms
- Experience building technical architectures that address complex performance issues
- Thrive in dynamic environments, working on cutting-edge projects that often come with ambiguity; an innovation/startup mindset to be able to adapt
- Deep knowledge of object-oriented programming and experience with at least one object-oriented programming language, preferably Java
- Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development
- High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.)
- Proven understanding of web technologies, such as JavaScript, CSS, HTML5, XML, JSON, and/or Ajax
- Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL
- Experience delivering or partnering with teams that ship AI products at high scale
- Experience in automated testing including unit and functional testing using Java, JUnit, JSUnit, Selenium
- Demonstrated ability to drive long-term design strategies that span multiple complex projects
- Experience delivering technical reports and presentations to customers and at industry events
- Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues
- Experience with the full software lifecycle in highly agile and ambiguous environments
- Excellent interpersonal and communication skills

Preferred Skills:
- Solid experience in API development, API lifecycle management and/or client SDK development
- Experience with machine learning or cloud technology platforms like AWS SageMaker, Terraform, Spinnaker, EKS, GKE
- Experience with AI/ML and data science, including predictive and generative AI
- Experience with data engineering, data pipelines or distributed systems
- Experience with continuous integration (CI) and continuous deployment (CD), and service ownership
- Familiarity with Salesforce APIs and technologies
- Ability to support/resolve production customer escalations with excellent debugging and problem-solving skills

BENEFITS & PERKS
Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more!
World-class enablement and on-demand training, with exposure to executive thought leaders and regular 1:1 coaching with leadership. Volunteer opportunities and participation in our 1:1:1 model for giving back to the community. For more details, visit Accommodations If you require assistance due to a disability applying for open positions, please submit a request via this . Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean, exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications - without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Python, AWS & Generative AI Developer Experience: 3–5 years Location: Bengaluru, Hyderabad, Chennai, Mumbai, Pune, Kolkata, Gurugram Employment Type: Full-Time Company Website: www.cirruslabs.io About CirrusLabs CirrusLabs is a global digital transformation and IT solutions provider that empowers businesses to achieve agility, innovation, and scalable growth. We build solutions that align with real-world business needs while championing a strong culture of engineering excellence and team ownership. Role Overview We are looking for a motivated and skilled Python, AWS & GenAI Developer to join our agile product development team. The ideal candidate is passionate about backend development, cloud-native deployment, and building next-gen AI-driven applications. You will collaborate with cross-functional teams to build scalable APIs, integrate with modern databases, and contribute to Generative AI-powered solutions. Key Responsibilities Agile Development Participate in Agile ceremonies, sprint planning, and iterative development cycles. Refine user stories into technical tasks and deliver working solutions every sprint. Backend & API Development Design and develop RESTful APIs using Python (FastAPI/Flask/Django). Implement integration logic with external services and data pipelines. Database Development Develop and optimize data models using PostgreSQL and Vector Databases (e.g., Pinecone, Weaviate, FAISS). Manage schema changes, migrations, and efficient query design. AI & GenAI Integration Integrate Generative AI models (e.g., OpenAI, HuggingFace) into applications where applicable. Work with embedding models and LLMs to support AI-driven use cases. Deployment & DevOps Package applications into Docker containers and deploy them to Kubernetes clusters. Implement CI/CD practices to support automated testing and delivery. Testing & Code Quality Create comprehensive unit and integration tests across backend layers. 
Follow clean code principles and participate in peer code reviews. Frontend Collaboration (Optional) Contribute to frontend development using Angular or React, especially for micro-frontend architecture. Work closely with frontend developers to define and support REST API consumption. Team Culture & Collaboration Foster a culture of autonomy, accountability, and knowledge-sharing. Actively engage with the Product Owner to align features with user needs. Feature Demonstration Participate in sprint reviews to demo implemented features and gather feedback for improvements. Required Skills Strong experience in Python development with emphasis on REST APIs. Solid understanding of PostgreSQL and at least one vector database. Familiarity with Generative AI/LLM integration (OpenAI APIs, embeddings, etc.). Experience deploying applications using Docker and Kubernetes. Hands-on experience in agile software development. Knowledge of writing unit tests and using tools like pytest or unittest. Preferred Skills Exposure to frontend frameworks like Angular or React. Understanding of micro-frontend and component-based architecture. Experience with cloud services such as AWS Lambda, S3, SageMaker, ECS/EKS. Familiarity with CI/CD tools like GitHub Actions, Jenkins, or GitLab CI.
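The vector-database skill this posting asks for (Pinecone, Weaviate, FAISS) boils down to nearest-neighbor search over embedding vectors. A minimal pure-Python sketch of the idea, with invented three-dimensional "embeddings" standing in for real model output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length, nonzero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    # index maps document id -> embedding vector; return the k closest ids.
    ranked = sorted(index, key=lambda doc: cosine(query, index[doc]), reverse=True)
    return ranked[:k]

index = {
    "doc_refunds":  [0.9, 0.1, 0.0],
    "doc_shipping": [0.1, 0.8, 0.2],
    "doc_returns":  [0.8, 0.2, 0.1],
}
print(top_k([1.0, 0.0, 0.0], index, k=2))  # refund-like query
```

A production system would embed text with a real model and use an approximate-nearest-neighbor index instead of this linear scan, but the ranking logic is the same.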
Posted 1 month ago
8.0 years
0 Lacs
India
Remote
AI / Generative AI Engineer Location: Remote (Pan India) Job Type: Full-time NOTE: "Only immediate joiners or candidates with a notice period of 15 days or less will be considered" Overview: We are seeking a highly skilled and motivated AI/Generative AI Engineer to join our innovative team. The ideal candidate will have a strong background in designing, developing, and deploying artificial intelligence and machine learning models, with a specific focus on cutting-edge Generative AI technologies. This role requires hands-on experience with one or more major cloud platforms (Google Cloud Platform - GCP, Amazon Web Services - AWS) and/or modern data platforms (Databricks, Snowflake). You will be instrumental in building and scaling AI solutions that drive business value and transform user experiences. Key Responsibilities: Design and Development: Design, build, train, and deploy scalable and robust AI/ML models, including traditional machine learning algorithms and advanced Generative AI models (e.g., Large Language Models - LLMs, diffusion models). Develop and implement algorithms for tasks such as natural language processing (NLP), text generation, image synthesis, speech recognition, and forecasting. Work extensively with LLMs, including fine-tuning, prompt engineering, retrieval-augmented generation (RAG), and evaluating their performance. Develop and manage data pipelines for data ingestion, preprocessing, feature engineering, and model training, ensuring data quality and integrity. Platform Expertise: Leverage cloud AI/ML services on GCP (e.g., Vertex AI, AutoML, BigQuery ML, Model Garden, Gemini), AWS (e.g., SageMaker, Bedrock, S3), Databricks, and/or Snowflake to build and deploy solutions. Architect and implement AI solutions ensuring scalability, reliability, security, and cost-effectiveness on the chosen platform(s). Optimize data storage, processing, and model serving components within the cloud or data platform ecosystem. 
MLOps and Productionization: Implement MLOps best practices for model versioning, continuous integration/continuous deployment (CI/CD), monitoring, and lifecycle management. Deploy models into production environments and ensure their performance, scalability, and reliability. Monitor and optimize the performance of AI models in production, addressing issues related to accuracy, speed, and resource utilization. Collaboration and Innovation: Collaborate closely with data scientists, software engineers, product managers, and business stakeholders to understand requirements, define solutions, and integrate AI capabilities into applications and workflows. Stay current with the latest advancements in AI, Generative AI, machine learning, and relevant cloud/data platform technologies. Lead and participate in the ideation and prototyping of new AI applications and systems. Ensure AI solutions adhere to ethical standards, responsible AI principles, and regulatory requirements, addressing issues like data privacy, bias, and fairness. Documentation and Communication: Create and maintain comprehensive technical documentation for AI models, systems, and processes. Effectively communicate complex AI concepts and results to both technical and non-technical audiences. Required Qualifications: 8+ years of experience with software development in one or more programming languages, and with data structures/algorithms/Data Architecture. 3+ years of experience with state of the art GenAI techniques (e.g., LLMs, Multi-Modal, Large Vision Models) or with GenAI-related concepts (language modeling, computer vision). 3+ years of experience with ML infrastructure (e.g., model deployment, model evaluation, optimization, data processing, debugging). Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related technical field. Proven experience as an AI Engineer, Machine Learning Engineer, or a similar role. 
Strong programming skills in Python. Familiarity with other languages like Java, Scala, or R is a plus. Solid understanding of machine learning algorithms (supervised, unsupervised, reinforcement learning), deep learning concepts (e.g., CNNs, RNNs, Transformers), and statistical modeling. Hands-on experience with developing and deploying Generative AI models and techniques, including working with Large Language Models (LLMs like GPT, BERT, LLaMA, etc.). Proficiency in using common AI/ML frameworks and libraries such as TensorFlow, PyTorch, scikit-learn, Keras, Hugging Face Transformers, LangChain, etc. Demonstrable experience with at least one of the following cloud/data platforms: GCP: Experience with Vertex AI, BigQuery ML, Google Cloud Storage, and other GCP AI/ML services. AWS: Experience with SageMaker, Bedrock, S3, and other AWS AI/ML services. Databricks: Experience building and scaling AI/ML solutions on the Databricks Lakehouse Platform, including MLflow. Snowflake: Experience leveraging Snowflake for data warehousing, data engineering for AI/ML workloads, and Snowpark. Experience with data engineering, including data acquisition, cleaning, transformation, and building ETL/ELT pipelines. Knowledge of MLOps tools and practices for model deployment, monitoring, and management. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
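The retrieval-augmented generation (RAG) pattern this role calls for reduces to two steps: retrieve relevant passages, then ground the model's prompt in them. A toy sketch using word overlap in place of real embeddings; the corpus strings and prompt template are illustrative, not any particular SDK:

```python
import string

def toks(s):
    # Lowercase and strip punctuation before splitting into a word set.
    return set(s.lower().translate(str.maketrans("", "", string.punctuation)).split())

def retrieve(question, corpus, k=2):
    # Naive lexical retrieval: rank passages by word overlap with the question.
    q = toks(question)
    return sorted(corpus, key=lambda p: len(q & toks(p)), reverse=True)[:k]

def build_prompt(question, passages):
    # Ground the (hypothetical) model's answer in the retrieved context.
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "Vertex AI is GCP's managed ML platform.",
    "SageMaker is the AWS managed ML platform.",
    "Snowpark runs code inside Snowflake.",
]
passages = retrieve("which AWS platform is managed for ML", corpus)
prompt = build_prompt("which AWS platform is managed for ML", passages)
```

A real pipeline would swap the overlap score for embedding similarity from a vector store and send the assembled prompt to an LLM; the structure stays the same.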
Posted 1 month ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Senior Data Scientist – Cognitive Services and AI/ML Experience Level: 6-12 years Competency: Cognitive Services Chennai-WFO Primary Skill: Artificial Intelligence & Machine Learning (AI/ML) Algorithms Secondary Skill: Natural Language Processing (NLP) Computer Vision Cloud-based AI Services (AWS SageMaker, Azure Cognitive Services, Google AI Platform) Job Description (JD): The Data Scientist specializing in Cognitive Services will develop and deploy AI/ML solutions that leverage advanced algorithms to solve complex business problems. The role focuses on integrating machine learning models into applications powered by cognitive capabilities such as image recognition, speech-to-text, and sentiment analysis. Roles and Responsibilities: AI/ML Model Development: Design and implement machine learning models tailored for cognitive services. Algorithm Design: Leverage algorithms for regression, classification, clustering, NLP, and computer vision tasks. Data Integration: Collaborate with data engineers to ensure high-quality and well-structured data pipelines. Cloud Services: Utilize AI/ML tools on platforms like AWS, Azure, or Google for training, deployment, and scalability. Model Deployment: Work with MLOps frameworks to deploy and monitor models in production environments. Cross-functional Collaboration: Partner with software developers and business analysts to integrate cognitive services into customer-facing applications. Performance Optimization: Optimize models and algorithms for speed and scalability. Research & Innovation: Stay abreast of emerging AI/ML technologies and incorporate them into cognitive solutions. Qualifications: Master’s or PhD in Data Science, Machine Learning, or a related field. 8+ years of hands-on experience in AI/ML with a focus on cognitive services. Proficiency in frameworks such as TensorFlow, PyTorch, and Scikit-learn. Expertise in Python, R, and SQL. 
Experience with cloud-based AI tools and services like AWS SageMaker or Azure Cognitive Services. Strong understanding of MLOps practices. Preferred Skills: Familiarity with neural networks for advanced deep learning tasks. Knowledge of speech recognition, image processing, and sentiment analysis technologies. Experience with big data tools such as Apache Spark and Hadoop. Certification in AWS, Azure, or Google Cloud AI services.
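Cloud cognitive services expose sentiment analysis as a single API call; underneath, the simplest baseline the posting's "sentiment analysis" skill builds on is a lexicon score. A toy sketch (the word lists are invented for illustration and nothing like a production model):

```python
POSITIVE = {"great", "excellent", "good", "love", "fast"}
NEGATIVE = {"bad", "poor", "slow", "hate", "broken"}

def sentiment(text):
    # Toy lexicon scorer: count positive vs negative words and compare.
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("great fast service"))  # positive
```

A managed service replaces the lexicon with a trained classifier and returns confidence scores, but the input/output contract (text in, label out) is the same shape.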
Posted 1 month ago
0 years
0 Lacs
Hyderābād
On-site
Req ID: 327890 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python Developer - Digital Engineering Sr. Engineer to join our team in Hyderabad, Telangana (IN-TG), India (IN). PYTHON Data Engineer Exposure to retrieval-augmented generation (RAG) systems and vector databases. Strong programming skills in Python (and optionally Scala or Java). Hands-on experience with data storage solutions (e.g., Delta Lake, Parquet, S3, BigQuery) Experience with data preparation for transformer-based models or LLMs Expertise in working with large-scale data frameworks (e.g., Spark, Kafka, Dask) Familiarity with MLOps tools (e.g., MLflow, Weights & Biases, SageMaker Pipelines) About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. 
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
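The "data preparation for transformer-based models or LLMs" skill in the posting above commonly means splitting documents into overlapping token windows before embedding or training. A minimal sketch; the window size and overlap are illustrative defaults, not values from the posting:

```python
def chunk(tokens, size=5, overlap=2):
    # Split a token list into overlapping windows, a common preprocessing
    # step before embedding documents for transformer models.
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
    return chunks

tokens = list(range(12))
print(chunk(tokens))  # windows of 5 tokens, each sharing 2 with its neighbor
```

The overlap keeps context that would otherwise be cut at a chunk boundary, at the cost of storing some tokens twice.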
Posted 1 month ago
5.0 years
0 Lacs
Hyderābād
On-site
Job Responsibilities: Development of AI/ML models and workflows to apply advanced algorithms and machine learning Enable the team to run an automated design engine Create design standards and assurance processes for easily deployable and scalable models Ensure successful developments: be a technical leader through strong example and training of more junior engineers, documenting all relevant product and design information to educate others on novel design techniques and provide guidance on product usage CI/CD pipeline integration with Azure DevOps/Git as the code repository Minimum Qualifications (Experience and Skills) 5+ years of data science experience A strong software engineering background with emphasis on C/C++ or Python 1+ years of experience with AWS SageMaker services Exposure to AWS Lambda, API Gateway, AWS Amplify, AWS Serverless, AWS Cognito, and AWS security Experience in debugging complex issues with a focus on object-oriented software design and development Experience with optimization techniques and algorithms Experience developing artificial neural networks and deep neural networks Previous experience working in an Agile environment and collaborating with multi-disciplinary teams Ability to communicate and document design work with clarity and completeness Previous experience working on machine learning projects Team player with a strong sense of urgency to meet product requirements with punctuality and professionalism Preferred Qualifications Programming experience in Perl/Python/R/MATLAB/shell scripting Knowledge of neural networks, with hands-on experience using ML frameworks such as TensorFlow or PyTorch Knowledge of Convolutional Neural Networks (CNNs) and RNN/LSTMs Knowledge of data management fundamentals and data storage principles Knowledge of distributed systems as it pertains to data storage and computing Knowledge of reinforcement learning techniques Knowledge of evolutionary algorithms AWS Certification
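The optimization and neural-network experience listed above rests on gradient descent, which can be shown on the smallest possible case: a single linear neuron trained with the same update rule that backpropagation generalizes layer by layer. A self-contained sketch with made-up data:

```python
def train(xs, ys, lr=0.1, epochs=200):
    # Fit y ≈ w*x + b with stochastic gradient descent on squared error.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w * x + b) - y   # prediction error on this sample
            w -= lr * err * x       # gradient of 0.5*err**2 w.r.t. w
            b -= lr * err           # gradient of 0.5*err**2 w.r.t. b
    return w, b

# Noise-free data from y = 2x + 1, so the fit should recover w ≈ 2, b ≈ 1.
w, b = train([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

Frameworks like TensorFlow and PyTorch automate exactly this loop (computing the gradients and applying the updates) across millions of parameters.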
Posted 1 month ago
2.0 - 3.0 years
3 - 8 Lacs
Hyderābād
On-site
Job Description: We are hiring an experienced Python Developer who has exposure to Artificial Intelligence and strong skills in DevOps and Cloud technologies. The candidate will be responsible for designing, developing, and maintaining Python applications integrated with AI models and deployed on cloud infrastructure. Key Responsibilities: Develop and maintain high-quality Python-based applications Collaborate with teams to translate business needs into technical solutions Integrate AI algorithms, including large language model workflows and POCs Optimize code for scalability, performance, and maintainability Implement and manage cloud infrastructure on AWS, Azure, or GCP Work with DevOps teams on CI/CD, containerization (Docker, Kubernetes), and automation Debug and troubleshoot software issues efficiently Stay updated with the latest developments in AI, Python, DevOps, and Cloud technologies Required Skills: Bachelor's degree in Computer Science or related field 2–3 years of experience as a Python Developer with AI/ML project exposure Hands-on experience with LLMs and AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn) Strong knowledge of DevOps tools such as Docker, Kubernetes, Git, Jenkins Cloud deployment and management experience (AWS / Azure / GCP) Familiarity with infrastructure-as-code (Terraform, CloudFormation) is a plus Good communication and teamwork skills Preferred Qualifications: Experience with cloud ML services (Amazon SageMaker, Google Cloud ML Engine) Knowledge of LLM frameworks like LangFlow Agile/Scrum development methodology experience Apply now if you are passionate about AI and cloud technologies and want to work in an innovative environment!
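For the LLM workflows plus CI/CD combination this role describes, a common pattern is injecting the model client so automated tests run against a deterministic stub instead of a paid, nondeterministic API. A sketch with invented names (`StubLLM` and `complete` are illustrative, not a real SDK):

```python
class StubLLM:
    # Deterministic stand-in for a hosted LLM, used in unit tests and CI.
    def complete(self, prompt):
        return "STUB: " + prompt[:20]

def summarize(text, client):
    # The client is injected so production code can pass a real API wrapper
    # while tests pass StubLLM; the business logic stays identical.
    prompt = "Summarize: " + text
    return client.complete(prompt)

result = summarize("quarterly sales rose 12%", StubLLM())
```

In a CI pipeline the stubbed path exercises prompt construction, error handling, and parsing without network access; a thin adapter class wraps the real provider in production.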
Posted 1 month ago
0 years
0 Lacs
Chandigarh, India
On-site
Company Profile Oceaneering is a global provider of engineered services and products, primarily to the offshore energy industry. We develop products and services for use throughout the lifecycle of an offshore oilfield, from drilling to decommissioning. We operate the world's premier fleet of work class ROVs. Additionally, we are a leader in offshore oilfield maintenance services, umbilicals, subsea hardware, and tooling. We also use applied technology expertise to serve the defense, entertainment, material handling, aerospace, science, and renewable energy industries. Since 2003, Oceaneering’s India Center has been an integral part of operations for Oceaneering’s robust product and service offerings across the globe. This center caters to diverse business needs, from oil and gas field infrastructure and subsea robotics to automated material handling & logistics. Our multidisciplinary team offers a wide spectrum of solutions, encompassing Subsea Engineering, Robotics, Automation, Control Systems, Software Development, Asset Integrity Management, Inspection, ROV operations, Field Network Management, Graphics Design & Animation, and more. In addition to these technical functions, Oceaneering India Center plays host to several crucial business functions, including Finance, Supply Chain Management (SCM), Information Technology (IT), Human Resources (HR), and Health, Safety & Environment (HSE). Our world class infrastructure in India includes modern offices, industry-leading tools and software, equipped labs, and beautiful campuses aligned with the future way of work. Oceaneering in India as well as globally has a great work culture that is flexible, transparent, and collaborative with great team synergy. At Oceaneering India Center, we take pride in “Solving the Unsolvable” by leveraging the diverse expertise within our team. Join us in shaping the future of technology and engineering solutions on a global scale. 
Position Summary The Principal Data Scientist will develop Machine Learning and/or Deep Learning based integrated solutions that address customer needs such as inspection topside and subsea. They will also be responsible for development of machine learning algorithms for automation and development of data analytics programs for Oceaneering’s next generation systems. The position requires the Principal Data Scientist to work with various Oceaneering Business units across global time zones but also offers the flexibility to work in a Hybrid Work-office environment. Essential Duties And Responsibilities Lead and supervise a team of moderately experienced engineers on product/prototype design & development assignments or applications. Work both independently and collaboratively to develop custom data models and algorithms to apply on data sets that will be deployed in existing and new products. Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies. Assess the effectiveness and accuracy of new data sources and data gathering techniques. Build data models and organize structured and unstructured data to interpret solutions. Prepares data for predictive and prescriptive modeling. Architect solutions by selection of appropriate technology and components Determines the technical direction and strategy for solving complex, significant, or major issues. Plans and evaluates architectural design and identifies technical risks and associated ways to mitigate those risks. Prepares design proposals to reflect cost, schedule, and technical approaches. Recommends test control, strategies, apparatus, and equipment. Develop, construct, test, and maintain architectures. Lead research activities for ongoing government and commercial projects and products. Collaborate on proposals, grants, and publications in algorithm development. 
Collect data as warranted to support the algorithm development efforts. Work directly with software engineers to implement algorithms into commercial software products. Work with third parties to utilize off-the-shelf industrial solutions. Algorithm development on key research areas based on the client’s technical problem. This requires constant paper reading and staying ahead of the game by knowing what is and will be state of the art in this field. Ability to work hands-on in cross-functional teams with a strong sense of self-direction. Non-essential Develop an awareness of programming and design alternatives Cultivate and disseminate knowledge of application development best practices Gather statistics and prepare and write reports on the status of the programming process for discussion with management and/or team members Direct research on emerging application development software products, languages, and standards in support of procurement and development efforts Train, manage and provide guidance to junior staff Perform all other duties as requested, directed or assigned Supervisory Responsibilities This position does not have direct supervisory responsibilities. Reporting Relationship: Engagement Head Qualifications REQUIRED Bachelor’s degree in Electronics and Electrical Engineering (or related field) with eight or more years of past experience working on Machine Learning and Deep Learning based projects OR Master’s degree in Data Science (or related field) with six or more years of past experience working on Machine Learning and Deep Learning based projects DESIRED Strong knowledge of advanced statistical functions: histograms and distributions, regression studies, scenario analysis, etc. Proficient in Object Oriented Analysis, Design and Programming Strong background in data engineering tools like Python/C#, R, Apache Spark, Scala, etc. Prior experience in handling large amounts of data that include texts, shapes, sounds, images and/or videos. 
Knowledge of SaaS platforms like Microsoft Fabric, Databricks, Snowflake, h2o, etc. Background experience working on cloud platforms like Azure ML, AWS SageMaker, or GCP Vertex AI Proficient in querying SQL and NoSQL databases Hands-on experience with various databases like MySQL/PostgreSQL/Oracle, MongoDB, InfluxDB, TimescaleDB, neo4j, Arango, Redis, Cassandra, etc. Prior experience with at least one probabilistic/statistical ambiguity resolution algorithm Proficient in Windows and Linux operating systems Basic understanding of ML frameworks like PyTorch and TensorFlow Basic understanding of messaging protocols like Kafka, MQTT or RabbitMQ Prior experience with big data platforms like Hadoop, Apache Spark, or Hive is a plus. Knowledge, Skills, Abilities, And Other Characteristics Ability to analyze situations accurately, utilizing a variety of analytical techniques in order to make well informed decisions Ability to effectively prioritize and execute tasks in a high-pressure environment Skill to gather, analyze and interpret data Ability to determine and meet customer needs Ensures that others involved in a project or effort are kept informed about developments and plans Knowledge of communication styles and techniques Ability to establish and maintain cooperative working relationships Skill to prioritize workflow in a changing work environment Knowledge of applicable data privacy practices and laws Strong analytical and problem-solving skills. Additional Information This position is considered OFFICE WORK which is characterized as follows. Almost exclusively indoors during the day and occasionally at night Occasional exposure to airborne dust in the work place Work surface is stable (flat) The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. 
This position is considered LIGHT work. Occasional: lift up to 20 pounds; climbing, stooping, kneeling, squatting, and reaching. Frequent: lift up to 10 pounds; standing. Constant: repetitive movements of arms and hands; sitting with back supported. Closing Statement In addition, we make a priority of providing learning and development opportunities to enable employees to achieve their potential and take charge of their future. As well as developing employees in a specific role, we are committed to lifelong learning and ongoing education, including developing people skills and identifying future supervisors and managers. Every month, hundreds of employees are provided training, including HSE awareness, apprenticeships, entry and advanced level technical courses, management development seminars, and leadership and supervisory training. We have a strong ethos of internal promotion. We can offer long-term employment and career advancement across countries and continents. Working at Oceaneering means that if you have the ability, drive, and ambition to take charge of your future, you will be supported to do so and the possibilities are endless. Equal Opportunity/Inclusion Oceaneering’s policy is to provide equal employment opportunity to all applicants.
Posted 1 month ago
24.0 years
3 - 8 Lacs
Noida
On-site
SynapseIndia is a software development company with over 24 years of experience, featuring development offices in India and the USA. We serve clients worldwide, delivering innovative solutions tailored to their needs. Our Noida SEZ office is conveniently located just a 10-minute walk from the nearest metro station. Why work with us? Partnerships with Industry Leaders: We are a Google and Microsoft partner, staffed by certified professionals. Global Presence: As a multinational corporation, we have clients and employees across the globe. Structured Environment: We follow CMMI Level-5 compliant processes to ensure quality and efficiency. Timely Salaries: We have consistently paid salaries on time since our inception. Job Stability: Despite market fluctuations, we have not had to lay off employees. Work-Life Balance: Enjoy weekends off on the 2nd and last Saturday of every month, with no night shifts. Our employees are 100% satisfied, thanks to a culture of trust and growth opportunities. Eco-Friendly Workplace: We promote health and well-being with special anti-radiation and energy removal features in our offices. We prioritize the job security of all our employees. We celebrate all festivals with enthusiasm and joy. Yearly Appraisals: Exceptional performers can receive over 100% increments during appraisals. We recognize and reward top performers on a monthly basis for their outstanding contributions. We provide Accidental and Medical Insurance to our employees. Who are we looking for? Designation : Project Manager (Customer Advocacy) Experience Range : 5+ years What is the work? Coordinate project planning, tracking, and reporting for AI-focused initiatives across research, engineering, and product teams. Manage timelines, deliverables, and dependencies across multiple projects to ensure on-time delivery. Facilitate communication between data scientists, ML engineers, software developers, product managers, and external stakeholders. 
Organize and lead project meetings, including daily stand-ups, sprint planning, retrospectives, and stakeholder reviews. Monitor risks, issues, and changes, ensuring timely mitigation and resolution. Maintain documentation, project plans, and dashboards (e.g., Confluence, Jira, Notion, Trello). Assist in coordinating model training, data pipeline rollouts, infrastructure setup, and AI experiments. Support regulatory or ethical compliance checks relevant to AI deployments (e.g., model bias assessments, audit trails). Collaborate on budget tracking, vendor management, and resource allocation for AI-related tools and cloud usage. What skills and experience are we looking for? Bachelor’s degree in Computer Science, Engineering, Data Science, or a related technical field. 5+ years of experience in project coordination, with at least 1–2 years in AI, data science, or machine learning environments. Familiarity with AI/ML concepts, model lifecycle stages, and data-driven product development. Proficiency with project management tools (e.g., Jira, Asana, MS Project, Notion). Excellent communication, documentation, and interpersonal skills. Strong time management and ability to juggle multiple priorities in a fast-paced environment. Experience with agile or hybrid SDLC in AI/ML contexts. Knowledge of MLOps tools and platforms (e.g., MLflow, Kubeflow, AWS SageMaker, Databricks). Basic understanding of Python, data workflows, or model deployment pipelines. Exposure to responsible AI frameworks or data governance practices.
Posted 1 month ago
0 years
8 - 10 Lacs
Udaipur
On-site
About the job Role Description This is a full-time on-site role for a Tech Lead (AI and Data) located in Bhopal. The Tech Lead will be responsible for managing and overseeing the technical execution of AI and data projects. Daily tasks involve troubleshooting, providing technical support, supervising IT-related activities, and ensuring the team is trained and well-supported. The Tech Lead will also collaborate with Kadel Labs to ensure successful product development and implementation. Tech Skills Here are six key technical skills an AI Tech Lead should possess: Machine Learning & Deep Learning – Strong grasp of algorithms (supervised, unsupervised, reinforcement) – Experience building and tuning neural networks (CNNs, RNNs, transformers) Data Engineering & Pipeline Architecture – Designing ETL/ELT workflows, data lakes, and feature stores – Proficiency with tools like Apache Spark, Kafka, Airflow, or Databricks Model Deployment & MLOps – Containerization (Docker) and orchestration (Kubernetes) for scalable inference – CI/CD for ML (e.g. 
MLflow, TFX, Kubeflow) and automated monitoring of model drift Cloud Platforms & Services – Hands-on with AWS (SageMaker, Lambda), Azure (ML Studio, Functions), or GCP (AI Platform) – Infrastructure-as-Code (Terraform, ARM templates) for reproducible environments Software Engineering Best Practices – Strong coding skills in Python (TensorFlow, PyTorch, scikit-learn) and familiarity with Java/Scala or Go – API design (REST/GraphQL), version control (Git), unit testing, and code reviews Data Security & Privacy in AI – Knowledge of PII handling, differential privacy, and secure data storage/encryption – Understanding of compliance standards (GDPR, HIPAA) and bias mitigation techniques Other Qualifications Troubleshooting and Technical Support skills Experience in Information Technology and Customer Service Ability to provide Training and guidance to team members Strong leadership and project management skills Excellent communication and collaboration abilities Experience in AI and data technologies is a plus Bachelor's or Master's degree in Computer Science, Information Technology, or a related field Job Types: Full-time, Permanent Pay: ₹875,652.61 - ₹1,016,396.45 per year Benefits: Health insurance Schedule: Day shift Monday to Friday Work Location: In person
Posted 1 month ago
3.0 years
0 Lacs
Kochi, Kerala, India
Remote
AWS Data Engineer Location: Remote (India) Experience: 3+ Years Employment Type: Full-Time About the Role: We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus. Key Responsibilities: Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena Process and integrate structured and unstructured data, including sensor/IoT and real-time streams Optimize pipeline performance and ensure reliability and fault tolerance Collaborate with cross-functional teams including data scientists and analysts Perform data transformations using Python, Pandas, and SQL Maintain data integrity, quality, and security across the platform Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation Support and monitor pipeline workflows, troubleshoot issues, and implement fixes Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions Required Skills and Qualifications: Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field 3+ years of experience in data engineering using AWS Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena Python, Pandas, SQL RDS, Postgres, SAP HANA Solid understanding of data modeling, warehousing, and pipeline orchestration Experience with version control (Git) and infrastructure as code (Terraform) Preferred Skills: Experience working with energy sector data or IoT/sensor-based data Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn) Familiarity with big data technologies like Apache Spark, Kafka Experience with data visualization tools (Tableau, Power BI, AWS QuickSight) Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew AWS Certifications (Data Analytics, Solutions Architect)
Posted 1 month ago
10.0 - 15.0 years
40 - 45 Lacs
Bengaluru
Work from Office
AI/ML Architect Experience: 10+ years in total, 8+ years in AI/ML development, 3+ years in AI/ML architecture Education: Bachelor's/Master's in CS, AI/ML, Engineering, or similar Title: AI/ML Architect Location: Onsite Bangalore Experience: 10+ years Position Summary: We are seeking an experienced AI/ML Architect to lead the design and deployment of scalable AI solutions. This role requires a strong blend of technical depth, systems thinking, and leadership in machine learning, computer vision, and real-time analytics. You will drive the architecture for edge, on-prem, and cloud-based AI systems, integrating 3rd-party data sources, sensor and vision data to enable predictive, prescriptive, and autonomous operations across industrial environments. Key Responsibilities: Architecture & Strategy Define the end-to-end architecture for AI/ML systems including time series forecasting, computer vision, and real-time classification. Design scalable ML pipelines (training, validation, deployment, retraining) using MLOps best practices. Architect hybrid deployment models supporting both cloud and edge inference for low-latency processing. Model Integration Guide the integration of ML models into the IIoT platform for real-time insights, alerting, and decision support. Support model fusion strategies combining disparate data sources, sensor streams with visual data (e.g., object detection + telemetry + 3rd-party data ingestion). MLOps & Engineering Define and implement ML lifecycle tooling, including version control, CI/CD, experiment tracking, and drift detection. Ensure compliance, security, and auditability of deployed ML models. Collaboration & Leadership Collaborate with Data Scientists, ML Engineers, DevOps, Platform, and Product teams to align AI efforts with business goals. Mentor engineering and data teams in AI system design, optimization, and deployment strategies.
Stay ahead of AI research and industrial best practices; evaluate and recommend emerging technologies (e.g., LLMs, vision transformers, foundation models). Must-Have Qualifications: Bachelor's or Master's degree in Computer Science, AI/ML, Engineering, or a related technical field. 8+ years of experience in AI/ML development, with 3+ years in architecting AI solutions at scale. Deep understanding of ML frameworks (TensorFlow, PyTorch), time series modeling, and computer vision. Proven experience with object detection, facial recognition, intrusion detection, and anomaly detection in video or sensor environments. Experience in MLOps (MLflow, TFX, Kubeflow, SageMaker, etc.) and model deployment on Kubernetes/Docker. Proficiency in edge AI (Jetson, Coral TPU, OpenVINO) and cloud platforms (AWS, Azure, GCP). Nice-to-Have Skills: Knowledge of stream processing (Kafka, Spark Streaming, Flink). Familiarity with OT systems and IIoT protocols (MQTT, OPC-UA). Understanding of regulatory and safety compliance in AI/vision for industrial settings. Experience with charts, dashboards, and integrating AI with front-end systems (e.g., alerts, maps, command center UIs). Role Impact: As AI/ML Architect, you will shape the intelligence layer of our IIoT platform, enabling smarter, safer, and more efficient industrial operations through AI. You will bridge research and real-world impact, ensuring our AI stack is scalable, explainable, and production-grade from day one.
Posted 1 month ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a dynamic and experienced Senior Systems Engineering Manager with a strong focus on MLOps to lead our engineering teams. The ideal candidate will possess a deep understanding of system engineering principles, a solid grasp of MLOps-related technical stacks, and excellent leadership skills. You will play a key role in driving strategic initiatives, building and organizing engineering units, and collaborating closely with stakeholders to deliver cutting-edge solutions. Responsibilities Lead and mentor engineering teams focused on designing, developing, and maintaining scalable MLOps infrastructure Build and organize organizational structures, including units and sub-units, to maximize team efficiency Collaborate with account teams and customers, delivering impactful presentations and ensuring stakeholder alignment Drive the recruitment, selection, and development of top engineering talent, fostering a culture of growth and innovation Oversee the implementation of CI/CD pipelines, Infrastructure as Code (IaC), and containerization solutions to enhance development workflows Ensure seamless integration and deployment processes using public cloud platforms, enabling scalable and reliable solutions Inspire, motivate, and guide teams to achieve their best potential Requirements Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field 15+ years of proven experience in systems engineering, with a focus on MLOps or related fields Demonstrated leadership experience, including team building, talent management, and strategic planning Strong understanding of Infrastructure as Code (IaC) principles Expertise in CI/CD pipelines and tools Proficiency in Containerization technologies (e.g., Docker, Kubernetes) Hands-on experience with at least one public cloud platform (e.g., AWS, GCP, Azure) Basic knowledge of Machine Learning (ML) and Data Science (DS) concepts Ability to work effectively with diverse teams, coordinate tasks, and nurture 
talent Experience in building and scaling organizational units and sub-units Excellent communication and presentation skills, with the ability to engage customers and internal stakeholders Nice to have Experience with Databricks, SageMaker, or MLflow
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Purpose: The job holder will be responsible for coding, designing, deploying, and debugging development projects. Take part in analysis, requirement gathering, and design. Own and deliver the automation of data engineering pipelines. Roles and Responsibilities: Solid understanding of backend performance optimization and debugging. Formal training or certification in software engineering concepts and proficient applied experience Strong hands-on experience with Python Experience in developing microservices using Python with FastAPI. Commercial experience in both backend and frontend engineering Hands-on experience with AWS cloud-based application development, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS Aurora MySQL & Postgres, DynamoDB, EMR, and Kinesis. Strong engineering background in machine learning, deep learning, and neural networks. Experience with a containerized stack using Kubernetes or ECS for development, deployment, and configuration. Experience with Single Sign-On/OIDC integration and a deep understanding of OAuth, JWT/JWE/JWS. Knowledge of AWS SageMaker and data analytics tools. Proficiency in frameworks such as TensorFlow, PyTorch, or similar. Familiarity with LangChain, LangGraph, or any agentic frameworks is a strong plus. Python engineering experience Education Qualification: Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA) Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA) Experience: 3-8 Years.
Posted 1 month ago
0 years
0 Lacs
Greater Bengaluru Area
On-site
We are seeking a dynamic person to join our AI and Data Science team. This position will work on delivering innovative AI and data-driven solutions. The candidate must have strong ML fundamentals and hands-on experience with GenAI and RAG. We are also looking for good engineering skills (Python, Docker, etc.), and exposure to cloud technologies is a plus. Required skills Generative AI: Experience with RAG, particularly retrieval and reranking Working experience with different indexing algorithms (Flat / HNSW) Experience working with different LLM-based embedding models (ada / bge, etc.) LLM parameter tuning experience Experience with different prompt engineering techniques Python: Experience with OOP in Python Experience with type hinting Experience with API frameworks like Flask / FastAPI is a must Experience with Docker is important Artificial Intelligence: Experience with different use-cases (multi-class / multi-label classification) in NLP is important Experience with the Transformer architecture is important Working understanding of attention and implementation of transformers Working understanding of embeddings (Word2Vec / encoder-based embeddings) is a must Experience with different cost functions / optimization algorithms in deep learning Cloud Providers: AWS: SageMaker / ECS / S3 / Lambda Machine Learning Generics: Candidate should have used or worked on: Transformers RNN (LSTM / Bi-LSTM) Candidate should have knowledge of machine learning basics like linear and logistic regression / random forests Understanding of ML / NLP metrics (Precision / Recall / F1 score) Hyperparameter tuning, model training / selection
Posted 1 month ago
0 years
0 Lacs
Greater Hyderabad Area
On-site
We are seeking a dynamic person to join our AI and Data Science team. This position will work on delivering innovative AI and data-driven solutions. The candidate must have strong ML fundamentals and hands-on experience with GenAI and RAG. We are also looking for good engineering skills (Python, Docker, etc.), and exposure to cloud technologies is a plus. Required skills Generative AI: Experience with RAG, particularly retrieval and reranking Working experience with different indexing algorithms (Flat / HNSW) Experience working with different LLM-based embedding models (ada / bge, etc.) LLM parameter tuning experience Experience with different prompt engineering techniques Python: Experience with OOP in Python Experience with type hinting Experience with API frameworks like Flask / FastAPI is a must Experience with Docker is important Artificial Intelligence: Experience with different use-cases (multi-class / multi-label classification) in NLP is important Experience with the Transformer architecture is important Working understanding of attention and implementation of transformers Working understanding of embeddings (Word2Vec / encoder-based embeddings) is a must Experience with different cost functions / optimization algorithms in deep learning Cloud Providers: AWS: SageMaker / ECS / S3 / Lambda Machine Learning Generics: Candidate should have used or worked on: Transformers RNN (LSTM / Bi-LSTM) Candidate should have knowledge of machine learning basics like linear and logistic regression / random forests Understanding of ML / NLP metrics (Precision / Recall / F1 score) Hyperparameter tuning, model training / selection
Posted 1 month ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description And Requirements CareerArc Code CA-PS Hybrid "At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead - and are relentless in the pursuit of innovation! The DSOM product line includes BMC’s industry-leading Digital Services and Operation Management products. We have many interesting SaaS products, in the fields of: Predictive IT service management, Automatic discovery of inventories, intelligent operations management, and more! We continuously grow by adding and implementing the most cutting-edge technologies and investing in Innovation! Our team is a global and versatile group of professionals, and we LOVE to hear our employees’ innovative ideas. So, if Innovation is close to your heart – this is the place for you! BMC is looking for an experienced Data Science Engineer with hands-on experience with Classical ML, Deep Learning Networks and Large Language Models, knowledge to join us and design, develop, and implement microservice based edge applications, using the latest technologies. In this role, you will be responsible for End-to-end design and execution of BMC Data Science tasks, while acting as a focal point and expert for our data science activities. You will research and interpret business needs, develop predictive models, and deploy completed solutions. You will provide expertise and recommendations for plans, programs, advance analysis, strategies, and policies. 
Here is how, through this exciting role, YOU will contribute to BMC's and your own success: Ideate, design, implement and maintain enterprise business software platform for edge and cloud, with a focus on Machine Learning and Generative AI Capabilities, using mainly Python Work with a globally distributed development team to perform requirements analysis, write design documents, design, develop and test software development projects. Understand real world deployment and usage scenarios from customers and product managers and translate them to AI/ML features that drive value of the product. Work closely with product managers and architects to understand requirements, present options, and design solutions. Work closely with customers and partners to analyze time-series data and suggest the right approaches to drive adoption. Analyze and clearly communicate both verbally and in written form the status of projects or issues along with risks and options to the stakeholders. To ensure you’re set up for success, you will bring the following skillset & experience: You have 8+ years of hands-on experience in data science or machine learning roles. You have experience working with sensor data, time-series analysis, predictive maintenance, anomaly detection, or similar IoT-specific domains. You have strong understanding of the entire ML lifecycle: data collection, preprocessing, model training, deployment, monitoring, and continuous improvement. You have proven experience designing and deploying AI/ML models in real-world IoT or edge computing environments. You have strong knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost). Whilst these are nice to have, our team can help you develop in the following skills: Experience with digital twins, real-time analytics, or streaming data systems. Contribution to open-source ML/AI/IoT projects or relevant publications. 
Experience with Agile development methodology and best practices in unit testing Experience with Kubernetes (kubectl, helm) will be an advantage. Experience with cloud platforms (AWS, Azure, GCP) and tools for ML deployment (SageMaker, Vertex AI, MLflow, etc.). BMC Software maintains a strict policy of not requesting any form of payment in exchange for employment opportunities, upholding a fair and ethical hiring process. At BMC we believe in pay transparency and have set the midpoint of the salary band for this role at 8,047,800 INR. Actual salaries depend on a wide range of factors that are considered in making compensation decisions, including but not limited to skill sets; experience and training, licensure, and certifications; and other business and organizational needs. The salary listed is just one component of BMC's employee compensation package. Other rewards may include a variable plan and country specific benefits. We are committed to ensuring that our employees are paid fairly and equitably, and that we are transparent about our compensation practices. ( Returnship@BMC ) Had a break in your career? No worries. This role is eligible for candidates who have taken a break in their career and want to re-enter the workforce. If your expertise matches the above job, visit https://bmcrecruit.avature.net/returnship to know more and how to apply. Min salary 6,035,850 Our commitment to you! BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU! If after reading the above, you’re unsure if you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talents from diverse backgrounds and experience to ensure we face the world together with the best ideas!
BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page. Mid point salary 8,047,800 Max salary 10,059,750
Posted 1 month ago
8.0 - 13.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
AWS Architect combined with GenAI development experience, with hands-on experience in AWS services such as Glue, DynamoDB, and SageMaker. AWS infrastructure: S3, Lambda, Glue, Bedrock, Boto3. Glue infrastructure + PySpark. Snowflake (standard SQL-based). DynamoDB. JSON and APIs.
Posted 1 month ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Elatre is a growth-focused digital company powering global brands with end-to-end marketing, web, and technology solutions. We’re currently scaling a powerful AI-first healthcare platform and are looking for a skilled DevOps Engineer to help us build a secure, scalable, and robust backend infrastructure using AWS cloud. What You’ll Do Design and manage AWS infrastructure for scalable web and mobile applications Set up and maintain CI/CD pipelines and automation (GitHub Actions or AWS CodePipeline) Deploy and manage services using AWS Elastic Beanstalk, ECS (Fargate), or EC2 Set up and optimize Amazon RDS (PostgreSQL) with Multi-AZ, backups, and monitoring Manage S3, CloudFront, IAM, and security policies across environments Monitor performance, health, and logs using CloudWatch and X-Ray Handle deployment automation, cost optimization, and resource scaling Collaborate with backend and AI teams to manage real-time API endpoints, Lambda functions, and storage pipelines Set up caching layers with ElastiCache (Redis) for high-performance response Implement infrastructure-as-code using Terraform or CloudFormation Must-Have Skills 2+ years of hands-on experience with AWS Strong knowledge of EC2, RDS (PostgreSQL), S3, CloudFront, IAM, CloudWatch, Lambda Experience with Docker and container orchestration (ECS or Kubernetes) CI/CD pipeline setup and version control workflows (Git, GitHub Actions, AWS CodeBuild) Familiarity with Terraform or AWS CloudFormation Good understanding of system security and DevOps best practices Comfortable managing scalable infrastructure (targeting 20K+ active users) Problem-solving mindset with proactive monitoring and incident response skills Bonus (Not Mandatory) Experience with AppSync (GraphQL), WebSockets, or API Gateway Exposure to AI/ML pipelines in AWS (SageMaker, Polly, etc.) 
Experience in high-availability mobile backend systems Familiarity with HIPAA or GDPR-compliant infrastructure What We Offer Competitive pay based on experience Fully remote, flexible working hours Opportunity to work on global products with cutting-edge technology Transparent and collaborative work culture Direct impact on infrastructure design decisions
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a dynamic person to join our AI and Data Science team. This position will work on delivering innovative AI and data-driven solutions. The candidate must have strong ML fundamentals and hands-on experience with GenAI and RAG. We are also looking for good engineering skills (Python, Docker, etc.), and exposure to cloud technologies is a plus. Required skills Generative AI: Experience with RAG, particularly retrieval and reranking Working experience with different indexing algorithms (Flat / HNSW) Experience working with different LLM-based embedding models (ada / bge, etc.) LLM parameter tuning experience Experience with different prompt engineering techniques Python: Experience with OOP in Python Experience with type hinting Experience with API frameworks like Flask / FastAPI is a must Experience with Docker is important Artificial Intelligence: Experience with different use-cases (multi-class / multi-label classification) in NLP is important Experience with the Transformer architecture is important Working understanding of attention and implementation of transformers Working understanding of embeddings (Word2Vec / encoder-based embeddings) is a must Experience with different cost functions / optimization algorithms in deep learning Cloud Providers: AWS: SageMaker / ECS / S3 / Lambda Machine Learning Generics: Candidate should have used or worked on: Transformers RNN (LSTM / Bi-LSTM) Candidate should have knowledge of machine learning basics like linear and logistic regression / random forests Understanding of ML / NLP metrics (Precision / Recall / F1 score) Hyperparameter tuning, model training / selection
Posted 1 month ago