5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Company: Our client is a trusted global innovator of IT and business services. They help clients transform through consulting, industry solutions, business process services, digital & IT modernization, and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients' long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe.
· Job Title: Python Developer
· Location: Bangalore
· Experience: 5+ yrs
· Job Type: Contract to hire
· Notice Period: Immediate joiners
Mandatory Skills:
POSITION OVERVIEW: Senior Software Developer - Python
POSITION GENERAL DUTIES AND TASKS:
1. Write high-quality, testable, and maintainable Python code using object-oriented programming (OOP), SOLID principles, and design patterns.
2. Develop RESTful APIs and backend services for AI/ML model serving using FastAPI.
3. Collaborate with AI/ML engineers to integrate and deploy Machine Learning, Deep Learning, and Generative AI models into production environments.
4. Contribute to software architecture and design discussions to ensure scalable and efficient solutions.
5. Implement CI/CD pipelines and adhere to DevOps best practices for reliable and repeatable deployments.
6. Design for observability, incorporating structured logging, performance monitoring, and alerting mechanisms.
7. Optimize code and system performance, ensuring reliability and robustness at scale.
8. Participate in code reviews, promote clean code practices, and mentor junior developers when needed.
Required Qualifications:
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
- 5+ years of hands-on experience in software development, with a focus on Python.
- Deep understanding of OOP concepts, software architecture, and design patterns.
- Experience with backend web frameworks, preferably FastAPI.
- Familiarity with integrating ML/DL models into software solutions.
- Practical experience with CI/CD, containerization (Docker), and version control systems (Git).
- Exposure to MLOps practices and tools for model deployment and monitoring.
- Strong collaboration and communication skills in cross-functional engineering teams.
- Familiarity with cloud platforms like AWS (e.g., SageMaker, Bedrock) or Azure (e.g., ML Studio, OpenAI Service).
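To illustrate the kind of FastAPI-based model-serving endpoint this posting describes, here is a minimal, hedged sketch. It assumes a scikit-learn classifier serialized with joblib; the model path, feature shape, and response fields are placeholders, not details from the posting.

```python
# Minimal FastAPI sketch for serving a pre-trained ML model.
# Assumes a scikit-learn classifier saved with joblib at a hypothetical path.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Model Serving API")
model = joblib.load("models/classifier.joblib")  # hypothetical artifact path

class PredictRequest(BaseModel):
    features: list[float]  # flat feature vector expected by the model

class PredictResponse(BaseModel):
    prediction: int
    probability: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    x = np.asarray(req.features, dtype=float).reshape(1, -1)
    proba = float(model.predict_proba(x)[0].max())  # assumes a classifier with predict_proba
    label = int(model.predict(x)[0])
    return PredictResponse(prediction=label, probability=proba)
```

Run locally with `uvicorn main:app --reload`; in the production setup the posting alludes to, a service like this would typically sit behind a container image, CI/CD pipeline, and monitoring.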
Posted 6 days ago
12.0 - 16.0 years
0 Lacs
delhi
On-site
We are seeking a talented Systems Architect (AVP level) with specialized knowledge in designing and expanding Generative AI solutions for production environments. In this pivotal position, you will collaborate across various teams including data scientists, ML engineers, and product leaders to mold enterprise-level GenAI platforms. Your responsibilities will include designing and scaling LLM-based systems such as chatbots, copilots, RAG, and multi-modal AI, architecting data pipelines, training/inference workflows, and integrating MLOps. You will be tasked with ensuring that systems are modular, secure, scalable, and cost-effective. Additionally, you will work on model orchestration, agentic AI, vector DBs, and CI/CD for AI. The ideal candidate should possess 12-15 years of experience in cloud-native and distributed systems, with 2-3 years focusing on GenAI/LLMs utilizing tools like LangChain, HuggingFace, and Kubeflow. Proficiency in cloud platforms such as AWS, GCP, or Azure (SageMaker, Vertex AI, Azure ML) is essential. Experience with RAG, semantic search, agent orchestration, and MLOps is highly valued. Strong architectural acumen, effective stakeholder communication skills, and preferred certifications in cloud technologies, AI open-source contributions, and knowledge of security and governance are all advantageous.
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
As a Software Engineer specializing in AI/ML/LLM/Data Science at Entra Solutions, a FinTech company within the mortgage Industry, you will play a crucial role in designing, developing, and deploying AI-driven solutions using cutting-edge technologies such as Machine Learning, NLP, and Large Language Models (LLMs). Your primary focus will be on building and optimizing retrieval-augmented generation (RAG) systems, LLM fine-tuning, and vector search technologies using Python. You will be responsible for developing scalable AI pipelines that ensure high performance and seamless integration with both cloud and on-premises environments. Additionally, this role will involve implementing MLOps best practices, optimizing AI model performance, and deploying intelligent applications. In this role, you will: - Develop, fine-tune, and deploy AI/ML models and LLM-based applications for real-world use cases. - Build and optimize retrieval-augmented generation (RAG) systems using Vector Databases such as ChromaDB, Pinecone, and FAISS. - Work on LLM fine-tuning, embeddings, and prompt engineering to enhance model performance. - Create end-to-end AI solutions with APIs using frameworks like FastAPI, Flask, or similar technologies. - Establish and maintain scalable data pipelines for training and inferencing AI models. - Deploy and manage models using MLOps best practices on cloud platforms like AWS or Azure. - Optimize AI model performance for low-latency inference and scalability. - Collaborate with cross-functional teams including Product, Engineering, and Data Science to integrate AI capabilities into applications. Qualifications: Must Have: - Proficiency in Python - Strong hands-on experience in AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face, LangChain, and OpenAI APIs. Good to Have: - Experience with LLM fine-tuning, embeddings, and transformers. - Knowledge of NLP, vector search technologies (ChromaDB, Pinecone, FAISS, Milvus). - Experience in building scalable AI models and data pipelines with Spark, Kafka, or Dask. - Familiarity with MLOps tools like Docker, Kubernetes, and CI/CD for AI models. - Hands-on experience in cloud-based AI deployment using platforms like AWS Lambda, SageMaker, GCP Vertex AI, or Azure ML. - Knowledge of prompt engineering, GPT models, or knowledge graphs. What's in it for you - Competitive Salary & Full Benefits Package - PTOs / Medical Insurance - Exposure to cutting-edge AI/LLM projects in an innovative environment - Career Growth Opportunities in AI/ML leadership - Collaborative & AI-driven work culture Entra Solutions is an equal employment opportunity employer, and we welcome applicants from diverse backgrounds. Join us and be a part of our dynamic team driving innovation in the FinTech industry.
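As a rough illustration of the retrieval-augmented generation (RAG) pattern this posting centers on, the hedged sketch below indexes a few documents with FAISS and a sentence-transformers embedder, retrieves the closest chunks for a query, and assembles a grounded prompt. The embedding model name, example documents, and the `call_llm` hook are stand-ins, not components named in the posting.

```python
# Minimal RAG retrieval sketch: embed documents, index with FAISS,
# retrieve top-k chunks, and build a grounded prompt for an LLM.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Loan applications require income verification documents.",
    "Escrow accounts cover property taxes and insurance.",
    "Refinancing replaces an existing mortgage with a new loan.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
doc_vecs = embedder.encode(docs, convert_to_numpy=True).astype("float32")

index = faiss.IndexFlatL2(doc_vecs.shape[1])  # exact L2 index, fine for small corpora
index.add(doc_vecs)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], convert_to_numpy=True).astype("float32")
    _, idx = index.search(q, k)
    return [docs[i] for i in idx[0]]

question = "What does an escrow account pay for?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# call_llm(prompt)  # hypothetical LLM call (OpenAI, Bedrock, etc.)
print(prompt)
```

In a production system of the kind described, the in-memory list and FAISS index would typically be replaced by a managed vector database such as ChromaDB or Pinecone.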
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a part of ZS, you will have the opportunity to work in a place driven by passion that aims to change lives. ZS is a management consulting and technology firm that is dedicated to enhancing life and its quality. The core strength of ZS lies in its people, who work collectively to develop transformative solutions for patients, caregivers, and consumers worldwide. By adopting a client-first approach, ZS employees bring impactful results to every engagement by partnering closely with clients to design custom solutions and technological products that drive value and yield positive outcomes in key areas of their business. Your role at ZS will require you to bring inquisitiveness for learning, innovative ideas, courage, and dedication to make a life-changing impact. At ZS, the individuals are highly valued, recognizing both the visible and invisible facets of their identities, personal experiences, and belief systems. These elements shape the uniqueness of each individual and contribute to the diverse tapestry within ZS. ZS acknowledges and celebrates personal interests, identities, and the thirst for knowledge as integral components of success within the organization. Learn more about the diversity, equity, and inclusion initiatives at ZS, along with the networks that support ZS employees in fostering community spaces, accessing necessary resources for growth, and amplifying the messages they are passionate about. As an Architecture & Engineering Specialist specializing in ML Engineering at ZS's India Capability & Expertise Center (CEC), you will be part of a team that constitutes over 60% of ZS employees across three offices in New Delhi, Pune, and Bengaluru. The CEC plays a pivotal role in collaborating with colleagues from North America, Europe, and East Asia to deliver practical solutions to clients that drive the company's operations. Upholding standards of analytical, operational, and technological excellence, the CEC leverages collective knowledge to enable ZS teams to achieve superior outcomes for clients. Joining ZS's Scaled AI practice within the Architecture & Engineering Expertise Center will immerse you in a dynamic ecosystem focused on generating continuous business value for clients through innovative machine learning, deep learning, and engineering capabilities. In this role, you will collaborate with data scientists to craft cutting-edge AI models, develop and utilize advanced ML platforms, establish and implement sophisticated ML pipelines, and oversee the entire ML lifecycle. 
**Responsibilities:** - Design and implement technical features using best practices for the relevant technology stack - Collaborate with client-facing teams to grasp the solution context, contribute to technical requirement gathering and analysis - Work alongside technical architects to validate design and implementation strategies - Write production-ready code that is easily testable, comprehensible to other developers, and addresses edge cases and errors - Ensure top-notch quality deliverables by adhering to architecture/design guidelines, coding best practices, and engaging in periodic design/code reviews - Develop unit tests and higher-level tests to handle expected edge cases, errors, and optimal scenarios - Utilize bug tracking, code review, version control, and other tools for organizing and delivering work - Participate in scrum calls, agile ceremonies, and effectively communicate progress, issues, and dependencies - Contribute consistently by researching and evaluating the latest technologies, conducting proofs-of-concept, and creating prototype solutions - Aid the project architect in designing modules/components of the overall project/product architecture - Break down large features into estimable tasks, lead estimation, and defend them with clients - Independently implement complex features with minimal guidance, such as service or application-wide changes - Systematically troubleshoot code issues/bugs using stack traces, logs, monitoring tools, and other resources - Conduct code/script reviews of senior engineers within the team - Mentor and cultivate technical talent within the team **Requirements:** - Minimum 5+ years of hands-on experience in deploying and productionizing ML models at scale - Proficiency in scaling GenAI or similar applications to accommodate high user traffic, large datasets, and reduce response time - Strong expertise in developing RAG-based pipelines using frameworks like LangChain & LlamaIndex - Experience in crafting GenAI applications such as answering engines, extraction components, and content authoring - Expertise in designing, configuring, and utilizing ML Engineering platforms like Sagemaker, MLFlow, Kubeflow, or other relevant platforms - Familiarity with Big data technologies including Hive, Spark, Hadoop, and queuing systems like Apache Kafka/Rabbit MQ/AWS Kinesis - Ability to quickly adapt to new technologies, innovate in solution creation, and independently conduct POCs on emerging technologies - Proficiency in at least one Programming language such as PySpark, Python, Java, Scala, etc., and solid foundations in Data Structures - Hands-on experience in building metadata-driven, reusable design patterns for data pipeline, orchestration, and ingestion patterns (batch, real-time) - Experience in designing and implementing solutions on distributed computing and cloud services platforms (e.g., AWS, Azure, GCP) - Hands-on experience in constructing CI/CD pipelines and awareness of application monitoring practices **Additional Skills:** - AWS/Azure Solutions Architect certification with a comprehensive understanding of the broader AWS/Azure stack - Knowledge of DevOps CI/CD, data security, and experience in designing on cloud platforms - Willingness to travel to global offices as required to collaborate with clients or internal project teams **Perks & Benefits:** ZS provides a holistic total rewards package encompassing health and well-being, financial planning, annual leave, personal growth, and professional development. 
The organization offers robust skills development programs, various career progression options, internal mobility paths, and a collaborative culture that empowers individuals to thrive both independently and as part of a global team. ZS is committed to fostering a flexible and connected work environment that enables employees to combine work from home and on-site presence at clients/ZS offices for the majority of the week. This approach allows for the seamless integration of the ZS culture and innovative practices through planned and spontaneous face-to-face interactions. **Travel:** Travel is an essential aspect of working at ZS, especially for client-facing roles. Business needs dictate the priority for travel, and while some projects may be local, all client-facing employees should be prepared to travel as required. Travel opportunities provide avenues to strengthen client relationships, gain diverse experiences, and enhance professional growth through exposure to different environments and cultures. **Application Process:** Candidates must either possess or be able to obtain work authorization for their intended country of employment. To be considered, applicants must submit an online application, including a complete set of transcripts (official or unofficial). *Note: NO AGENCY CALLS, PLEASE.* For more information, visit [ZS Website](www.zs.com).
Posted 6 days ago
7.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Work Location: Hyderabad What Gramener offers you Gramener will offer you an inviting workplace, talented colleagues from diverse backgrounds, career path, steady growth prospects with great scope to innovate. Our goal is to create an ecosystem of easily configurable data applications focused on storytelling for public and private use. Cloud Lead – Analytics & Data Products We’re looking for a Cloud Architect/Lead to design, build, and manage scalable AWS infrastructure that powers our analytics and data product initiatives. This role focuses on automating infrastructure provisioning, application/API hosting, and enabling data and GenAI workloads through a modern, secure cloud environment. Roles and Responsibilities Design and provision AWS infrastructure using Terraform or AWS CloudFormation to support evolving data product needs. Develop and manage CI/CD pipelines using Jenkins, AWS CodePipeline, CodeBuild, or GitHub Actions. Deploy and host internal tools, APIs, and applications using ECS, EKS, Lambda, API Gateway, and ELB. Provision and support analytics and data platforms using S3, Glue, Redshift, Athena, Lake Formation, and orchestration tools like Step Functions or Apache Airflow (MWAA). Implement cloud security, networking, and compliance using IAM, VPC, KMS, CloudWatch, CloudTrail, and AWS Config. Collaborate with data engineers, ML engineers, and analytics teams to align infrastructure with application and data product requirements. Support GenAI infrastructure, including Amazon Bedrock, SageMaker, or integrations with APIs like OpenAI. Skills And Qualifications 7-10 years of experience in cloud engineering, DevOps, or cloud architecture roles. Hands-on expertise with the AWS ecosystem and tools listed above. Proficiency in scripting (e.g., Python, Bash) and infrastructure automation. Experience deploying containerized workloads using Docker, ECS, EKS, or Fargate. Familiarity with data engineering and GenAI workflows is a plus. AWS certifications (e.g., Solutions Architect, DevOps Engineer) are preferred. About Us We help consult and deliver solutions to organizations where data is at the core of decision making. We undertake strategic data consulting for organizations in laying out the roadmap for data driven decision making, in order to equip organizations to convert data into a strategic differentiator. Through a host of our product and service offerings we analyse and visualize large amounts of data. To know more about us visit Gramener Website and Gramener Blog.
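As a small illustration of the analytics-platform support this role describes (S3, Glue, Athena), here is a hedged boto3 sketch that runs an Athena query and polls for completion. The region, database, SQL, and S3 output location are hypothetical placeholders.

```python
# Hedged sketch: run an Athena query with boto3 and wait for it to finish.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # assumed region

def run_query(sql: str, database: str, output_s3: str) -> str:
    start = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )
    qid = start["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=qid)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)  # simple polling; production code would add timeouts/backoff

# Hypothetical usage:
# state = run_query(
#     "SELECT event_date, COUNT(*) FROM events GROUP BY event_date",
#     database="analytics_db",
#     output_s3="s3://my-athena-results/queries/",
# )
```

In the Terraform/CloudFormation setup the posting mentions, the database, workgroup, and results bucket would be provisioned as code rather than created by hand.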
Posted 6 days ago
12.0 - 16.0 years
0 Lacs
delhi
On-site
We are looking for a Systems Architect (AVP level) with extensive experience in designing and scaling Generative AI solutions for production. As a Systems Architect, you will play a crucial role in collaborating with data scientists, ML engineers, and product leaders to shape enterprise-grade GenAI platforms. Your responsibilities will include designing and scaling LLM-based systems such as chatbots, copilots, RAG, and multi-modal AI. You will also be responsible for architecting data pipelines, training/inference workflows, and MLOps integration. It is essential to ensure that the systems you design are modular, secure, scalable, and cost-effective. Additionally, you will work on model orchestration, agentic AI, vector DBs, and CI/CD for AI. The ideal candidate should have 12-15 years of experience in cloud-native and distributed systems, with at least 2-3 years of experience in GenAI/LLMs using tools like LangChain, HuggingFace, and Kubeflow. Proficiency in cloud platforms such as AWS, GCP, or Azure (SageMaker, Vertex AI, Azure ML) is required. Experience with technologies like RAG, semantic search, agent orchestration, and MLOps will be beneficial for this role. Strong architectural thinking and effective communication with stakeholders are essential skills. Preferred qualifications include cloud certifications, AI open-source contributions, and knowledge of security and governance principles. If you are passionate about designing cutting-edge Generative AI solutions and possess the necessary skills and experience, we encourage you to apply for this leadership role.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: AI/ML Engineer (Python, AWS, REST APIs)
Department: Web
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: 0-15 days (immediate joiners preferred)
Work Arrangement: On-site (Work from Office)
Overview: Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.
Key Responsibilities:
AI/ML Development
- Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees.
- Build synthetic datasets using DCGANs for balancing.
- Fine-tune pre-trained models for customized encryption logic.
- Implement explainable classification logic for model outputs.
- Validate model performance using custom metrics and datasets.
API Development
- Design and develop Python RESTful APIs using FastAPI or Flask for: image upload and classification, model inference endpoints, and encryption trigger calls.
- Integrate APIs with AWS Lambda and Amazon API Gateway.
AWS Integration
- Deploy and manage AI models on Amazon SageMaker for training and real-time inference.
- Use AWS Lambda for serverless backend compute.
- Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL).
- Use AWS Cognito for secure user authentication and KMS for key management.
- Monitor job status via CloudWatch and enable secure, scalable API access.
Required Skills & Experience:
Must-Have
- 3-5 years of experience in AI/ML (especially vision-based systems).
- Strong experience with PyTorch or TensorFlow for model development.
- Proficient in Python with experience building RESTful APIs.
- Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3.
- Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts.
- Understanding of model deployment, serialization, and performance tuning.
Nice-to-Have
- Experience with CLIP model fine-tuning.
- Familiarity with Docker, GitHub Actions, or CI/CD pipelines.
- Experience in data classification under compliance regimes (e.g., GDPR, HIPAA).
- Familiarity with multi-tenant SaaS design patterns.
Tools & Technologies: Python, PyTorch, TensorFlow, FastAPI, Flask, AWS (SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS), Git, Docker, Postgres, OpenCV, OpenSSL
If interested, please share your resume to hr@advantal.ne.
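As a hedged sketch of the image-upload and inference flow this posting outlines (FastAPI front door, SageMaker endpoint behind it), the example below accepts an uploaded image and forwards it to a SageMaker real-time endpoint via boto3. The endpoint name, content type, and response format are assumptions, not details from the posting.

```python
# Hedged sketch: FastAPI endpoint that forwards an uploaded image
# to a deployed SageMaker real-time inference endpoint.
import json
import boto3
from fastapi import FastAPI, File, UploadFile

app = FastAPI(title="Image Classification API")
smr = boto3.client("sagemaker-runtime", region_name="ap-south-1")  # assumed region

ENDPOINT_NAME = "image-sensitivity-classifier"  # hypothetical endpoint name

@app.post("/classify")
async def classify(file: UploadFile = File(...)) -> dict:
    image_bytes = await file.read()
    response = smr.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/x-image",  # depends on how the model container parses input
        Body=image_bytes,
    )
    result = json.loads(response["Body"].read())  # assumes the endpoint returns JSON
    # e.g. {"label": "sensitive", "score": 0.93} -- shape depends on the deployed model
    return {"filename": file.filename, "prediction": result}
```

The encryption trigger and Cognito-based authentication mentioned in the posting would be layered on top of an endpoint like this rather than inside it.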
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a skilled and proactive Cloud Security Engineer to join our dynamic team at Grid Dynamics. This role is focused on ensuring the security and compliance of our public cloud infrastructure across AWS and GCP environments. You will be instrumental in designing, implementing, and monitoring cloud security solutions, working closely with IT, engineering, and external SOC partners. This position is open in Hyderabad, Bangalore, and Chennai. This job is centred around the following practical tasks: Public cloud security architecture and compliance Selecting and deploying key native public cloud security tools and enabling the required security features in AWS and GCP Cloud security governance and compliance, including applying relevant security policies and ensuring that our public cloud infrastructure meets industry standard security baselines (e.g. CIS) Working with IT and other Grid Dynamics teams on creating, deploying, and updating cloud security configuration templates/standard builds/etc. Assisting with cloud key management in order to prevent hardcoding (AWS KMS and GCP’s Key Management, HashiCorp Vault etc.) Enabling and configuring cloud web application firewalls such as AWS WAF and Google Cloud Armor Public cloud security monitoring and incident response Assisting with Elastic SIEM roll-out and implementation in both AWS and GCP, enabling and configuring native cloud security monitoring tools (CloudWatch, Google Cloud Logging & Monitoring) to work with Elastic SIEM Threat detection and response in the cloud (AWS GuardDuty, AWS Detective, Google Security Command Center, Chronicle) Cloud data classification and protection (Amazon Macie, Google Data Loss Prevention (DLP)) Collaborating with IT and an external SOC provider on incident-related matters Producing cloud alert and incident metrics for high-level management reports Public cloud security auditing and vulnerability management Conducting regular security assessments and participating in internal audits employing native cloud vulnerability scanning tools (AWS Inspector and Google Security Command Center), as well as compliance checkers (AWS Config, AWS Audit Manager, GCP Policy Intelligence) Assisting the affected systems owners in mitigating the uncovered vulnerabilities and security misconfigurations Assisting developers with utilising SDLC-centric cloud security tools such as AWS CodeGuru, SageMaker Clarify, CodeWhisperer. Producing vulnerability metrics for high-level management reports General requirements Where necessary, readiness to respond out of business hours taking into account Grid Dynamics geography Being able to take initiative in solving security problems Self-discipline and consistency in taking care of routine tasks Being collaborative with other security team members, as well as IT and various development/engineering teams, or any users of the affected systems Education & Qualifications Bachelor’s or Master’s degree in Computer Science, Information Security, Engineering, or a related field. Relevant cloud security certifications are highly desirable, such as: AWS Certified Security – Specialty Google Professional Cloud Security Engineer Certified Information Systems Security Professional (CISSP) Certified Cloud Security Professional (CCSP)
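To illustrate the kind of cloud-alert metrics reporting mentioned above, here is a hedged boto3 sketch that pulls recent GuardDuty findings and tallies them by severity. The region, severity buckets, and the single-batch retrieval are illustrative assumptions; a real report would paginate and cover every enabled region.

```python
# Hedged sketch: summarize GuardDuty findings by severity for a management report.
from collections import Counter
import boto3

gd = boto3.client("guardduty", region_name="ap-south-1")  # assumed region

def severity_bucket(score: float) -> str:
    # GuardDuty severity is numeric; these thresholds are illustrative.
    if score >= 7.0:
        return "HIGH"
    if score >= 4.0:
        return "MEDIUM"
    return "LOW"

summary: Counter = Counter()
for detector_id in gd.list_detectors()["DetectorIds"]:
    finding_ids = gd.list_findings(DetectorId=detector_id)["FindingIds"]
    if not finding_ids:
        continue
    # get_findings accepts a limited batch of IDs; pagination omitted for brevity.
    findings = gd.get_findings(DetectorId=detector_id, FindingIds=finding_ids[:50])["Findings"]
    summary.update(severity_bucket(f["Severity"]) for f in findings)

print(dict(summary))  # e.g. {'LOW': 12, 'MEDIUM': 3, 'HIGH': 1}
```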
Posted 1 week ago
10.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference nitin.patil@ust.com Act fast for immediate attention! ⏳📩 Roles and Responsibilities: Architecture & Infrastructure Design Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, CloudWatch. Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines. Optimize cost and performance of cloud resources used for AI workloads. AI Project Leadership Translate business objectives into actionable AI strategies and solutions. Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring. Drive roadmap planning, delivery timelines, and project success metrics. Model Development & Deployment Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases. Implement frameworks for bias detection, explainability, and responsible AI. Enhance model performance through tuning and efficient resource utilization. Security & Compliance Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks. Perform regular audits and vulnerability assessments to ensure system integrity. Team Leadership & Collaboration Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts. Promote cross-functional collaboration with business and technical stakeholders. Conduct technical reviews and ensure delivery of production-grade solutions. Monitoring & Maintenance Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability. Ensure ongoing optimization of infrastructure and ML pipelines. Must-Have Skills: 10+ years of experience in IT with 4+ years in AI/ML leadership roles. Strong hands-on experience in AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch. Expertise in Python for ML development and automation. Solid understanding of Terraform, Docker, Git, and CI/CD pipelines. Proven track record in delivering AI/ML projects into production environments. Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines. Experience in implementing Responsible AI practices – including fairness, explainability, and bias mitigation. Knowledge of cloud security best practices and IAM role configuration. Excellent leadership, communication, and stakeholder management skills. Good-to-Have Skills: AWS Certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect. Familiarity with data privacy laws and frameworks (GDPR, HIPAA). Experience with AI governance and ethical AI frameworks. Expertise in cost optimization and performance tuning for AI on the cloud. Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services. Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
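As a hedged illustration of the drift-detection loop described above, the sketch below compares a feature's production distribution against a training baseline with a two-sample Kolmogorov-Smirnov test and flags divergence. The synthetic data, threshold, and alerting hook are assumptions for illustration only.

```python
# Hedged sketch: simple data-drift check for one feature using a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)    # stand-in for training data
production = rng.normal(loc=0.4, scale=1.2, size=5_000)  # stand-in for recent traffic

stat, p_value = ks_2samp(baseline, production)

P_THRESHOLD = 0.01  # illustrative significance threshold
if p_value < P_THRESHOLD:
    # In a real pipeline this would publish a CloudWatch metric or page on-call,
    # and potentially trigger the retraining pipeline mentioned in the posting.
    print(f"Drift detected: KS statistic={stat:.3f}, p={p_value:.2e}")
else:
    print("No significant drift detected")
```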
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Job Role: Data Scientist / AI Solution Engineer – India contractor Band Level: NA Reports to: Team Leader/Manager Preferred Location: Gurugram, Haryana, India Work Timings: 11:30 am – 08 pm IST Implement Proof of Concept and Pilot machine learning solutions using AWS ML toolkit and SaaS platforms Configure and optimize pre-built ML models for specific business requirements Set up automated data pipelines leveraging AWS services and third-party tools Create dashboards and visualizations to communicate insights to stakeholders Document technical processes and knowledge transfer for future maintenance Requirements Bachelor’s degree in computer science, Data Science, or related field 1-3 years of professional experience implementing machine learning solutions. We can entertain someone who is a fresh graduate with significant work in AI in either internships or projects Demonstrated experience with AWS machine learning services (SageMaker, AWS ML Services, and understanding of underpinnings of ML models and evaluations.) Proficiency with data science SaaS tools (Dataiku, Indico, H2O.ai, or similar platforms) Working knowledge of AWS data engineering services (S3, Glue, Athena, Lambda) Experience with Python and common data manipulation libraries Strong problem-solving skills and ability to work independently Preferred Qualifications Previous contract or work experience in similar roles Familiarity with API integration between various platforms Experience with BI tools (Power BI, QuickSight) Knowledge of cost optimization techniques for AWS ML services Prior experience in our industry (please see company overview)
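As a hedged example of the automated-pipeline work described above, the sketch below triggers an AWS Glue ETL job with boto3 and polls until it finishes. The job name and region are hypothetical placeholders.

```python
# Hedged sketch: trigger an AWS Glue ETL job and wait for completion.
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")  # assumed region
JOB_NAME = "daily-feature-refresh"  # hypothetical Glue job name

run_id = glue.start_job_run(JobName=JOB_NAME)["JobRunId"]

while True:
    state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print(f"Glue job {JOB_NAME} finished with state {state}")
        break
    time.sleep(30)  # simple polling; Step Functions or EventBridge would be more robust
```

The resulting tables would then typically be queried through Athena and surfaced in the Power BI or QuickSight dashboards the posting mentions.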
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Job Title: Data Science
Candidate Specification: 6+ years, Notice: Immediate to 15 days, Hybrid model.
Job Description
- 5+ years of hands-on experience as an AI Engineer, Machine Learning Engineer, or a similar role focused on building and deploying AI/ML solutions.
- Strong proficiency in Python and its relevant ML/data science libraries (e.g., NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
- Extensive experience with at least one major deep learning framework such as TensorFlow, PyTorch, or Keras.
- Solid understanding of machine learning principles, algorithms (e.g., regression, classification, clustering, ensemble methods), and statistical modeling.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services (e.g., SageMaker, Azure ML, Vertex AI).
Skills Required
Role: Data Science (AI/ML)
Industry Type: IT Services & Consulting
Functional Area: IT-Software
Required Education: Bachelor Degree
Employment Type: Full Time, Permanent
Key Skills: Data Science, AI Engineer, Machine Learning, AI/ML, Python, AWS
Other Information
Job Code: GO/JC/686/2025
Recruiter Name: Sheena Rakesh
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description Enphase Energy is a global energy technology company and a leading provider of solar, battery, and electric vehicle charging products. Founded in 2006, our innovative microinverter technology revolutionized solar power, making it a safer, more reliable, and scalable energy source. Today, the Enphase Energy System enables users to make, use, save, and sell their own power. Enphase is also one of the most successful and innovative clean energy companies in the world, with more than 80 million products shipped across 160 countries. Join our dynamic teams designing and developing next-gen energy technologies and help drive a sustainable future! About The Role The Sr. Data Scientist will be responsible for analyzing product performance in the fleet. Provides support for the data management activities of the Quality/Customer Service organization. Collaborates with Engineering/Quality/CS teams and Information Technology. What You Will be doing Strong understanding of industrial processes, sensor data, and IoT platforms, essential for building effective predictive maintenance models Experience translating theoretical concepts into engineered features, with a demonstrated ability to create features capturing important events or transitions within the data Expertise in crafting custom features that highlight unique patterns specific to the dataset or problem, enhancing model predictive power. Ability to combine and synthesize information from multiple data sources to develop more informative features Advanced knowledge in Apache Spark (PySpark, SparkSQL, SparkR) and distributed computing, demonstrated through efficient processing and analysis of large-scale datasets. Proficiency in Python, R, and SQL, with a proven track record of writing optimized and efficient Spark code for data processing and model training Hands-on experience with cloud-based machine learning platforms such as AWS SageMaker and Databricks, showcasing scalable model development and deployment Demonstrated capability to develop and implement custom statistical algorithms tailored to specific anomaly detection tasks Proficiency in statistical methods for identifying patterns and trends in large datasets, essential for predictive maintenance. Demonstrated expertise in engineering features to highlight deviations or faults for early detection. Proven leadership in managing predictive maintenance projects from conception to deployment, with a successful track record of cross-functional team collaboration Experience extracting temporal features, such as trends, seasonality, and lagged values, to improve model accuracy. Skills in filtering, smoothing, and transforming data for noise reduction and effective feature extraction Experience optimizing code for performance in high-throughput, low-latency environments. 
Experience deploying models into production, with expertise in monitoring their performance and integrating them with CI/CD pipelines using AWS, Docker, or Kubernetes Familiarity with end-to-end analytical architectures, including data lakes, data warehouses, and real-time processing systems Experience creating insightful dashboards and reports using tools such as Power BI, Tableau, or custom visualization frameworks to effectively communicate model results to stakeholders 6+ years of experience in data science with a significant focus on predictive maintenance and anomaly detection Who You Are And What You Bring Bachelor’s or Master’s degree/ Diploma in Engineering, Statistics, Mathematics or Computer Science 6+ years of experience as a Data Scientist Strong problem-solving skills Proven ability to work independently and accurately
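To illustrate the temporal feature engineering this posting emphasizes (lags, deltas, and rolling statistics over fleet sensor data), here is a hedged PySpark sketch. The column names, data path, and window sizes are hypothetical placeholders.

```python
# Hedged sketch: lag and rolling-window features for sensor data in PySpark.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("pm-features").getOrCreate()

# Assumed schema: device_id, ts (timestamp), power (sensor reading)
df = spark.read.parquet("s3://fleet-telemetry/readings/")  # hypothetical path

w = Window.partitionBy("device_id").orderBy("ts")

features = (
    df
    .withColumn("power_lag_1", F.lag("power", 1).over(w))                 # previous reading
    .withColumn("power_delta", F.col("power") - F.col("power_lag_1"))     # step change
    .withColumn("power_roll_mean_7", F.avg("power").over(w.rowsBetween(-6, 0)))   # rolling mean
    .withColumn("power_roll_std_7", F.stddev("power").over(w.rowsBetween(-6, 0)))  # rolling volatility
)

features.show(5)
```

Features like these feed the anomaly-detection and predictive-maintenance models described above, whether trained in Databricks or SageMaker.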
Posted 1 week ago
5.0 - 10.0 years
20 - 25 Lacs
Pune
Hybrid
Greetings from Intelliswift - An LTTS Company Role: Fullstack Developer Work Location: Pune Experience: 5+ years Job Description in detail: Job Summary Role: Fullstack Developer Experience: 5 to 8 Years Job Location: Pune As a Fullstack Developer specializing in generative AI and cloud technologies, you will design, build, and maintain end-to-end applications on AWS. You'll leverage services such as Bedrock, SageMaker, LangChain and Amplify to integrate AI/ML capabilities, architect scalable infrastructure, and deliver seamless front-end experiences using React. You'll partner with UX/UI designers, ML engineers, DevOps teams, and product stakeholders to take features from concept through production deployment. Job Description: 5+ years of professional experience as a Fullstack Developer building scalable web applications. Proficiency in Python and/or JavaScript/TypeScript; strong command of modern frameworks (React, Node.js). Hands-on AWS expertise: Bedrock, SageMaker, Amplify, Lambda, API Gateway, DynamoDB/RDS, CloudWatch, IAM, VPC. Architect & develop full-stack solutions using React for front-end, Python/Node.js for back-end, and AWS Lambda/API Gateway or containers for serverless services. Integrate Generative AI capabilities leveraging AWS Bedrock, LangChain retrieval-augmented pipelines, and custom prompt engineering to power intelligent assistants and data-driven insights. Design & Manage AWS Infrastructure using CDK/CloudFormation for VPCs, IAM policies, S3, DynamoDB/RDS, ECS/EKS, and Implement DevOps/MLOps Workflows: establish CI/CD pipelines (CodePipeline, CodeBuild, Jenkins), containerization (Docker), automated testing, and rollout strategies. Develop Interactive UIs in React: translate Figma/Sketch designs into responsive components, integrate with backend APIs, and harness AWS Amplify for accelerated feature delivery. Solid understanding of AI/ML concepts, including prompt engineering, generative AI frameworks (LangChain), and model deployment patterns. Experience designing and consuming APIs: RESTful and GraphQL. DevOps/MLOps skills: CI/CD pipeline creation, containerization (Docker), orchestration (ECS/EKS), infrastructure as code. Cloud architecture know-how: security groups, network segmentation, high-availability patterns, cost optimization. Excellent problem-solving ability and strong communication skills to collaborate effectively across distributed teams. Share your updated profiles on shakambnari.nayak@intelliswift.com with details.
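As a hedged illustration of the AWS Bedrock integration this role describes, the snippet below calls a Claude model through the bedrock-runtime client. The region, model ID, and request body schema vary by model and account access, so treat them as assumptions rather than a definitive recipe.

```python
# Hedged sketch: invoke an Anthropic Claude model via Amazon Bedrock.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed model ID; requires account access

body = {
    "anthropic_version": "bedrock-2023-05-31",  # Anthropic-on-Bedrock message format
    "max_tokens": 512,
    "messages": [
        {"role": "user", "content": "Summarize the key risks in this incident report: ..."}
    ],
}

response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])  # response shape follows the Anthropic messages API
```

In the full-stack setup the posting describes, a call like this would typically live behind a Lambda or container endpoint that the React front end consumes.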
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Work Mode: Hybrid
Work Location: Chennai / Hyderabad
Work Timing: 2 PM to 11 PM
Primary: GenAI (Python, AWS Bedrock, Claude, SageMaker, Machine Learning experience)
8+ years of full-stack development experience
5+ years of AI/GenAI development
Strong proficiency in JavaScript/TypeScript, Python, or similar languages
Experience with modern frontend frameworks (React, Vue.js, Angular)
Backend development experience with REST APIs and microservices
Knowledge of AWS services, specifically AWS Bedrock, SageMaker
Experience with generative AI models, LLM integration and Machine Learning
Understanding of prompt engineering and model optimization
Hands-on experience with foundation models (Claude, GPT, LLaMA, etc.)
Experience with retrieval-augmented generation (RAG)
Knowledge of vector databases and semantic search
AWS cloud platform expertise (Lambda, API Gateway, S3, RDS, etc.)
Knowledge of financial regulatory requirements and risk frameworks.
Experience integrating AI solutions into financial workflows or trading systems.
Published work or patents in financial AI or applied machine learning.
Posted 1 week ago
1.0 - 2.0 years
2 - 5 Lacs
India
On-site
Company Overview: Techmindz is a premier IT training institute located in Infopark, Kochi, backed by NDimensionz and recognized for industry‑aligned, hands‑on training programs with strong placement support. Key Responsibilities Conduct structured and interactive training sessions on core AI concepts including Machine Learning, Deep Learning, and Data Science tools. Develop and maintain engaging course content—slide decks, coding notebooks, real-world datasets, projects, and assessments. Guide students through hands-on labs, algorithm implementation, and use-case-based projects. Evaluate student progress via quizzes, assignments, capstone projects, and provide personalized feedback. Stay updated with the latest trends in AI/ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn). Collaborate with the curriculum team to refine learning paths and keep content industry-relevant. Contribute to faculty discussions to ensure quality training delivery and continuous improvement. Requirements Bachelor’s degree in Computer Science, Data Science, or a related field. 1–2 years of experience in AI/ML development or teaching/training in related domains. Strong grasp of supervised/unsupervised learning, neural networks, model evaluation, and data preprocessing. Hands-on experience with Python and key libraries (NumPy, Pandas, Scikit-learn, Matplotlib, TensorFlow/PyTorch). Familiarity with Jupyter Notebooks, version control (Git), and data visualization tools. Excellent communication and presentation skills with the ability to explain complex topics clearly. Prior teaching/training or mentorship experience is an added advantage. Exposure to real-world datasets, cloud AI tools (Google Colab, AWS SageMaker), or certifications is a bonus. How to Apply Send your resume (highlighting AI/ML projects, teaching exposure, and certifications if any) to: careers@techmindz.com Job Types: Full-time, Freelance Pay: ₹200,000.00 - ₹500,000.00 per year Education: Bachelor's (Preferred) Experience: total work: 1 year (Preferred) Teaching: 1 year (Preferred) Language: English (Preferred) Work Location: In person
Posted 1 week ago
12.0 years
0 Lacs
Hyderābād
On-site
Tezo is a new generation Digital & AI solutions provider, with a history of creating remarkable outcomes for our customers. We bring exceptional experiences using cutting-edge analytics, data proficiency, technology, and digital excellence. We are seeking a highly experienced and dynamic Practice Head – Data Science & AI to lead our data practice in Hyderabad. This role is ideal for a technology leader with a strong foundation in Data Science, Artificial Intelligence (AI), and Machine Learning (ML), along with proven experience in building and scaling data practices. The ideal candidate will also have a strong business acumen with experience in solution selling and pre-sales. Key Responsibilities: Leadership & Strategy: Define and drive the strategic vision for the Data Science and AI practice. Build, lead, and mentor a high-performing team of data scientists, ML engineers, and AI experts. Collaborate with cross-functional teams to integrate data-driven solutions into broader business strategies. Technical Expertise: Lead the design and delivery of advanced AI/ML solutions across various domains. Stay abreast of industry trends, emerging technologies, and best practices in AI, ML, and data science. Provide technical guidance and hands-on support as needed for key initiatives. Practice Development: Establish frameworks, methodologies, and best practices to scale the data science practice. Define and implement reusable components, accelerators, and IPs for efficient solution delivery. Client Engagement & Pre-Sales: Support business development by working closely with sales teams in identifying opportunities, creating proposals, and delivering presentations. Engage in solution selling by understanding client needs and proposing tailored AI/ML-based solutions. Build strong relationships with clients and act as a trusted advisor on their data journey. Required Skills & Experience: 12+ years of overall experience with at least 5+ years in leading data science/AI teams. Proven experience in setting up or leading a data science or AI practice. Strong hands-on technical background in AI, ML, NLP, predictive analytics, and data engineering. Experience with tools and platforms like Python, R, TensorFlow, PyTorch, Azure ML, AWS SageMaker, etc. Strong understanding of data strategy, governance, and architecture. Demonstrated success in solutioning and pre-sales engagements. Excellent communication, leadership, and stakeholder management skills.
Posted 1 week ago
5.0 years
11 - 15 Lacs
Hyderābād
On-site
Job Title: Python Backend Engineer – AWS | GenAI & ML Experience: 5 Years Employment Type: Full-Time Job Summary: We are seeking an experienced Python Backend Engineer with strong AWS expertise and a background in AI/ML to build and scale intelligent backend systems and GenAI-driven applications. The ideal candidate should have hands-on experience building REST APIs using Django or FastAPI and implementing AI/LLM applications using Langchain. Experience with LangGraph is a strong plus. Key Responsibilities: Design, develop, and maintain Python-based backend systems and AI-powered services. Build and manage RESTful APIs using Django or FastAPI for AI/ML model integration. Develop and deploy machine learning and GenAI models using frameworks such as TensorFlow, PyTorch, or Scikit-learn. Implement GenAI pipelines using Langchain; LangGraph experience is a plus. Utilize AWS services including EC2, Lambda, S3, SageMaker, and CloudFormation for infrastructure and deployment. Collaborate with data scientists, DevOps, and architects to integrate models and workflows into production. Build and manage CI/CD pipelines for backend and model deployments. Ensure performance, scalability, and security of applications in cloud environments. Monitor production systems, troubleshoot issues, and optimize model and API performance. Required Skills and Experience: 5 years of hands-on experience in Python backend development. Strong experience building RESTful APIs using Django or FastAPI. Proficiency in AWS cloud services: EC2, S3, Lambda, SageMaker, CloudFormation, etc. Solid understanding of ML/AI concepts and model deployment practices. Experience with ML libraries like Pandas, NumPy, Scikit-learn, TensorFlow, or PyTorch. Hands-on experience with Langchain for building GenAI applications. Familiarity with DevOps tools: Docker, Kubernetes, Git, Jenkins, Terraform. Good understanding of microservices architecture and CI/CD workflows. Agile development experience. Nice to Have: Experience with LangGraph for agentic workflows or graph-based orchestration. Knowledge of LLMs, embeddings, and vector databases (e.g., Pinecone, FAISS). Exposure to OpenAI APIs, AWS Bedrock, or similar GenAI platforms. Understanding of MLOps tools and practices for model monitoring, versioning, and retraining. Job Types: Full-time, Permanent Pay: ₹1,100,000.00 - ₹1,500,000.00 per year Benefits: Health insurance Provident Fund Location Type: In-person Schedule: Day shift Monday to Friday Morning shift Work Location: In person Speak with the employer +91 9966550640
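As a hedged sketch of the LangChain-based GenAI pipeline work this posting mentions, the example below wires a prompt template to a chat model using LangChain's expression syntax. It assumes the langchain-core and langchain-openai packages and an OPENAI_API_KEY in the environment; the model name is a placeholder, and Bedrock or another provider could be substituted.

```python
# Hedged sketch: minimal LangChain prompt -> LLM -> string-output chain (LCEL style).
# Assumes: pip install langchain-core langchain-openai, and OPENAI_API_KEY set.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template(
    "You are a helpful assistant. Summarize the following ticket in two sentences:\n\n{ticket}"
)
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # placeholder model name
chain = prompt | llm | StrOutputParser()

summary = chain.invoke({"ticket": "Customer reports intermittent 502 errors on the /predict API..."})
print(summary)
```

In the backend described above, a chain like this would be exposed through a FastAPI or Django endpoint and versioned and monitored like any other model artifact.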
Posted 1 week ago
1.0 - 3.0 years
4 - 8 Lacs
Hyderābād
On-site
Job Role: Data Scientist / AI Solution Engineer – India contractor Band Level: NA Reports to: Team Leader/Manager Preferred Location: Gurugram, Haryana, India Work Timings: 11:30 am – 08 pm IST - Implement Proof of Concept and Pilot machine learning solutions using AWS ML toolkit and SaaS platforms - Configure and optimize pre-built ML models for specific business requirements - Set up automated data pipelines leveraging AWS services and third-party tools - Create dashboards and visualizations to communicate insights to stakeholders - Document technical processes and knowledge transfer for future maintenance Requirements - Bachelor’s degree in computer science, Data Science, or related field - 1-3 years of professional experience implementing machine learning solutions. - We can entertain someone who is a fresh graduate with significant work in AI in either internships or projects - Demonstrated experience with AWS machine learning services (SageMaker, AWS ML Services, and understanding of underpinnings of ML models and evaluations.) - Proficiency with data science SaaS tools (Dataiku, Indico, H2O.ai, or similar platforms) - Working knowledge of AWS data engineering services (S3, Glue, Athena, Lambda) - Experience with Python and common data manipulation libraries - Strong problem-solving skills and ability to work independently Preferred Qualifications - Previous contract or work experience in similar roles - Familiarity with API integration between various platforms - Experience with BI tools (Power BI, QuickSight) - Knowledge of cost optimization techniques for AWS ML services - Prior experience in our industry (please see company overview)
Posted 1 week ago
5.0 years
9 - 16 Lacs
India
On-site
Gen-AI Tech Lead - Enterprise AI Applications
About Us
We're a cutting-edge technology company building enterprise-grade AI solutions that transform how businesses operate. Our platform leverages the latest in Generative AI to create intelligent applications for document processing, automated decision-making, and knowledge management across industries.
Role Overview
We're seeking an exceptional Gen-AI Tech Lead to architect, build, and scale our next-generation AI-powered enterprise applications. You'll lead the technical strategy for implementing Large Language Models, fine-tuning custom models, and deploying production-ready AI systems that serve millions of users.
Key Responsibilities - AI/ML Leadership (90% Hands-on)
Design and implement enterprise-scale Generative AI applications using custom LLMs or foundation models (GPT, Claude, Llama, Gemini)
Lead fine-tuning initiatives for domain-specific models and custom use cases
Build and optimize model training pipelines for large-scale data processing
Develop RAG (Retrieval-Augmented Generation) systems with vector databases and semantic search
Implement prompt engineering strategies and automated prompt optimization
Create AI evaluation frameworks and model performance monitoring systems
Enterprise Application Development
Build scalable Python applications integrating multiple AI models and APIs
Develop microservices architecture for AI model serving and orchestration
Implement real-time AI inference systems with sub-second response times
Design fault-tolerant systems with fallback mechanisms and error handling
Create APIs and SDKs for enterprise AI integration
Build AI model version control and A/B testing frameworks
MLOps & Infrastructure
Containerize AI applications using Docker and orchestrate with Kubernetes
Design and implement CI/CD pipelines for ML model deployment
Set up model monitoring, drift detection, and automated retraining systems
Optimize inference performance and cost efficiency in cloud environments
Implement security and compliance measures for enterprise AI applications
Technical Leadership
Lead a team of 3-5 AI engineers and data scientists
Establish best practices for AI development, testing, and deployment
Mentor team members on cutting-edge AI technologies and techniques
Collaborate with product and business teams to translate requirements into AI solutions
Drive technical decision-making for AI architecture and technology stack
Required Skills & Experience - Core AI/ML Expertise
Python: 5+ years of production Python development with AI/ML libraries
LLMs: Hands-on experience with GPT-4, Claude, Llama 2/3, Gemini, or similar models
Fine-tuning: Proven experience fine-tuning models using LoRA, QLoRA, or full parameter tuning
Model Training: Experience training models from scratch or continued pre-training
Frameworks: Expert-level knowledge of PyTorch, TensorFlow, Hugging Face Transformers
Vector Databases: Experience with Pinecone, Weaviate, ChromaDB, or Qdrant
Technical Stack
AI/ML Stack
Models: OpenAI GPT, Anthropic Claude, Meta Llama, Google Gemini
Frameworks: PyTorch, Hugging Face Transformers, LangChain, LlamaIndex
Training: Distributed training with DeepSpeed, Accelerate, or Fairscale
Serving: vLLM, TensorRT-LLM, or Triton Inference Server
Vector Search: Pinecone, Weaviate, FAISS, Elasticsearch
Infrastructure & DevOps
Containerization: Docker, Kubernetes, Helm charts
Cloud: AWS (ECS, EKS, Lambda, SageMaker), GCP Vertex AI
Databases: PostgreSQL, MongoDB, Redis, Neo4j
Monitoring: Prometheus, Grafana, DataDog, MLflow
CI/CD: GitHub Actions, Jenkins, ArgoCD
Professional Growth
Work directly with founders and C-level executives
Opportunity to publish research and speak at AI conferences
Access to latest AI models and cutting-edge research
Mentorship from industry experts and AI researchers
Budget for attending top AI conferences (NeurIPS, ICML, ICLR)
Ideal Candidate Profile
Passionate about pushing the boundaries of AI technology
Strong engineering mindset with focus on production systems
Experience shipping AI products used by thousands of users
Stays current with latest AI research and implements cutting-edge techniques
Excellent problem-solving skills and ability to work under ambiguity
Leadership experience in fast-paced, high-growth environments
Apply now and help us democratize AI for enterprise customers worldwide.
Job Type: Full-time
Pay: ₹900,000.00 - ₹1,600,000.00 per year
Schedule: Monday to Friday
Supplemental Pay: Performance bonus
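To illustrate the LoRA fine-tuning experience this role calls for, here is a hedged configuration sketch using Hugging Face Transformers and PEFT. The base model name and target module names are assumptions (they differ per architecture), and the training loop itself is omitted.

```python
# Hedged sketch: wrap a causal LM with a LoRA adapter using PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed base model for illustration

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
base_model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

lora_config = LoraConfig(
    r=16,                                  # adapter rank
    lora_alpha=32,                         # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; names vary by architecture
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the base parameters
# From here, a standard transformers Trainer (or trl's SFTTrainer) would run the fine-tune,
# and the adapter weights would be versioned and served via vLLM or a similar engine.
```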
Posted 1 week ago
2.0 years
0 Lacs
Haryana
On-site
Provectus helps companies adopt ML/AI to transform the ways they operate, compete, and drive value. The focus of the company is on building ML Infrastructure to drive end-to-end AI transformations, assisting businesses in adopting the right AI use cases, and scaling their AI initiatives organization-wide in such industries as Healthcare & Life Sciences, Retail & CPG, Media & Entertainment, Manufacturing, and Internet businesses. We are seeking a highly skilled Machine Learning (ML) Tech Lead with a strong background in Large Language Models (LLMs) and AWS Cloud services. The ideal candidate will oversee the development and deployment of cutting-edge AI solutions while managing a team of 5-10 engineers. This leadership role demands hands-on technical expertise, strategic planning, and team management capabilities to deliver innovative products at scale. Responsibilities: Leadership & Management Lead and manage a team of 5-10 engineers, providing mentorship and fostering a collaborative team environment; Drive the roadmap for machine learning projects aligned with business goals; Coordinate cross-functional efforts with product, data, and engineering teams to ensure seamless delivery. Machine Learning & LLM Expertise Design, develop, and fine-tune LLMs and other machine learning models to solve business problems; Evaluate and implement state-of-the-art LLM techniques for NLP tasks such as text generation, summarization, and entity extraction; Stay ahead of advancements in LLMs and apply emerging technologies; Expertise in multiple main fields of ML: NLP, Computer Vision, RL, deep learning and classical ML. AWS Cloud Expertise Architect and manage scalable ML solutions using AWS services (e.g., SageMaker, Lambda, Bedrock, S3, ECS, ECR, etc.); Optimize models and data pipelines for performance, scalability, and cost-efficiency in AWS; Ensure best practices in security, monitoring, and compliance within the cloud infrastructure. Technical Execution Oversee the entire ML lifecycle, from research and experimentation to production and maintenance; Implement MLOps and LLMOps practices to streamline model deployment and CI/CD workflows; Debug, troubleshoot, and optimize production ML models for performance. Team Development & Communication Conduct regular code reviews and ensure engineering standards are upheld; Facilitate professional growth and learning for the team through continuous feedback and guidance; Communicate progress, challenges, and solutions to stakeholders and senior leadership. Qualifications: Proven experience with LLMs and NLP frameworks (e.g., Hugging Face, OpenAI, or Anthropic models); Strong expertise in AWS Cloud Services; Strong experience in ML/AI, including at least 2 years in a leadership role; Hands-on experience with Python, TensorFlow/PyTorch, and model optimization; Familiarity with MLOps tools and best practices; Excellent problem-solving and decision-making abilities; Strong communication skills and the ability to lead cross-functional teams; Passion for mentoring and developing engineers.
Posted 1 week ago
0 years
0 Lacs
Haryana
On-site
Join us at Provectus to be a part of a team that is dedicated to building cutting-edge technology solutions that have a positive impact on society. Our company specializes in AI and ML technologies, cloud services, and data engineering, and we take pride in our ability to innovate and push the boundaries of what's possible. As an ML Engineer, you’ll be provided with all opportunities for development and growth. Let's work together to build a better future for everyone! Requirements: Comfortable with standard ML algorithms and underlying math. Strong hands-on experience with LLMs in production, RAG architecture, and agentic systems AWS Bedrock experience strongly preferred Practical experience with solving classification and regression tasks in general, feature engineering. Practical experience with ML models in production. Practical experience with one or more use cases from the following: NLP, LLMs, and Recommendation engines. Solid software engineering skills (i.e., ability to produce well-structured modules, not only notebook scripts). Python expertise, Docker. English level - strong Intermediate. Excellent communication and problem-solving skills. Will be a plus: Practical experience with cloud platforms (AWS stack is preferred, e.g. Amazon SageMaker, ECR, EMR, S3, AWS Lambda). Practical experience with deep learning models. Experience with taxonomies or ontologies. Practical experience with machine learning pipelines to orchestrate complicated workflows. Practical experience with Spark/Dask, Great Expectations. Responsibilities: Create ML models from scratch or improve existing models. Collaborate with the engineering team, data scientists, and product managers on production models. Develop experimentation roadmap. Set up a reproducible experimentation environment and maintain experimentation pipelines. Monitor and maintain ML models in production to ensure optimal performance. Write clear and comprehensive documentation for ML models, processes, and pipelines. Stay updated with the latest developments in ML and AI and propose innovative solutions.
Posted 1 week ago
0.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About Gartner IT:
Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.

About the role:
Gartner is seeking a talented and passionate MLOps Engineer to join our growing team. In this role, you will be responsible for building Python- and Spark-based ML solutions that ensure the reliability and efficiency of our machine learning systems in production. You will collaborate closely with data scientists to operationalize existing models and optimize our ML workflows. Your expertise in Python, Spark, model inferencing, and AWS services will be crucial in driving our data-driven initiatives.

What you'll do:
- Develop ML inferencing and data pipelines with AWS tools (S3, EMR, Glue, Athena).
- Develop Python APIs using frameworks such as FastAPI and Django.
- Deploy and optimize ML models on SageMaker and EKS.
- Implement IaC with Terraform and CI/CD for seamless deployments.
- Ensure the quality, scalability, and performance of APIs.
- Collaborate with product managers, data scientists, and other engineers for smooth operations.
- Communicate technical insights clearly and support production troubleshooting when needed.
(A minimal sketch of such a model-serving endpoint appears after this posting.)

What you'll need:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Must have:
- 0-2 years of experience building data and MLOps pipelines using Python and Spark.
- Strong proficiency in Python; exposure to Spark is good to have.
- Hands-on experience with RESTful development using Python frameworks such as FastAPI and Django.
- Experience with Docker and Kubernetes (EKS) or SageMaker.
- Experience with CloudFormation or Terraform for deploying and managing AWS resources.
- Strong problem-solving and analytical skills.
- Ability to work effectively within an agile environment.

Who you are:
- Bachelor's degree or foreign equivalent degree in Computer Science or a related field required.
- Excellent communication and prioritization skills.
- Able to work independently or within a team, proactively, in a fast-paced Agile/Scrum environment.
- Owns success: takes responsibility for the successful delivery of solutions.
- Strong desire to improve your skills in software testing and related technologies.

Don't meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.

Who are we?
At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future.
How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive, working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 101728

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
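As referenced in the responsibilities above, here is a minimal sketch of a FastAPI inference endpoint of the kind this role builds. The request/response schema and scoring logic are placeholder assumptions; a real service would load a trained model (for example from S3 or a SageMaker/MLflow registry) at startup.

```python
# Minimal sketch of a FastAPI inference endpoint. The request/response schema
# and the scoring logic are placeholders; a real service would load a trained
# model (e.g., from S3 or a SageMaker/MLflow registry) at startup.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-service")

class PredictRequest(BaseModel):
    sessions: float
    plan: str

class PredictResponse(BaseModel):
    churn_probability: float

def score(payload: PredictRequest) -> float:
    # Stand-in for model.predict_proba on engineered features.
    base = 0.8 if payload.sessions < 3 else 0.2
    return base if payload.plan == "free" else base * 0.5

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    return PredictResponse(churn_probability=score(req))

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)
```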
Posted 1 week ago
0 years
0 Lacs
Delhi, India
On-site
Job Location: Gurugram, Haryana, India
Work Timings: 11:30 am – 08:00 pm IST
Experience: 3-10 years

Requirements:
- Bachelor's degree in Computer Science, Data Science, or a related field.
- Experience implementing machine learning solutions. We can also consider a fresh graduate with significant AI work from internships or projects.
- Demonstrated experience with AWS machine learning services (SageMaker and related AWS ML services) and an understanding of the underpinnings of ML models and their evaluation.
- Proficiency with data science SaaS tools (Dataiku, Indico, H2O.ai, or similar platforms).
- Working knowledge of AWS data engineering services (S3, Glue, Athena, Lambda).
- Experience with Python and common data manipulation libraries.
- Strong problem-solving skills and ability to work independently.

Preferred Qualifications:
- Previous contract or work experience in similar roles.
- Familiarity with API integration between various platforms.
- Experience with BI tools (Power BI, QuickSight).
- Knowledge of cost optimization techniques for AWS ML services.
- Prior experience in our industry (please see company overview).
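To make the AWS ML services requirement concrete, here is a minimal, hedged sketch of calling an already-deployed SageMaker real-time endpoint with boto3. The endpoint name and the JSON payload/response schema are hypothetical and depend entirely on how the model was packaged and deployed.

```python
# Minimal sketch: calling an already-deployed SageMaker real-time endpoint with
# boto3. The endpoint name and the JSON payload/response schema are hypothetical
# and depend on how the model was packaged and deployed.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")  # assumes AWS credentials and region are configured

def predict(features: dict, endpoint_name: str = "example-model-endpoint") -> dict:
    """Send a JSON payload to the endpoint and parse the JSON reply."""
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(features),
    )
    return json.loads(response["Body"].read())

if __name__ == "__main__":
    print(predict({"sessions": 2, "plan": "free"}))
```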
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience in building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks and can translate complex business goals into innovative, AI-driven technical solutions.

Primary Stack & Tools:
- Languages: Python, SQL, Bash
- ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
- GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI
- Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen
- Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch
- Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock)
- MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI
- Data Tools: Snowflake, BigQuery, Spark, Airflow

Key Responsibilities:
- Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search).
- Design and oversee the implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases.
- Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use.
- Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps.
- Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance.
- Manage the technical roadmap, sprint cadence, and a team of 3-5 AI engineers; coach them on best practices.
- Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards.
- Evaluate emerging GenAI and agentic tools; run proofs of concept and guide build-vs-buy decisions.

Qualifications:
- 10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design.
- Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems.
- Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs.
- Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure.
- Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming).
- Familiarity with data governance, AI ethics, and security considerations in production-grade systems.
- Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
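To ground the RAG responsibilities above, here is a minimal sketch of the retrieval step: embed a handful of documents, embed the query, and pick the closest passages to build the LLM prompt. The embedding model and the example documents are illustrative assumptions; a production design of the kind described would typically use a managed vector store such as FAISS, Pinecone, or Weaviate rather than an in-memory array.

```python
# Minimal sketch of the retrieval step of a RAG pipeline: embed documents and a
# query, then select the closest passages to ground an LLM prompt. The embedding
# model and the documents are illustrative assumptions; production systems would
# typically use a vector store such as FAISS, Pinecone, or Weaviate.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Enterprise plans include single sign-on and audit logging.",
    "The API rate limit is 100 requests per minute per key.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # dot product equals cosine similarity for normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

context = "\n".join(retrieve("How long do customers have to return an item?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How long can I return an item?"
print(prompt)  # this prompt would then be sent to an LLM (e.g., via Bedrock or OpenAI)
```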
Posted 1 week ago
5.0 - 8.0 years
4 - 15 Lacs
Pune, Maharashtra, India
On-site
We are seeking experienced GenAI Developers to join our team in Pune. As a key member of our AI innovation group, you will be responsible for designing, architecting, and implementing scalable Generative AI solutions leveraging AWS services. You will lead the transformation of existing LLM-based architectures into modern, cloud-native solutions and collaborate with cross-functional teams to deliver cutting-edge AI applications.

Key Responsibilities:
- Design and architect scalable GenAI solutions using AWS Bedrock and other AWS services.
- Lead the transformation of a homegrown LLM-based architecture into a managed cloud-native solution.
- Provide architectural guidance across components: RAG pipelines, LLM orchestration, vector DB integration, inference workflows, and scalable endpoints.
- Collaborate closely with GenAI developers, MLOps, and data engineering teams to ensure alignment on implementation.
- Evaluate, integrate, and benchmark frameworks such as LangChain, AutoGen, Haystack, or LlamaIndex.
- Ensure infrastructure and solutions are secure, scalable, and cost-optimized on AWS.
- Act as a technical SME and hands-on contributor for architecture reviews and POCs.

Must-Have Skills:
- Deep expertise with AWS Bedrock, S3, Lambda, SageMaker, API Gateway, DynamoDB/Redshift, etc.
- Proven experience architecting LLM applications with RAG, embeddings, and prompt engineering.
- Hands-on understanding of frameworks like LangChain, LlamaIndex, or AutoGen.
- Knowledge of LLMs such as Anthropic Claude, Mistral, Falcon, or custom models.
- Strong understanding of API design, containerization (Docker), and serverless architecture.
- Experience leading cloud-native transformations.

Preferred:
- Experience with CI/CD and DevOps integration for ML/AI pipelines.
- Exposure to Azure/GCP GenAI services in addition to AWS (bonus).
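As a minimal sketch of the AWS Bedrock integration this role centers on, the following Lambda-style handler calls an Anthropic Claude model through the Bedrock runtime. The model ID and the request/response shapes follow the Bedrock Messages API as commonly documented, but both are assumptions to verify against current AWS documentation before use.

```python
# Minimal sketch of an AWS Lambda handler that calls an Anthropic Claude model
# through the Amazon Bedrock runtime. The model ID and the request/response
# shapes follow the Bedrock Messages API as commonly documented, but should be
# verified against current AWS documentation before relying on them.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # assumed example model ID

def handler(event, context):
    prompt = event.get("prompt", "Summarize our returns policy in one sentence.")
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    payload = json.loads(response["body"].read())
    answer = payload["content"][0]["text"]  # first text block of the model reply
    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```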
Posted 1 week ago