0.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Location: Chennai, Tamil Nadu, India
Job ID: R-230200
Date posted: 26/06/2025

Job Title: Senior Consultant - AI Engineer

Introduction to role:
Are you ready to redefine an industry and change lives? As a Senior Consultant - AI Engineer, you'll develop and deploy key AI products that generate business and scientific insights through advanced data science techniques. Dive into building models using both foundational and innovative methods, processing structured and unstructured data, and collaborating closely with internal partners to solve complex problems in drug development, manufacturing, and supply chain. This is your chance to make a direct impact on patients by transforming our ability to develop life-changing medicines!

About the Operations IT team
Operations IT is a global capability supporting the Global Operations organization across Pharmaceutical Technology Development, Manufacturing & Global Engineering, Quality Control, Sustainability, Supply Chain, Logistics, and Global External Sourcing and Procurement. We operate from key hubs in the UK, Sweden, the US, China, and our Global Technology Centers in India and Mexico. Our work directly impacts patients by transforming our ability to develop life-changing medicines, combining pioneering science with leading digital technology platforms and data.

Accountabilities:
- Drive the implementation of advanced modelling algorithms (e.g., classification, regression, clustering, NLP, image analysis, graph theory, generative AI) to generate actionable business insights.
- Mentor AI scientists, plan and supervise technical work, and collaborate with stakeholders.
- Work within an agile framework and in multi-functional teams to align AI solutions with business goals.
- Engage internal stakeholders and external partners for the successful delivery of AI solutions.
- Continuously monitor and optimize AI models to improve accuracy and efficiency (scalable, reliable, and well-maintained).
- Document processes, models, and key learnings, and contribute to building internal AI capabilities.
- Ensure AI models adhere to ethical standards, privacy regulations, and fairness guidelines.

Essential Skills/Experience:
- Bachelor's in operations research, mathematics, computer science, or a related quantitative field.
- Advanced expertise in Python and familiarity with database systems (e.g., SQL, NoSQL, Graph).
- Proven proficiency in at least 3 of the following domains:
  - Generative AI: advanced expertise working with LLMs/transformer models, AWS Bedrock, SageMaker, LangChain
  - Computer Vision: image classification and object detection
  - MLOps: putting models into production in the AWS ecosystem
  - Optimization: production scheduling, planning
  - Traditional ML: time series analysis, unsupervised anomaly detection, analysis of high-dimensional data
- Proficiency in ML libraries: sklearn, pandas, TensorFlow/PyTorch.
- Experience productionizing ML/GenAI services and working with complex datasets.
- Strong understanding of software development, algorithms, optimization, and scaling.
- Excellent communication and business analysis skills.

Desirable Skills/Experience:
- Master's or PhD in a relevant quantitative field.
- Cloud engineering experience (AWS cloud services)
- Snowflake
- Software development experience (e.g., React JS, Node JS)

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we leverage technology to impact patients and ultimately save lives.
Our global organization is purpose-led, ensuring that we can fulfill our mission to push the boundaries of science and discover life-changing medicines. We take pride in working close to the cause, opening the locks to save lives, ultimately making a massive difference to the outside world. Here you'll find a dynamic environment where innovation thrives and diverse minds work inclusively together. Ready to make a meaningful impact? Apply now and be part of our journey to revolutionize healthcare!

Date Posted: 27-Jun-2025
Closing Date: 09-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
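The "Traditional ML" domain above lists unsupervised anomaly detection among the areas of proficiency. As a minimal illustration of the idea (not the posting's own method), the sketch below flags outliers with a simple z-score rule; the data and threshold are arbitrary demonstration values.

```python
# Illustrative sketch of unsupervised anomaly detection using a z-score rule.
# Real pipelines would use richer detectors (isolation forests, autoencoders);
# the threshold and readings here are assumptions chosen for demonstration.
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    std = statistics.stdev(values)
    return [x for x in values if abs(x - mean) / std > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 25.0]  # one obvious outlier
print(zscore_anomalies(readings, threshold=2.0))  # → [25.0]
```

The same interface extends naturally to streaming data by maintaining running estimates of the mean and standard deviation.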
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: AI/ML Engineer
Location: Bengaluru, India
Experience: 6 months - 2 years

Company Overview
IAI Solutions operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains.

Position Summary
We are looking for an AI/ML Engineer with a strong background in Python, Flask, FastAPI, and Object-Oriented Programming (OOP). The ideal candidate should have significant experience in prompt engineering, open-source model fine-tuning, and the HuggingFace libraries. Additionally, expertise in working with cloud platforms such as AWS SageMaker or similar services for training AI models is essential. Priority will be given to candidates with a research background, particularly those who have successfully fine-tuned and deployed AI models in real-world applications.

Key Responsibilities
- Develop, fine-tune, and deploy AI models using Python and Flask/FastAPI frameworks.
- Apply prompt engineering techniques to optimize model outputs and improve accuracy.
- Utilize HuggingFace libraries and other ML tools to build and fine-tune state-of-the-art models.
- Work on cloud platforms like AWS SageMaker or equivalent to train and deploy AI models efficiently.
- Collaborate with research teams to translate cutting-edge AI research into scalable solutions.
- Apply object-oriented programming (OOP) principles and problem-solving strategies in developing AI solutions.
- Stay updated with the latest advancements in AI/ML and integrate new techniques into ongoing projects.
- Document and share findings, best practices, and solutions across the engineering team.

An Ideal Candidate Will Have
- Strong proficiency in Python and Flask/FastAPI.
- Experience in prompt engineering and fine-tuning AI models.
- Extensive experience with HuggingFace libraries and similar AI/ML tools.
- Strong experience in AI agentic architecture.
- Hands-on experience with cloud platforms such as AWS SageMaker for training and deploying models.
- Proficiency in databases like MongoDB or PostgreSQL, as well as vector databases such as FAISS, Qdrant, or Elasticsearch.
- Hands-on experience with Docker and Git for version control.
- Background in AI/ML research, with a preference for candidates from research institutes.
- Demonstrated experience in training and deploying machine learning models in real-world applications.
- Solid understanding of object-oriented programming and problem-solving skills.
- Strong analytical skills and the ability to work independently or in a team environment.
- Excellent communication skills, with the ability to present complex technical concepts to non-technical stakeholders.

Must-Have Skills
- Python
- Object-Oriented Programming (OOP)
- Prompt engineering
- HuggingFace libraries and similar AI/ML tools
- Open-source model fine-tuning
- Agentic AI frameworks such as LangGraph and CrewAI
- Docker and Git for version control
- Databases like MongoDB or PostgreSQL, as well as vector databases such as FAISS, Qdrant, or Elasticsearch

Good To Have
- Deep learning and machine learning
- AWS SageMaker or similar services for training AI models
- Previous experience in academic or industrial research, with published work in AI/ML
- Proven track record of successful AI model deployments and optimizations

Perks & Benefits
- Work on groundbreaking AI/ML projects in a collaborative and innovative environment.
- Access to state-of-the-art tools and cloud platforms.
- Opportunities for professional development and continuous learning.
- Competitive salary.

(ref:hirist.tech)
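Prompt engineering, one of the must-have skills above, often comes down to careful prompt assembly. As a hedged illustration (the template, instruction text, and example pairs are all invented for demonstration, not tied to any particular framework), a few-shot prompt can be built like this:

```python
# Minimal sketch of few-shot prompt assembly, a common prompt-engineering
# technique. The instruction and Q/A examples are illustrative assumptions.

def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, worked examples, and the new query
    into a single prompt string for an LLM."""
    lines = [instruction, ""]
    for q, a in examples:
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
        lines.append("")
    lines.append(f"Q: {query}")
    lines.append("A:")  # leave the answer slot open for the model
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Answer with a single word.",
    [("Capital of France?", "Paris"), ("Capital of Japan?", "Tokyo")],
    "Capital of India?",
)
print(prompt)
```

In practice the resulting string would be sent to a hosted or fine-tuned model; the worked examples steer the output format more reliably than the instruction alone.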
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon

You want more out of a career. A place to share your ideas freely, even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love: driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing...

Join Verizon as we continue to grow our industry-leading network to improve the ways people, businesses, and things connect. We are looking for an experienced, talented, and motivated AI & ML Engineer to lead AI industrialization for Verizon. You will also serve as a subject matter expert on the latest industry knowledge to improve the Home product and solutions and/or processes related to Machine Learning, Deep Learning, Responsible AI, GenAI, Natural Language Processing, Computer Vision, and other AI practices.

- Deploying machine learning models in on-prem, cloud, and Kubernetes environments.
- Driving data-derived insights across the business domain by developing advanced statistical models, machine learning algorithms, and computational algorithms based on business initiatives.
- Creating and implementing data and ML pipelines for model inference, both in real time and in batches.
- Architecting, designing, and implementing large-scale AI/ML systems in a production environment.
- Monitoring the performance of data pipelines and making improvements as necessary.

What we're looking for...

You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership.
You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and can interact with various partners and multi-functional teams to implement data science-driven business solutions.

You'll Need To Have
- Bachelor's degree with four or more years of relevant work experience.
- Expertise in advanced analytics/predictive modelling in a consulting role.
- Experience with all phases of end-to-end analytics projects.
- Hands-on programming expertise in Python (with libraries like NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch) and R (for specific data analysis tasks).
- Knowledge of machine learning algorithms: linear regression, logistic regression, decision trees, random forests, support vector machines (SVMs), neural networks (deep learning), Bayesian networks.
- Data engineering: data cleaning and preprocessing, feature engineering, data transformation, data visualization.
- Cloud platforms: AWS SageMaker, Azure Machine Learning, Cloud AI Platform.

Even better if you have one or more of the following:
- Advanced degree in Computer Science, Data Science, Machine Learning, or a related field.
- Knowledge of the Home domain, with key areas like smart home, digital security, and wellbeing.
- Experience with stream-processing systems: Spark Streaming, Storm, etc.

#TPDNONCDIO

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Locations: Hyderabad, India; Chennai, India
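Logistic regression, the second algorithm in the list above, is simple enough to sketch from scratch. This is an illustrative from-first-principles version (in practice Scikit-learn's `LogisticRegression` would be used); the toy data, learning rate, and epoch count are arbitrary assumptions.

```python
# From-scratch sketch of logistic regression on 1-D data via batch gradient
# descent on the log loss. Hyperparameters and data are demonstration values.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit weight w and bias b by minimizing average log loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # derivative of log loss w.r.t. logit
            grad_w += err * x / n
            grad_b += err / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy separable data: label is 1 when x > 2.5.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
predict = lambda x: 1 if sigmoid(w * x + b) >= 0.5 else 0
print([predict(x) for x in xs])  # → [0, 0, 0, 1, 1, 1]
```

The learned decision boundary (where `w * x + b = 0`) lands near x = 2.5, matching how the toy labels were generated.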
Posted 1 month ago
0 years
10 - 12 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
Operating at the forefront of cloud analytics, big-data platform engineering, and enterprise AI, our teams design mission-critical data infrastructure for global clients across finance, retail, telecom, and emerging tech. We build distributed ingestion pipelines on Azure & Databricks, unlock real-time insights with Spark/Kafka, and automate delivery through modern DevOps so businesses can act on high-fidelity data, fast.

Role & Responsibilities
- Engineer robust data pipelines: build scalable batch & streaming workflows with Apache Spark, Kafka, and Azure Data Factory/Databricks.
- Implement Delta Lakehouse layers: design bronze-silver-gold medallion architecture to guarantee data quality and lineage.
- Automate CI/CD for ingestion: create Git-based workflows, containerized builds, and automated testing to ship reliable code.
- Craft clean, test-driven Python: develop modular PySpark/Pandas services, enforce SOLID principles, and maintain git-versioned repos.
- Optimize performance & reliability: profile jobs, tune clusters, and ensure SLAs for throughput, latency, and cost.
- Collaborate in Agile squads: partner with engineers, analysts, and consultants to translate business questions into data solutions.

Skills & Qualifications
Must-Have
- 1-2 yrs hands-on with Apache Spark or Kafka and Python (PySpark/Pandas/Polars).
- Experience building Delta Lake / medallion architectures on Azure or Databricks.
- Proven ability to design event-driven pipelines and write unit/integration tests.
- Git-centric workflow knowledge plus CI/CD tooling (GitHub Actions, Azure DevOps).
Preferred
- Exposure to SQL/relational & NoSQL stores and hybrid lakehouse integrations.
- STEM/computer-science degree or equivalent foundation in algorithms and OOP.
Benefits & Culture Highlights
- Flexible, remote-first teams: outcome-driven culture with quarterly hackathons and dedicated learning budgets.
- Growth runway: clear promotion paths from Associate to Senior Engineer, backed by certified Azure & Databricks training.
- Inclusive collaboration: small, empowered Agile squads that value knowledge-sharing, mentorship, and transparent feedback.
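The bronze-silver-gold medallion architecture mentioned in the responsibilities can be sketched in miniature. Real implementations store each layer as Delta Lake tables and transform them with Spark; here, plain lists of dicts and pure-Python functions stand in for both, purely to show how each layer refines the previous one. All records and field names are invented for illustration.

```python
# Hedged sketch of medallion layering: bronze (raw) -> silver (clean, typed,
# deduplicated) -> gold (business aggregate). Data is an assumption.

raw_events = [  # bronze: raw ingested records, kept exactly as received
    {"id": "1", "amount": " 10.5 ", "country": "in"},
    {"id": "2", "amount": "oops", "country": "IN"},    # malformed amount
    {"id": "1", "amount": " 10.5 ", "country": "in"},  # duplicate of id 1
]

def to_silver(bronze):
    """Silver layer: validated, typed, deduplicated records."""
    seen, out = set(), []
    for r in bronze:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine malformed rows
        if r["id"] in seen:
            continue
        seen.add(r["id"])
        out.append({"id": r["id"], "amount": amount,
                    "country": r["country"].upper()})
    return out

def to_gold(silver):
    """Gold layer: business-level aggregate (revenue per country)."""
    totals = {}
    for r in silver:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(raw_events)
gold = to_gold(silver)
print(gold)  # → {'IN': 10.5}
```

The key property being illustrated is that lineage is preserved: bronze is never mutated, so silver and gold can always be rebuilt from it.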
Posted 1 month ago
6.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
Operating at the intersection of Artificial Intelligence, Cloud Infrastructure, and Enterprise SaaS, we create data-driven products that power decision-making for Fortune 500 companies and high-growth tech firms. Our multidisciplinary teams ship production-grade generative-AI and Retrieval-Augmented Generation (RAG) solutions that transform telecom, finance, retail, and healthcare workflows without compromising on scale, security, or speed.

Role & Responsibilities
- Build & ship LLM/RAG solutions: design, train, and productionize advanced ML and generative-AI models (GPT-family, T5) that unlock new product capabilities.
- Own data architecture: craft schemas, ETL/ELT pipelines, and governance processes to guarantee high-quality, compliant training data on AWS.
- End-to-end MLOps: implement CI/CD, observability, and automated testing (Robot Framework, JMeter, XRAY) for reliable model releases.
- Optimize retrieval systems: engineer vector indices, semantic search, and knowledge-graph integrations that deliver low-latency, high-relevance results.
- Cross-functional leadership: translate business problems into measurable ML solutions, mentor junior scientists, and drive sprint ceremonies.
- Documentation & knowledge-sharing: publish best practices and lead internal workshops to scale AI literacy across the organization.

Skills & Qualifications
- Must-Have – Technical Depth: 6+ years building ML pipelines in Python; expert in feature engineering, evaluation, and AWS services (SageMaker, Bedrock, Lambda).
- Must-Have – Generative AI & RAG: proven track record shipping LLM apps with LangChain or similar, vector databases, and synthetic-data augmentation.
- Must-Have – Data Governance: hands-on experience with metadata, lineage, data cataloging, and knowledge-graph design (RDF/OWL/SPARQL).
- Must-Have – MLOps & QA: fluency in containerization, CI/CD, and performance testing; ability to embed automation within GitLab-based workflows.
- Preferred – Domain Expertise: background in telecom or large-scale B2B platforms where NLP and retrieval quality are mission-critical.
- Preferred – Full-Stack & Scripting: familiarity with Angular or modern JS for rapid prototyping, plus shell scripting for orchestration.

Benefits & Culture Highlights
- High-impact ownership: green-field autonomy to lead flagship generative-AI initiatives used by millions.
- Flex-first workplace: hybrid schedule, generous learning stipend, and dedicated cloud credits for experimentation.
- Inclusive, data-driven culture: celebrate research publications, OSS contributions, and diverse perspectives while solving hard problems together.
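The retrieval step of the RAG solutions described in this role can be shown in miniature. Production systems use learned embeddings and a vector index such as FAISS; in this hedged sketch, bag-of-words vectors and brute-force cosine similarity stand in for both, and the sample documents and query are invented, purely to illustrate the retrieve-then-generate flow.

```python
# Toy retrieval step of a RAG pipeline: embed, score by cosine similarity,
# return top-k context documents. Embeddings here are simple term-frequency
# vectors, an assumption standing in for learned embeddings.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    scored = sorted(docs, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return scored[:k]

docs = [
    "Delta Lake stores tables on cloud object storage",
    "Support tickets are triaged by severity and product area",
    "SPARQL queries a knowledge graph of RDF triples",
]
context = retrieve("how do I query the knowledge graph", docs)
print(context[0])
# The retrieved context would then be spliced into the LLM prompt.
```

Swapping `embed` for a real embedding model and `retrieve` for a vector-index lookup turns this skeleton into the low-latency semantic search the posting describes.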
Posted 1 month ago
10.0 years
0 Lacs
Greater Kolkata Area
On-site
Join our Team

About this opportunity:
We are seeking a highly skilled and motivated AI Engineer to join our growing AI/ML team. The ideal candidate will be responsible for developing, deploying, and scaling machine learning and GenAI (LLM-based) solutions across business domains. You will work closely with data scientists, data engineers, and business stakeholders to turn innovative ideas into impactful solutions using state-of-the-art cloud technologies and MLOps practices.

What you will do:
- Develop, train, test, and deploy machine learning and GenAI LLM models.
- Collect, clean, and preprocess large-scale datasets for AI/ML training and evaluation.
- Collaborate with cross-functional teams to understand business needs and translate them into AI solutions.
- Design and implement scalable AI services and pipelines using Python and cloud technologies (e.g., Azure, AWS).
- Continuously improve model performance through tuning, optimization, and retraining.
- Apply MLOps practices, using infrastructure as code (IaC) to deploy models into production and industrialise the business solution.

The skills you bring:
- Strong expertise in AWS services: Glue, SageMaker, Lambda, CloudWatch, S3, IAM, etc.
- Solid programming skills in Python and experience with PySpark for large-scale data processing.
- Experience with DevOps/MLOps tools such as Azure DevOps and GitHub Actions.

Experience & Education:
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related field.
- 5–10 years of experience in ML/AI model development and deployment.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Kolkata
Req ID: 769126
Posted 1 month ago
0 years
0 Lacs
Borivali, Maharashtra, India
On-site
Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.
Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About The Team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony.
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Basic Qualifications
- Experience in cloud architecture and implementation
- Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience
- Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment
- Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts
- Foundational knowledge of data modeling principles and statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets
- Experience in mentoring junior team members and guiding them on machine learning and data modeling applications

Preferred Qualifications
- AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation)
- AWS Professional-level certifications (e.g., Machine Learning Specialty, Machine Learning Engineer Associate, Solutions Architect Professional) preferred
- Experience with automation and scripting (e.g., Terraform, Python)
- Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
- Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences
- Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems
- Experience in developing and deploying end-to-end machine learning and deep learning solutions

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: AWS ProServe IN - Karnataka
Job ID: A2941027
Posted 1 month ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Head of AI Cybersecurity

Key Responsibilities:
- Define and lead Protectt.ai’s AI cybersecurity strategy, setting vision, priorities, and the execution roadmap.
- Build and manage the AI security function, including team development and cross-functional alignment.
- Identify and mitigate threats across the AI/ML lifecycle; implement threat modeling, red teaming, and secure development practices.
- Collaborate with AI Research and Engineering teams to ensure secure model development, deployment, and monitoring.
- Establish internal governance and external trust frameworks covering transparency, fairness, and responsible AI use.
- Drive adoption of privacy-preserving technologies and lead security assessments, audits, and incident response for AI systems.
- Represent the company at industry forums, standards bodies, and customer security engagements.
- Continuously monitor and act on emerging AI security threats and research to maintain industry leadership.

Qualifications:
- Bachelor’s or Master’s in Computer Science, Cybersecurity, AI, or a related field (PhD preferred).
- 8+ years in cybersecurity, including 3+ years focused on AI/ML systems or adversarial machine learning.
- Deep understanding of risks across AI/ML systems, including model threats, data vulnerabilities, and privacy risks.
- Proven leadership in secure architecture design and large-scale AI threat mitigation.
- Hands-on experience with tools such as IBM ART, CleverHans, SecML, or PrivacyRaven.
- Familiarity with security standards (NIST AI RMF, ISO/IEC 42001, OWASP Top 10 for LLMs).
- Strong communication skills across technical and executive levels.

Preferred Skills:
- Experience with cloud ML platforms (AWS SageMaker, Vertex AI, Azure ML).
- Knowledge of adversarial ML defense and secure model deployment.
- Contributions to AI security research or open-source tools.
- Understanding of regulations (EU AI Act, GDPR) and responsible AI frameworks.
- Experience working with enterprise security, compliance, and audit functions.
If the role interests you, feel free to reply here or email your updated CV to nidhi.parikh@antal.com
Posted 1 month ago
4.0 years
0 Lacs
Chandigarh
On-site
Job Summary
We are looking for a skilled and motivated AI Engineer to join our team. The ideal candidate will have a strong foundation in Python, machine learning frameworks, and data science libraries. You will be responsible for developing, training, and deploying cutting-edge machine learning models, including applications in NLP, computer vision, and other AI domains.

Key Responsibilities
- Develop and deploy machine learning models into production environments
- Train and fine-tune models using large and diverse datasets
- Implement AI techniques such as natural language processing (NLP), computer vision, and deep learning
- Collaborate with data scientists, ML engineers, and software developers to optimize model performance and scalability
- Utilize cloud-based AI services for scalable deployment and model management

Required Skills & Qualifications
- 4+ years' experience with Python and machine learning frameworks such as TensorFlow or PyTorch
- 2+ years' experience with data science libraries such as NumPy, Pandas, and Scikit-learn
- 2+ years' experience with supervised, unsupervised, and deep learning techniques
- Familiarity with cloud AI services (e.g., AWS SageMaker, Google AI Platform, Azure ML)
- Strong problem-solving skills and the ability to work in a fast-paced environment

Preferred Qualifications
- Experience with model monitoring and performance tuning in production
- Exposure to MLOps tools and CI/CD for ML pipelines
- Understanding of model explainability and ethical AI practices

Why Join Us
Build with Purpose: Work on impactful, high-scale products that solve real problems using cutting-edge technologies.
Tech-First Culture: Join a team where engineering is at the core — we prioritize clean code, scalability, automation, and continuous learning.
Freedom to Innovate: You’ll have ownership from day one — with room to experiment, influence architecture, and bring your ideas to life.
Collaborate with the Best: Work alongside passionate engineers, product thinkers, and designers who value clarity, speed, and technical excellence.

Paladin Tech is an equal opportunity employer. We are committed to creating an inclusive and diverse workplace and welcome candidates of all backgrounds and identities.

Job Types: Full-time, Permanent
Schedule: Day shift
Work Location: In person
Posted 1 month ago
0 years
12 - 18 Lacs
Hyderābād
Remote
Job Description:

About the Role: Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental building blocks for feature engineering, feature serving, model deployment, and model inference in both batch and online modes.

What you’ll do here
- Design and build backend components of our MLOps platform on AWS.
- Collaborate with geographically distributed cross-functional teams.
- Participate in an on-call rotation with the rest of the team to handle production incidents.

What you’ll need to succeed

Must-have skills:
- Experience with web development frameworks such as Flask, Django, or FastAPI.
- Experience working with WSGI and ASGI web servers such as Gunicorn and Uvicorn.
- Experience with concurrent programming designs such as AsyncIO.
- Experience with unit and functional testing frameworks.
- Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
- Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills:
- Experience with Apache Kafka and developing Kafka client applications in Python.
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Experience with big data processing frameworks, preferably Apache Spark.
- Experience with containers (Docker) and container platforms such as AWS ECS or AWS EKS.
- Experience with DevOps and IaC tools such as Terraform and Jenkins.
- Experience with various Python packaging options such as Wheel, PEX, or Conda.
- Experience with metaprogramming techniques in Python.

Skills Required: Python development (Flask, Django, or FastAPI); WSGI and ASGI web servers (Gunicorn, Uvicorn, etc.); AWS

Job Type: Contractual / Temporary
Contract length: 12 months
Pay: ₹100,000.00 - ₹150,000.00 per month
Location Type: Hybrid work
Schedule: Day shift
Work Location: Hybrid remote in Hyderabad, Telangana
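A minimal sketch of the AsyncIO-style concurrency this role calls for when serving features in online mode (standard library only; the feature-store lookup is a made-up stub, not part of the posting's actual platform):

```python
import asyncio
import time

# Hypothetical feature-store lookup, stubbed with a sleep to simulate I/O.
async def fetch_features(entity_id: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for a network call
    return {"entity_id": entity_id, "feature_x": 0.42}

async def serve_batch(entity_ids: list) -> list:
    # AsyncIO lets the I/O-bound lookups overlap instead of running serially.
    return await asyncio.gather(*(fetch_features(e) for e in entity_ids))

start = time.perf_counter()
rows = asyncio.run(serve_batch(["a", "b", "c"]))
elapsed = time.perf_counter() - start
```

Because the three simulated lookups overlap, the batch completes in roughly the time of one lookup rather than three; the same pattern underlies async endpoints served by an ASGI server such as Uvicorn.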
Posted 1 month ago
3.0 years
3 - 4 Lacs
Mohali
Remote
Job Title: Pre-Sales Technical Business Analyst (AI/ML & MERN Stack)
Location: [Your Location / Remote]
Job Type: Full-time | Pre-Sales | Technical Consulting

About the Role: We are seeking a dynamic Pre-Sales Technical Business Analyst with a strong foundation in AI/ML solutions, MERN stack technologies, and API integration. This role bridges the gap between clients’ business requirements and our technical solutions, playing a pivotal role in shaping proposals, leading product demos, and translating client needs into technical documentation and strategic solutions.

Key Responsibilities:
Client Engagement: Collaborate with the sales team to understand client requirements, pain points, and objectives. Participate in discovery calls, solution walkthroughs, and RFP/RFI responses.
Solution Design & Technical Analysis: Analyze and document business needs, converting them into detailed technical requirements. Propose architectural solutions using AI/ML models and the MERN stack (MongoDB, Express.js, React.js, Node.js). Provide input on data pipelines, model training, and AI workflows where needed.
Technical Presentations & Demos: Prepare and deliver compelling demos and presentations for clients. Act as a technical expert during pre-sales discussions to communicate the value of proposed solutions.
Documentation & Proposal Support: Draft technical sections of proposals, SoWs, and functional specs. Create user flows, diagrams, and system interaction documents.
Collaboration: Work closely with engineering, product, and delivery teams to ensure alignment between business goals and technical feasibility. Conduct feasibility analysis and risk assessments on proposed features or integrations.

Required Skills & Experience:
- 3+ years in a Business Analyst or Pre-Sales Technical Consultant role.
- Proven experience with AI/ML workflows (understanding of the ML lifecycle, model deployment, and data preparation).
- Strong technical knowledge of the MERN stack, including RESTful APIs, database schema design, and frontend/backend integration.
- Solid understanding of API design, third-party integrations, and system interoperability.
- Ability to translate complex technical concepts into simple business language.
- Hands-on experience with documentation tools like Swagger/Postman for API analysis.
- Proficiency in writing user stories, business cases, and technical specifications.

Preferred Qualifications:
- Exposure to cloud platforms (AWS, Azure, GCP) and ML platforms (SageMaker, Vertex AI, etc.).
- Experience with Agile/Scrum methodologies.
- Familiarity with AI use cases like recommendation systems, NLP, and predictive analytics.
- Experience with data visualization tools or BI platforms is a plus.

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹35,000.00 per month
Schedule: Day shift, fixed shift, Monday to Friday
Work Location: In person
Posted 1 month ago
5.0 years
2 - 10 Lacs
Calcutta
On-site
Kolkata, West Bengal, India +1 more
Job ID 769126

Join our Team

About this opportunity: We are seeking a highly skilled and motivated AI Engineer to join our growing AI/ML team. The ideal candidate will be responsible for developing, deploying, and scaling machine learning and GenAI (LLM-based) solutions across business domains. You will work closely with data scientists, data engineers, and business stakeholders to turn innovative ideas into impactful solutions using state-of-the-art cloud technologies and MLOps practices.

What you will do:
- Develop, train, test, and deploy machine learning and GenAI (LLM) models.
- Collect, clean, and preprocess large-scale datasets for AI/ML training and evaluation.
- Collaborate with cross-functional teams to understand business needs and translate them into AI solutions.
- Design and implement scalable AI services and pipelines using Python and cloud technologies (e.g., Azure, AWS).
- Continuously improve model performance through tuning, optimization, and retraining.
- Apply MLOps practices, using IaC to deploy models into production and industrialise business solutions.

The skills you bring:
- Strong expertise in AWS services: Glue, SageMaker, Lambda, CloudWatch, S3, IAM, etc.
- Solid programming skills in Python and experience with PySpark for large-scale data processing.
- Experience with DevOps/MLOps tools such as Azure DevOps and GitHub Actions.

Experience & Education: Bachelor’s or Master’s in Computer Science, Data Science, AI, or a related field. 5–10 years of experience in ML/AI model development and deployment.

Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible. To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
Posted 1 month ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Minimum of 4+ years of experience in AI-based application development.
- Fine-tune pre-existing models to improve performance and accuracy.
- Experience with TensorFlow, PyTorch, Scikit-learn, or similar ML frameworks, and familiarity with APIs like OpenAI or Vertex AI.
- Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT).
- Implement frameworks such as LangChain, Anthropic's Constitutional AI, OpenAI's APIs, Hugging Face, and prompt engineering techniques to build robust and scalable AI applications.
- Evaluate and analyze RAG solutions and utilise best-in-class LLMs to define customer experience solutions (fine-tune large language models (LLMs)).
- Architect and develop advanced generative AI solutions leveraging state-of-the-art large language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others.
- Strong understanding of and experience with open-source multimodal LLMs to customize and create solutions.
- Explore and implement cutting-edge techniques like few-shot learning, reinforcement learning, multi-task learning, and transfer learning for AI model training and fine-tuning.
- Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib.
- Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques.
- Proficiency in Python, with the ability to get hands-on with coding at a deep level.
- Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems.
- Ability to write optimized, high-performing queries against relational databases (e.g., MySQL, PostgreSQL) or non-relational databases (e.g., MongoDB or Cassandra).
- Enthusiasm for continuous learning and professional development in AI and related technologies.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Knowledge of cloud services like AWS, Google Cloud, or Azure.
- Proficiency with version control systems, especially Git.
- Familiarity with data pre-processing techniques and pipeline development for AI model training.
- Experience deploying models using Docker and Kubernetes.
- Experience with AWS Bedrock and SageMaker is a plus.
- Strong problem-solving skills, with the ability to translate complex business problems into AI solutions.
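As a rough sketch of the retrieve-then-prompt pattern behind the RAG work this posting mentions (pure Python, with bag-of-words similarity standing in for a real embedding model and vector database; the documents and names are invented for the example):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. A real system would call a model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list, k: int = 1) -> list:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Refunds are processed within five business days.",
    "Our office is closed on public holidays.",
    "Premium support is available around the clock.",
]
context = retrieve("how long do refunds take", docs)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: how long do refunds take"
```

The retrieved context is then prepended to the user question before the LLM call; in production the `embed`/`retrieve` steps would be handled by an embedding model plus a vector store rather than word counts.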
Posted 1 month ago
6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
As a trusted global transformation partner, Welocalize accelerates the global business journey by enabling brands and companies to reach, engage, and grow international audiences. Welocalize delivers multilingual content transformation services in translation, localization, and adaptation for over 250 languages with a growing network of over 400,000 in-country linguistic resources. Driving innovation in language services, Welocalize delivers high-quality training data transformation solutions for NLP-enabled machine learning by blending technology and human intelligence to collect, annotate, and evaluate all content types. Our team works across locations in North America, Europe, and Asia, serving our global clients in the markets that matter to them. www.welocalize.com

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Job Reference:

Role Summary: The Machine Learning R&D Engineer is responsible for the design, development, and implementation of machine learning solutions to serve our organization. This includes ownership or oversight of projects from conception to deployment with appropriate cloud services. The role also includes responsibility for following best practices with which to optimize and measure the performance of our models and algorithms against business goals.

Tasks and Responsibilities:
- Machine learning model research and development: design, develop, and deploy machine learning models for localization and business workflow processes, including machine translation and quality assurance.
- Utilize appropriate metrics to evaluate model performance and iterate accordingly
- Ensure code quality: write robust, well-documented, and structured Python code
- Define and design solutions to machine learning problems
- Work closely with cross-functional teams to understand business requirements and design solutions that meet those needs
- Explain complex technical concepts clearly to non-technical stakeholders
- Mentorship: guide junior team members and contribute to a collaborative team environment

Success indicators of a Machine Learning R&D Engineer:
- Effective model development: success is evident when the models developed are accurate, efficient, and aligned with project requirements
- Positive team collaboration: demonstrated ability to collaborate effectively with various teams and stakeholders, contributing positively to project outcomes
- Continuous learning and improvement: a commitment to continuous learning and applying new techniques to improve existing models and processes
- Clear communication: ability to articulate findings, challenges, and insights to a range of stakeholders, ensuring understanding and appropriate action

Skills and Knowledge:
- Excellent, in-depth understanding of machine learning concepts and methodologies, including supervised and unsupervised learning, deep learning, and classification
- Hands-on experience with natural language processing (NLP) techniques and tools
- Ability to write robust, production-grade code in Python
- Excellent communication and documentation skills; able to explain complex technical concepts to non-technical stakeholders
- Experience taking ownership of projects from conception to deployment; ability to transform business needs into solutions

Nice to have:
- Experience using Large Language Models in production
- High proficiency with machine learning frameworks such as TensorFlow, PyTorch, and Scikit-learn
- Hands-on experience with AWS technologies including EC2, S3, and other deployment strategies
- Experience with SNS and SageMaker a plus
- Experience with ML management technologies and deployment techniques, such as AWS ML offerings, Docker, GPU deployments, etc.

Education and Experience:
- Bachelor’s degree in Computer Science, AI/ML, or a related field (Master’s/PhD preferred)
- 6+ years of experience in AI/ML research and development
Posted 1 month ago
15.0 years
0 Lacs
Nagpur, Maharashtra, India
On-site
Job description

Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time

Job Summary: We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You’ll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.

Key Responsibilities:
- Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
- Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
- Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs.
- Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
- Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
- Mentor junior team members and review code, models, and system designs for robustness and maintainability.
- Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
- Optimize model and API performance, ensuring high availability, security, and scalability in production environments.

Core Skills & Experience:
- Strong Python programming skills with 5+ years of applied ML/AI experience.
- Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
- Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
- Proficiency in REST API design using FastAPI and in securing APIs in production environments.
- Deep understanding of MySQL (query performance, schema design, transactions).
- Hands-on experience with vector databases and embeddings for search, retrieval, and recommendation systems.
- Strong foundation in software engineering practices: version control (Git), testing, CI/CD.

Preferred/Bonus Experience:
- Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
- Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
- Experience with Docker, Kubernetes, and container orchestration.
- Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
- Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.

What We Offer:
- The opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
- High autonomy and influence in architecting real-world AI solutions.
- A dynamic and collaborative environment focused on continuous learning and innovation.
Posted 1 month ago
5.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
Job Type: Part-Time (Hourly Basis)

Job Summary: We are seeking a highly skilled and motivated AI Trainer with expertise in Artificial Intelligence, Natural Language Processing (NLP), Generative AI, and AWS AI Services. The trainer will be responsible for delivering structured training sessions to students, helping them gain conceptual and practical knowledge of modern AI concepts and tools.

Key Responsibilities:
1. Training Delivery
Design and deliver comprehensive training modules on:
o Introduction to AI (history, key concepts, applications, ethics)
o AWS AI services (e.g., Amazon SageMaker, Comprehend, Rekognition, Lex, Polly)
o Natural Language Processing (NLP) using AWS and open-source libraries
o AI search techniques and rule-based systems
o Introduction to Generative AI (foundation models, prompt engineering, LLMs on AWS)
Conduct interactive lectures, hands-on labs, and assessments. Adapt training delivery to suit learners of varying technical backgrounds.

2. Hands-on Projects and Labs
Guide learners through real-world projects using:
o AWS AI/ML tools (e.g., SageMaker, Bedrock, Amazon Kendra)
o NLP libraries (e.g., NLTK, spaCy, Hugging Face Transformers)
o Search and rule-based AI techniques
Support practical implementation and debugging of projects.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience: Proven experience (2–5 years) teaching or working with:
o AI and ML frameworks
o AWS cloud services
o NLP and Generative AI tools
Certification in AWS (e.g., AWS Certified Machine Learning – Specialty) is a big plus.

Technical Skills:
- Proficiency in Python and related libraries (NumPy, Pandas, scikit-learn, etc.)
- Strong understanding of NLP (tokenization, entity recognition, sentiment analysis)
- Familiarity with rule-based systems and AI search algorithms (DFS, BFS, A*, etc.)
- Experience with LLMs, prompt engineering, and tools like Amazon Bedrock
- Cloud experience on the AWS AI/ML stack, including:
o Amazon SageMaker
o AWS Comprehend
o Amazon Lex and Polly
o AWS Bedrock (for generative AI)

Desirable:
- Prior experience in educational institutions or corporate training
- Contributions to open-source or AI research
- Knowledge of ethical AI, fairness, and bias mitigation techniques
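For the AI search portion of the curriculum this posting describes, a minimal illustration of the kind of algorithm covered (a breadth-first search over a small hand-made state graph; the graph and node names are invented for the example):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Return the shortest path (fewest edges) from start to goal, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# A toy state graph for a classroom exercise.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["F"],
    "E": ["F"],
}
path = bfs_path(graph, "A", "F")
```

BFS guarantees the path with the fewest edges; replacing the FIFO queue with a priority queue ordered by path cost plus a heuristic turns the same skeleton into A*.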
Posted 1 month ago
2.0 years
0 Lacs
India
Remote
BugRaid AI is reimagining how engineering teams handle incidents — with a powerful blend of AI agents, traditional machine learning, and modern observability pipelines. We’re seeking an AI Engineer who excels at the intersection of infrastructure and intelligence — someone who enjoys solving complex technical problems with practical models that perform well in real-world scenarios.

Location: Remote (Hyderabad / Bangalore candidates only)
Type: Full-time | ESOPs + Salary
Experience: 2+ years
Immediate joiners preferred

Responsibilities
- Impactful machine learning: Apply and scale models like Random Forest, DBSCAN, KNN, and GNNs to understand noisy logs, alerts, and metrics in real time and in batch.
- Agent intelligence: Develop lightweight reasoning agents that assist SREs in debugging, resolving, and predicting incidents.
- AI agent architecture: Design LLM-powered agents for logs, metrics, traces, and incident resolution.
- Prompt engineering & tooling: Create advanced function-calling and reasoning workflows for multi-step execution.
- Zero-data-retention architecture: All data is read-only, secure, and compliant (PII, GDPR, PCI DSS); your work resides within ephemeral AWS-native environments.
- Infrastructure-aware ML: Collaborate closely with our AWS stack (Lambda, ECS, Kinesis, S3, Bedrock) to ensure scalable, secure models that serve with low latency.
- Feedback loops & fine-tuning: Incorporate real-time signal feedback and model evaluation for precise incident response.
- Collaborative development: Partner with backend and infrastructure engineers to deploy models as microservices and REST endpoints.
- Data analysis: Evaluate quality, clean, and structure raw data for downstream processing; design scalable and accurate prediction algorithms; collaborate with engineering teams to transition analytical prototypes into production-ready systems; generate actionable insights to improve business operations.
Qualifications
- Bachelor's degree or equivalent experience in a quantitative field (Computer Science, Engineering, etc.)
- 2+ years of practical ML experience
- Experience with Random Forest, DBSCAN, KNN, and GNNs (Graph Neural Networks)
- Proficiency in Python and ML libraries such as scikit-learn, XGBoost, and PyTorch
- Experience with RLHF, LangChain, or open-source agentic libraries
- Comfort with AWS services (Lambda, ECS, Kinesis, S3, SageMaker, Bedrock)
- Experience in log analysis, anomaly detection, or observability systems is a big plus
- Strong debugging skills and systems-level thinking
- At least 1–2 years in quantitative analytics or data modeling
- Deep understanding of predictive modeling, machine learning, clustering, classification techniques, and algorithms
- Proficiency in a programming language (Python, JavaScript)
- Knowledge of Big Data frameworks and visualization tools (preferred)

Why Join BugRaid.AI
- We’re building a groundbreaking AI incident response platform.
- Remote-first team, open feedback loops, and a high-trust culture.
- Shareholding opportunities with meaningful equity and ownership of key AI pipeline components.
- Backed by real customers in our beta stage, we are addressing practical operational challenges.

Ready to help redefine how AI manages software failures? 📩 Send your resume or LinkedIn profile to manoj.bhamidipati@bugraid.ai or DM me.
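A toy illustration of the distance-based anomaly detection this role describes (k-nearest-neighbour scoring over two-dimensional metric points; the data and metric names are invented for the example, and a real pipeline would use a library such as scikit-learn on far higher-dimensional features):

```python
import math

def knn_anomaly_score(point, data, k=3):
    """Mean distance to the k nearest neighbours; larger means more anomalous."""
    dists = sorted(math.dist(point, other) for other in data if other != point)
    return sum(dists[:k]) / k

# Metric samples as (requests/sec, error rate %). Most cluster together; one does not.
samples = [(100, 1), (102, 1), (98, 2), (101, 1), (99, 2), (300, 25)]
scores = {p: knn_anomaly_score(p, samples) for p in samples}
outlier = max(scores, key=scores.get)
```

Points deep inside the cluster have small mean neighbour distances, while the stray point scores high, which is the intuition behind using KNN distances to flag anomalous alerts and metrics.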
Posted 1 month ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
🌩️ Senior / Lead AWS Cloud Engineer
📍 Location: Kochi, Pune, and Chennai
🕒 Experience: 8+ Years
💼 Type: Full-time

We are looking for a Senior or Lead AWS Cloud Engineer with a strong foundation in software development, cloud architecture, and DevOps practices. This is a hands-on technical leadership role, ideal for someone who thrives on building modern, scalable cloud-native systems.

🔧 Your Role and Responsibilities
- Design and develop scalable, secure, and reliable cloud-native applications on AWS
- Lead implementation of containerized environments using Kubernetes (EKS/OpenShift)
- Automate infrastructure using Terraform or AWS CDK
- Build and maintain CI/CD pipelines with GitLab CI, GitHub Actions, Jenkins, or ArgoCD
- Collaborate with cross-functional teams to ensure production-ready, high-quality solutions
- Mentor junior engineers and conduct code/architecture reviews
- Optimize performance and ensure observability using tools like Datadog

📚 Qualifications
- Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field
- AWS or other cloud certifications (e.g., Solutions Architect, DevOps Engineer) are a plus
- Excellent communication and leadership skills

✅ Must-Have Skills (5+ Years)
- Proficiency in modern programming languages: TypeScript, Python, Go
- Strong experience with AWS serverless (e.g., AWS Lambda)
- Deep understanding of AWS-managed databases: RDS, DynamoDB
- Hands-on experience with Kubernetes, AWS EKS/ECS, or OpenShift
- Proven CI/CD experience with tools like GitLab CI, GitHub, Jenkins, ArgoCD
- Familiarity with Git, Jira, Confluence

💡 Should-Have Skills (3+ Years)
- AWS networking: API Gateway
- AWS storage services: S3
- Exposure to AWS AI/ML (e.g., SageMaker, Bedrock, Amazon Q)
- IaC tools: Terraform, AWS CDK

🌟 Nice-to-Have Skills (1+ Year)
- Experience with Amazon Connect or other contact center solutions
- Use of HashiCorp Vault for secrets management
- Knowledge of Kafka, Amazon MSK
- Familiarity with multi-cloud (Azure, GCP)
- Experience with monitoring tools: Datadog, Dynatrace, New Relic
- Understanding of FinOps and cloud cost optimization
- Knowledge of SSO technologies: OAuth2, OpenID Connect, JWT, SAML
- Working knowledge of Linux and shell scripting

If you're passionate about modern cloud infrastructure, building developer-friendly systems, and leading engineering excellence — we’d love to hear from you! Apply with your resume to m.neethu@ssconsult.in
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role: Kore.AI & LLM Integration Specialist
Job Location: Chennai, Pune
Experience: 6–10 years
Domain: Telecom

Mandatory Skills:
- Kore.AI platform experience
- LLM (Large Language Model) integration and implementation expertise

Required Qualifications:
- Hands-on experience with Kore.AI, including bot development, NLP configuration, and integrations.
- Strong working knowledge of LLMs such as GPT, Claude, or similar, including fine-tuning, prompt engineering, and use in conversational AI workflows.
- Proficiency in Java and Spring Boot is a plus.
- Familiarity with AWS services (e.g., Lambda, S3, API Gateway, Bedrock, SageMaker) is a plus.
Posted 1 month ago
0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Minimum of (4+) years of experience in AI-based application development. Fine-tune pre-existing models to improve performance and accuracy. Experience with TensorFlow or PyTorch, Scikit-learn, or similar ML frameworks and familiarity with APIs like OpenAI or vertex AI Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT). Implement frameworks like LangChain, Anthropics Constitutional AI, OpenAIs, Hugging Face, and Prompt Engineering techniques to build robust and scalable AI applications. Evaluate and analyze RAG solution and Utilise the best-in-class LLM to define customer experience solutions (Fine tune Large Language models (LLM)). Architect and develop advanced generative AI solutions leveraging state-of-the-art language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others. Strong understanding and experience with open-source multimodal LLM models to customize and create solutions. Explore and implement cutting-edge techniques like Few-Shot Learning, Reinforcement Learning, Multi-Task Learning, and Transfer Learning for AI model training and fine-tuning. Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib. Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques. Proficiency in Python with the ability to get hands-on with coding at a deep level. Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems. Ability to write optimized and high-performing scripts on relational databases (e.g., MySQL, PostgreSQL) or non-relational database (e.g., MongoDB or Cassandra) Enthusiasm for continuous learning and professional developement in AI and leated technologies. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Knowledge of cloud services like AWS, Google Cloud, or Azure. 
Proficiency with version control systems, especially Git. Familiarity with data pre-processing techniques and pipeline development for AI model training. Experience with deploying models using Docker and Kubernetes. Experience with AWS Bedrock and SageMaker is a plus. Strong problem-solving skills with the ability to translate complex business problems into AI solutions.
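The retrieval half of the RAG solutions this posting mentions can be sketched without any LLM at all. Below is a minimal, hypothetical example using bag-of-words cosine similarity over an in-memory document list; in practice a framework like LangChain and a real embedding model would replace both helper functions. All documents, queries, and function names here are illustrative, not from the posting.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding': token counts (stand-in for a real encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "fine tune large language models for customer experience",
    "deploy containers with docker and kubernetes",
    "evaluate retrieval augmented generation pipelines",
]
print(retrieve("tune language models", docs, k=1))
```

In a full RAG pipeline, the retrieved passages would then be concatenated into the LLM prompt; evaluating which retriever and which LLM to pair is exactly the "evaluate and analyze RAG solutions" task the role describes.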
Posted 1 month ago
0.0 - 10.0 years
0 Lacs
Kolkata, West Bengal
On-site
Kolkata, West Bengal, India +1 more Job ID 769126 Join our Team About this opportunity: We are seeking a highly skilled and motivated AI Engineer to join our growing AI/ML team. The ideal candidate will be responsible for developing, deploying, and scaling machine learning and GenAI (LLM-based) solutions across business domains. You will work closely with data scientists, data engineers, and business stakeholders to turn innovative ideas into impactful solutions using state-of-the-art cloud technologies and MLOps practices. What you will do: Develop, train, test, and deploy machine learning and GenAI LLM models. Collect, clean, and preprocess large-scale datasets for AI/ML training and evaluation. Collaborate with cross-functional teams to understand business needs and translate them into AI solutions. Design and implement scalable AI services and pipelines using Python and cloud technologies (e.g., Azure, AWS). Continuously improve model performance through tuning, optimization, and retraining. Apply MLOps practices, using IaC to deploy models into production and industrialise the business solution. The skills you bring: Strong expertise in AWS services: Glue, SageMaker, Lambda, CloudWatch, S3, IAM, etc. Solid programming skills in Python and experience with PySpark for large-scale data processing. Experience with DevOps/MLOps tools such as Azure DevOps and GitHub Actions. Experience & Education: Bachelor's or Master's in Computer Science, Data Science, AI, or a related field. 5–10 years of experience in ML/AI model development and deployment. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?
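The collect/clean/preprocess step this posting describes can be illustrated in plain Python. This is a minimal sketch only: the field names (`id`, `amount`) and the min-max scaling choice are invented for illustration, and a production version would express the same dropna-and-scale logic as PySpark DataFrame operations running on Glue, as the posting suggests.

```python
def clean_records(rows):
    """Drop rows with missing values and min-max scale the numeric feature."""
    # Keep only complete rows (the dropna step).
    complete = [r for r in rows if r.get("id") is not None and r.get("amount") is not None]
    if not complete:
        return []
    amounts = [r["amount"] for r in complete]
    lo, hi = min(amounts), max(amounts)
    span = (hi - lo) or 1.0  # avoid division by zero when all values are equal
    # Scale each amount into [0, 1] (the feature-scaling step).
    return [
        {"id": r["id"], "amount_scaled": (r["amount"] - lo) / span}
        for r in complete
    ]

raw = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # incomplete row, dropped
    {"id": 3, "amount": 30.0},
]
print(clean_records(raw))
```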
Posted 1 month ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Job Title: AI/ML Engineer (Python + AWS + REST APIs) Department: Web Location: Indore Job Type: Full-time Experience: 3-5 years Notice Period: immediate joiners preferred Work Arrangement: On-site (Work from Office) Overview: Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs. Key Responsibilities: AI/ML Development: Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees. Build synthetic datasets using DCGANs for class balancing. Fine-tune pre-trained models for customized encryption logic. Implement explainable classification logic for model outputs. Validate model performance using custom metrics and datasets. API Development: Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls. Integrate APIs with AWS Lambda and Amazon API Gateway. AWS Integration: Deploy and manage AI models on Amazon SageMaker for training and real-time inference. Use AWS Lambda for serverless backend compute. Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL). Use AWS Cognito for secure user authentication and KMS for key management. Monitor job status via CloudWatch and enable secure, scalable API access. Required Skills & Experience: Experience with CLIP model fine-tuning. Familiarity with Docker, GitHub Actions, or CI/CD pipelines. Experience in data classification under compliance regimes (e.g., GDPR, HIPAA). Familiarity with multi-tenant SaaS design patterns.
Tools & Technologies: Python, PyTorch, TensorFlow, FastAPI, Flask, AWS (SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS), Git, Docker, Postgres, OpenCV, OpenSSL (ref:hirist.tech)
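The "explainable classification logic" plus "encryption trigger" flow described above can be sketched end to end in a few lines. Everything here is an assumption for illustration: the label names, the 0.5 confidence threshold, and the SHA-256 fingerprint standing in for a real KMS-backed encryption call; in the actual system the tags and scores would come from a fine-tuned CLIP model.

```python
import hashlib

def classify_sensitivity(tags, scores):
    """Rule-based sensitivity decision with a human-readable explanation.

    `tags`/`scores` stand in for CLIP zero-shot labels and confidences;
    the label set and 0.5 threshold are illustrative assumptions.
    """
    sensitive_labels = {"id_card", "medical_record", "face"}
    hits = [(t, s) for t, s in zip(tags, scores) if t in sensitive_labels and s >= 0.5]
    if hits:
        reason = ", ".join(f"{t}={s:.2f}" for t, s in hits)
        return "sensitive", f"flagged labels above threshold: {reason}"
    return "public", "no sensitive label scored above threshold"

def encryption_trigger(image_bytes, label):
    """Placeholder for the real encryption call: here we only fingerprint
    sensitive payloads; a real system would invoke KMS-backed encryption."""
    if label == "sensitive":
        return hashlib.sha256(image_bytes).hexdigest()
    return None

label, why = classify_sensitivity(["face", "tree"], [0.91, 0.40])
print(label, "|", why)
```

Returning the reason string alongside the label is one simple way to satisfy the explainability requirement: every encryption decision carries an audit trail of which labels tripped it.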
Posted 1 month ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
JOB_POSTING-3-71839 Job Description Role Title: Manager - Advanced Insights & Analytics (L09) Company Overview Synchrony (NYSE: SYF) is a premier consumer financial services company delivering one of the industry’s most complete digitally enabled product suites. Our experience, expertise and scale encompass a broad spectrum of industries including digital, health and wellness, retail, telecommunications, home, auto, outdoors, pet and more. We have recently been ranked #2 among India’s Best Companies to Work for by Great Place to Work. We were among the Top 50 India’s Best Workplaces in Building a Culture of Innovation for All by GPTW and Top 25 among Best Workplaces in BFSI by GPTW. We have also been recognized by Ambition Box Employee Choice Awards among the Top 20 Mid-Sized Companies, ranked #3 among Top Rated Companies for Women, and Top-Rated Financial Services Companies. Synchrony celebrates ~51% women diversity, 105+ people with disabilities, and ~50 veterans and veteran family members. We offer Flexibility and Choice for all employees and provide best-in-class employee benefits and programs that cater to work-life integration and overall well-being. We provide career advancement and upskilling opportunities, focusing on Advancing Diverse Talent to take up leadership roles. Organizational Overview The role is part of Collections & Recovery Analytics, supporting Collections, Recovery, and Servicing solutions. The team is focused on improving Collections, Recovery and Servicing Solutions performance by building BI solutions to support Operations Leaders. This includes key performance indicators for cardholders and accounts, as well as metrics to support strategy development and execution.
Role Summary/Purpose Design and develop business intelligence products using Tableau and perform detailed analysis on Collections & Recovery performance, Customer Service, Customer Solutions, and Fraud, using SQL/SAS and Python, so that leaders can make better and more impactful decisions. A successful candidate must be intellectually curious, proactive, collaborative, driven, and communicative. Analytics projects will include, but shall not necessarily be limited to: Identifying and analyzing drivers of Servicing performance. Leveraging advanced analytical techniques to build unique solutions that improve agent experience or business outcomes. Leveraging time series techniques to understand historical performance and predict future outcomes. Developing and testing hypotheses using A/B tests and what-if scenarios. This position is remote, where you have the option to work from home. On occasion we may request for you to commute to our nearest office for in-person engagement activities such as team meetings, training and culture events. To ensure the safety of our colleagues and communities, we require employees who come together in person to be fully vaccinated. We’re proud to offer you choice and flexibility. Key Responsibilities Champion Customers: Develop in-depth data solutions that provide key business insights. Leverage data analytics to derive insights from customer behavior, customer experience and associate metrics to drive business actions. Relate and Inspire: Collaborate with process owners across different teams in Servicing on analytical projects. Incorporate new data elements, sources and channels to help promote efficient and effective strategies and customer contact preferences. Elevate Every Day: Execute projects under aggressive deadlines with limited guidance and direction from management.
Act as Owners: Prepare and deliver presentations summarizing findings and recommendations; demonstrate an ability to communicate the same in technical and layman's terms. Support loss/cost and sizing estimations to enable prioritization and demonstrate business benefit. Provide thought leadership, strategic and analytic support to the group through the utilization of data mining skills and business knowledge. Conduct ad-hoc analyses and reporting as needed. Develop process controls and produce documentation, as needed. Perform other duties and/or special projects as assigned. Qualifications/Requirements Bachelor's degree in STEM-related fields, such as Engineering, Computer Science, Data Science, or Math, with a minimum of 2-5 years of experience performing analytics; OR, in lieu of a degree, 4-7 years of experience performing analytics. 2+ years of experience with BI applications: decision support systems, query and reporting, online analytical processing, statistical analysis, forecasting, data mining, and data visualization. 2+ years of experience with tools, facilities, and techniques for managing and administering data. 2+ years of experience with any analytical tool (e.g., SAS, Python, R, MATLAB, PyTorch, TensorFlow + Keras, AWS SageMaker, etc.). 2+ years of SQL/SAS experience. 2+ years of experience with Microsoft Office (Word, Excel, PowerPoint, and Visio). Desired Characteristics Capable and influential in delivering compelling advanced analytics solutions; capable of being proactive around the same and not just reactive. Ability to propose alternative data options; recommend, drive, and implement the most efficient solutions. 2+ years of experience in Data Science.
Experience working with large volumes of data from multiple data sources, primarily Oracle and SQL Server; experience with PySpark or Spark to query data from Hadoop or a Data Lake. Excellent communication and collaboration skills; must be able to work with and communicate across many functional organizations. Ability to work collaboratively as well as independently. Knowledge of the Financial Services industry. Ability to think out of the box and drive positive change and improvement. Work Timings This role qualifies for Enhanced Flexibility and Choice offered in Synchrony India and will require the incumbent to be available between 06:00 AM Eastern Time – 11:30 AM Eastern Time (timings are anchored to US Eastern hours and will adjust twice a year locally). This window is for meetings with India and US teams. The remaining hours will be flexible for the employee to choose. Exceptions may apply periodically due to business needs. Please discuss this with the hiring manager for more details. For Internal Applicants Understand the criteria or mandatory skills required for the role before applying. Inform your Manager or HRM before applying for any role on Workday. Ensure that your Professional Profile is updated (fields such as Education, Prior experience, Other skills); it is mandatory to upload your updated resume (Word or PDF format). Must not be on any corrective action plan (First Formal/Final Formal, PIP). Only employees who have completed 18 months in the organization and 12 months in their current role and level are eligible. Level 7+ employees can apply. Grade/Level: 09 Job Family Group Data Analytics
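The A/B hypothesis-testing work this role describes usually comes down to a two-proportion z-test. A minimal sketch with stdlib Python only; the conversion counts below are invented for illustration, not from the posting.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment, using the pooled
    standard error under the null hypothesis that both rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Illustrative numbers: 20% vs 26% cure rate on 1,000 accounts per arm.
z = two_proportion_z(200, 1000, 260, 1000)
print(round(z, 2), "significant at 5%:", abs(z) > 1.96)
```

With |z| above the 1.96 critical value, the variant's lift would be declared significant at the 5% level; in practice the same computation is available in SAS (PROC FREQ) or Python's statsmodels, both named in the requirements.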
Posted 1 month ago