
1576 Sagemaker Jobs - Page 33

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Job Summary
We're seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
- Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
- Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
- Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
- Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate limiting

Computer Vision Development
- Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
- Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
- Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production

MLOps & Deployment
- Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
- Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
- Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)

Cross-Functional Collaboration
- Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
- Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
- Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications
You must be proficient in at least one tool from each category below:
- LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
- Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
- Inference Serving: Triton Inference Server; FastAPI or Flask
- Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
- Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
- MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
- Monitoring & Observability: Prometheus; Grafana
- Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
- Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
- Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field
- 3–5 years of professional experience shipping both generative and vision-based AI models in production
- Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
- Excellent verbal and written communication skills

Typical Domain Challenges You'll Solve
- LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
- Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
- Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
- Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines
- Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights.
From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has helped build solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind; Auriga was founded on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in! Our website: https://aurigait.com/
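The retrieval step behind the RAG workflows described in this listing can be illustrated with a minimal sketch. The toy documents and three-dimensional "embeddings" below are invented for illustration; a production system would use a real embedding model and a vector database such as Pinecone or Weaviate rather than an in-memory list.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    # Rank stored (document, vector) pairs by similarity to the query.
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy 3-dimensional "embeddings" standing in for a real model's output.
index = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.0]),
    ("account deletion", [0.0, 0.1, 0.9]),
]

query = [0.8, 0.2, 0.1]  # pretend embedding of "how do I get my money back?"
context = retrieve(query, index, k=1)
prompt = f"Answer using only this context: {context[0]}"
```

The retrieved text is then placed into the LLM prompt so the answer is grounded in the indexed documents rather than the model's parametric memory.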

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Senior Principal Consultant, AI/ML Engineer! For this role, we are looking for candidates with relevant experience in designing and developing machine learning and deep learning systems, professional software development experience, hands-on experience running machine learning tests and experiments, and the ability to implement appropriate ML algorithms.

Responsibilities
- Drive the vision for a modern data and analytics platform to deliver well-architected and engineered data and analytics products leveraging the cloud tech stack and third-party products
- Close the gap between ML research and production to create ground-breaking new products and features and solve problems for our customers
- Design, develop, test, and deploy data pipelines, machine learning infrastructure, and client-facing products and services
- Build and implement machine learning models and prototype solutions for proof-of-concept
- Scale existing ML models into production on a variety of cloud platforms
- Analyze and resolve architectural problems, working closely with engineering, data science, and operations teams

Qualifications we seek in you!

Minimum Qualifications / Skills
- Bachelor's degree in computer science engineering or information technology, or BSc in Computer Science, Mathematics, or a similar field
- Master's degree is a plus
- Integration - APIs, microservices, and ETL/ELT patterns
- DevOps (good to have) - Ansible, Jenkins, ELK
- Containerization - Docker, Kubernetes, etc.
- Orchestration - Airflow, Step Functions, Control-M, etc.
- Languages and scripting - Python, Scala, Java, etc.
- Cloud services - AWS, GCP, Azure, and cloud native
- Analytics and ML tooling - SageMaker, ML Studio
- Execution paradigm - low latency/streaming, batch

Preferred Qualifications / Skills
- Data platforms - Big Data (Hadoop, Spark, Hive, Kafka, etc.) and Data Warehouse (Teradata, Redshift, BigQuery, Snowflake, etc.)
- Visualization tools - Power BI, Tableau

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

We're seeking a Mid-Level Machine Learning Engineer to join our growing Data Science & Engineering team. In this role, you will design, develop, and deploy ML models that power our cutting-edge technologies, such as voice ordering, prediction algorithms, and customer-facing analytics. You'll collaborate closely with data engineers, backend engineers, and product managers to take models from prototyping through to production, continuously improving accuracy, scalability, and maintainability.

Essential Job Functions
- Model Development: Design and build next-generation ML models using advanced tools like PyTorch, Gemini, and Amazon SageMaker, primarily on Google Cloud or AWS platforms
- Feature Engineering: Build robust feature pipelines; extract, clean, and transform large-scale transactional and behavioral data. Engineer features like time-based attributes, aggregated order metrics, and categorical encodings (LabelEncoder, frequency encoding)
- Experimentation & Evaluation: Define metrics, run A/B tests, conduct cross-validation, and analyze model performance to guide iterative improvements. Train and tune regression models (XGBoost, LightGBM, scikit-learn, TensorFlow/Keras) to minimize MAE/RMSE and maximize R²
- Ownership: Own the entire modeling lifecycle end-to-end, including feature creation, model development, testing, experimentation, monitoring, explainability, and model maintenance
- Monitoring & Maintenance: Implement logging, monitoring, and alerting for model drift and data-quality issues; schedule retraining workflows
- Collaboration & Mentorship: Collaborate closely with data science, engineering, and product teams to define, explore, and implement solutions to open-ended problems that advance the capabilities and applications of Checkmate; mentor junior engineers on best practices in ML engineering
- Documentation & Communication: Produce clear documentation of model architecture, data schemas, and operational procedures; present findings to technical and non-technical stakeholders

Requirements
- Academics: Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field
- Experience: 5+ years of industry experience (or 1+ year post-PhD) building and deploying advanced machine learning models that drive business impact. Proven experience shipping production-grade ML models and optimization systems, including expertise in experimentation and evaluation techniques. Hands-on experience building and maintaining scalable backend systems and ML inference pipelines for real-time or batch prediction
- Programming & Tools: Proficient in Python and libraries such as pandas, NumPy, and scikit-learn; familiarity with TensorFlow or PyTorch. Hands-on with at least one cloud ML platform (AWS SageMaker, Google Vertex AI, or Azure ML)
- Data Engineering: Hands-on experience with SQL and NoSQL databases; comfortable working with Spark or similar distributed frameworks
- ML Fundamentals: Strong foundation in statistics, probability, and ML algorithms like XGBoost/LightGBM; ability to interpret model outputs and optimize for business metrics. Experience with categorical encoding strategies and feature selection. Solid understanding of regression metrics (MAE, RMSE, R²) and hyperparameter tuning
- Cloud & DevOps: Proven skills deploying ML solutions in AWS, GCP, or Azure; knowledge of Docker, Kubernetes, and CI/CD pipelines
- Collaboration: Excellent communication skills; ability to translate complex technical concepts into clear, actionable insights
- Working Terms: Candidates must be flexible and work US hours (at least until 6 p.m. ET), which is essential for this role, and must have their own system/work setup for remote work
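The frequency encoding and regression metrics (MAE, RMSE, R²) named in this listing can be sketched from their definitions. This is a minimal illustration on invented toy data; in practice libraries such as scikit-learn provide these, and the category values and numbers below are made up.

```python
import math

def frequency_encode(values):
    # Replace each category with its relative frequency in the column,
    # one of the categorical encodings named in the listing.
    n = len(values)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    return [counts[v] / n for v in values]

def regression_metrics(y_true, y_pred):
    # MAE, RMSE, and R² computed directly from their definitions.
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return mae, rmse, r2

encoded = frequency_encode(["pizza", "burger", "pizza", "salad"])
# "pizza" appears in 2 of 4 rows, so it encodes to 0.5
mae, rmse, r2 = regression_metrics([10.0, 20.0, 30.0], [12.0, 18.0, 33.0])
```

Minimizing MAE/RMSE and pushing R² toward 1, as the listing asks, means shrinking the error terms above relative to the spread of the target.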

Posted 1 month ago

Apply

16.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Senior Manager – AI Strategy & Advisory Lead

Job Summary:
As a Senior Manager – AI Strategy & Advisory Lead, you will be responsible for driving enterprise-level AI strategy consulting engagements for EY's global clients. You will help organizations design responsible, scalable, and value-centric AI roadmaps, govern enterprise data readiness, assess AI infrastructure needs, and operationalize AI through modern delivery models. This role sits at the intersection of business strategy, data, technology, and transformation—requiring both strategic foresight and technical fluency. You will collaborate with client executives, business unit leaders, data and cloud teams, and EY's global alliance partners to shape the next generation of intelligent enterprises.

Key Responsibilities:

Client Engagement & Delivery
- Lead AI strategy and advisory engagements across domains such as AI-enabled business transformation, AI/ML operating models, data readiness, infrastructure modernization, and governance.
- Engage with CxOs and business/technology leaders to assess AI maturity, define strategic priorities, and develop roadmaps aligned with their enterprise objectives.
- Oversee development of AI opportunity portfolios, ROI assessments, and implementation blueprints.
- Ensure compliance with responsible AI principles (ethics, fairness, transparency, privacy, etc.) in all solution approaches.

Practice Leadership
- Shape and evolve EY's AI Strategy & Advisory offerings and go-to-market (GTM) assets, including methodologies, accelerators, and thought leadership.
- Contribute to EY's global AI Center of Excellence (CoE) with reusable frameworks, playbooks, and client value stories.
- Mentor and lead a team of consultants and managers, building a culture of innovation, continuous learning, and delivery excellence.

Team Leadership & Capability Building
- Build and lead a high-performing team of AI strategists, data advisors, and transformation consultants.
- Mentor team members on strategic thinking, client engagement, and technical depth across data and AI domains.
- Establish capability frameworks, training plans, and reusable delivery accelerators for AI strategy engagements.
- Foster a collaborative, innovative, and purpose-driven team culture aligned with EY values.

Business Development
- Lead business development activities including proposal development, client workshops, solutioning, and account growth.
- Collaborate with industry and account teams to identify, qualify, and shape AI-led opportunities.

Required Qualifications:
- Bachelor's degree in Engineering, Computer Science, Information Technology, or Electronics from a recognized university (e.g., B.E./B.Tech).
- Master's degree: M.Tech or MBA with specialization in Information Systems, Infrastructure Management, or Technology Strategy.
- Relevant certifications are a plus: AI/ML certifications (e.g., TensorFlow, Microsoft AI Engineer, AWS Machine Learning Specialty); cloud certifications (Azure/AWS/GCP) will be an added advantage.
- 12–16 years of relevant experience in management consulting, strategy, or AI-driven digital transformation roles.
- Deep understanding of AI/ML technologies, business applications, and emerging trends.
- Familiarity with AI/ML platforms and ecosystem tools (e.g., Azure ML, AWS SageMaker, Vertex AI, DataRobot) will be a plus.
- Proven track record in leading AI or advanced analytics strategy engagements for large enterprises.
- Knowledge of the AI model lifecycle, responsible AI principles, data platforms, and cloud-native architectures.
- Strong business consulting and stakeholder engagement skills, with the ability to translate business needs into actionable AI strategies.
- Experience working across industries such as financial services, healthcare, consumer, or manufacturing is preferred.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 month ago

Apply

0 years

4 - 8 Lacs

Thiruvananthapuram

On-site

Job Requirements
Build, train, and implement Python machine learning models for diverse applications by utilizing Python's vast libraries and frameworks, including TensorFlow, PyTorch, and scikit-learn, to develop strong and effective AI solutions.

Work Experience
- Proficiency in Python and various ML libraries like TensorFlow, PyTorch, NumPy, Pandas, etc.
- Knowledge of Object-Oriented Analysis and Design, software design patterns, and Java coding principles
- Good understanding of ML and deep learning techniques and algorithms
- Knowledge of MLOps, DevOps, and cloud platforms like AWS SageMaker would be good to have
- Experience with Elasticsearch for efficient data indexing, search, and retrieval
- Data handling techniques: cleaning and preprocessing; knowledge of databases and DB integration is good to have

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Summary
Many employers promise the chance to make a difference – at GE Vernova, you can change the world. Bringing clean, affordable power to the developing world, decarbonizing the world's electricity network, helping to build the grid of the future powered by renewable energy … they're all part of our company's strategy. If you're passionate about applying AI, excited to tackle UN SDG 7 and 13 and energy transition challenges, and motivated by the prospect of shaping the future of the energy industry through innovation and new business models, we encourage you to apply. Join us in our journey to redefine what's possible with AI and make a lasting impact on the world of energy.

We are seeking a dynamic, forward-thinking, and results-driven Senior Artificial Intelligence (AI) Application Architect who will work on building and deploying grid innovation applications using model-to-code, and on architecting prototype systems to validate and verify in containerized form. Reporting to the AI leader in the CTO organization, the Senior AI Application Architect will work in close collaboration with GA product lines, R&D teams, product management, and other GA functions. This role will also be responsible for building the data analytics applications framework and will work closely with other functions across the Grid Automation (GA) business to identify areas where the business can leverage data and data analytics to drive efficiency and increase customer satisfaction, and to develop POCs that solve critical problems for our customers. As part of the MLOps architecture development, the Senior AI Application Architect will enable the end-to-end ML model lifecycle, i.e., from commissioning training datasets to deploying models in the production environment through an automated CI/CD pipeline.

Job Description

Essential Responsibilities: The Senior AI Application Architect will be responsible for:
- Demonstrating novel and transformational applications/analytics to drive innovation and differentiation.
- Defining the framework to collect, structure, and use databases for AI to extract value.
- Developing AI/ML applications to build differentiated products and solutions, with the ability to work on customers' value-driven applications/analytics to drive innovation.
- Designing and deploying high-quality, scalable, and secure AI/ML models and applications on the GE GridNode/edge platforms, using container or microservices principles.
- Developing and implementing strategies for optimizing the performance and scalability of machine learning models in production.
- Collaborating with product management, R&D, and other functions to understand their needs and develop innovative solutions.
- Implementing and maintaining data pipelines for AI/ML models.
- Monitoring and optimizing the performance of AI/ML models in production.
- Identifying intellectual property / IP clearance.
- Collaborating with cross-functional teams.

Qualifications/Requirements
- Master's/PhD degree in computer science, information technology (IT), electrical engineering, or electric power engineering, specifically in the computer and electric power engineering field, with a minimum of 6+ years of data science working experience.
- 6+ years of professional experience and knowledge of artificial intelligence (AI) and machine learning (ML), including unsupervised learning, supervised learning, reinforcement learning, and large language models (LLMs).
- 5-10 years of R&D or applications experience related to power system protection and automation.
- Proven experience applying AI/ML frameworks/workflows and AI/MLOps with CI/CD, using cloud-native and on-prem development and deployment in OT/industrial automation environments.
- Hands-on professional experience developing and testing AI/ML algorithms, and/or demonstrated professional experience with different scenarios of grid/physics models in power system simulation tools (MATLAB/PSCAD) as well as dynamics tools (PSS/E, DIgSILENT, and equivalent).
- Experience with MLOps principles.
- Experience with DevOps, data pipelines, Azure ML registry, and deployment methods such as Docker, K8s, etc.
- Able to share ideas and work well in a team environment; proactive approach to tasks, displaying initiative; able to guide and mentor others in the team.
- Flexible and adaptable; open to change and modification of tasks; comfortable working in a multi-tasking environment.
- Demonstrated professional experience with different scenarios of appropriate AI/ML models for energy/grid applications.

Desired Characteristics
- 9+ years of industry experience.
- Research or industry experience with simulation using scientific programming tools or languages such as MATLAB, C++, C#, Python, R, etc.
- Experience developing and implementing ML models, such as predictive maintenance, load forecasting, and grid optimization, using cloud services such as AWS SageMaker or equivalent in the power systems domain.
- Hands-on experience in MLOps, data engineering, and cloud, working with real-time distribution grid data.
- Experience with Linux virtualized system deployment using VMs, hypervisors (ESXi, KVM, Xen, etc.), Docker, and related tools.
- Experience as a system architect, team lead, or industry-recognized subject matter expert.
- Advanced experience utilizing and applying common programming languages and tools, such as Python, C/C++, Java, Spark, Hadoop, R, Kafka, C#, and MATLAB, along with good familiarity with power system modelling and data communication formats.
- Expertise in machine learning / deep learning methods: LLMs, NLP, computer vision / image processing.
- Expertise in GraphDB, SQL/NoSQL, and MS Access databases.
- Understanding of, or experience in, applying data analytics to electrical power systems or industrial OT systems.
- Understanding of GPUs and of Spark/Scala for distributed computing.
- Strong root-causing, troubleshooting, and debugging skills using tools such as Wireshark, TCPDump, and other Linux and Windows system tools.
- Strong communication skills and a proactive, open approach to conflict resolution.
- Strong organizational skills; self-motivated and self-directed.
- Knowledge of modern protection and control and distribution automation developments and trends.
- Proven record of writing and presenting papers at industry conferences/journals.

Additional Information
Relocation Assistance Provided: Yes
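The responsibility above of monitoring AI/ML models in production commonly includes drift detection. Below is a minimal, hedged sketch of one simple approach: comparing a recent feature window against the training baseline and flagging a retrain when the mean shifts by more than a chosen number of baseline standard deviations. The threshold and grid-load numbers are illustrative, not from the listing.

```python
import math

def mean_std(xs):
    # Mean and (population) standard deviation of a sample.
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, math.sqrt(var)

def drift_score(baseline, live):
    # Shift of the live mean, in units of the baseline standard deviation.
    base_mean, base_std = mean_std(baseline)
    live_mean = sum(live) / len(live)
    return abs(live_mean - base_mean) / base_std

def needs_retrain(baseline, live, threshold=0.5):
    # Trigger retraining when the feature distribution has shifted
    # by more than `threshold` baseline standard deviations.
    return drift_score(baseline, live) > threshold

# Toy data: grid-load readings at training time vs. a recent window.
training_load = [50.0, 52.0, 48.0, 51.0, 49.0]
recent_load = [60.0, 62.0, 59.0, 61.0, 58.0]
```

A production pipeline would typically run such a check on a schedule and use richer statistics (e.g., a population stability index) per feature, but the trigger logic follows the same shape.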

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About the Role:
We are seeking an experienced AI Lead Engineer to join our dynamic team, driving innovation in AI-driven solutions with a global impact. This role is specifically tailored for a highly motivated professional with a strong technical background in Python and Java, hands-on expertise in developing Retrieval-Augmented Generation (RAG)-based chatbots, Machine Learning, and Deep Learning, and solid cloud deployment experience, preferably on AWS. As our AI Lead Engineer, you will spearhead the end-to-end development lifecycle, including solution architecture, design, implementation, deployment, and scaling of cutting-edge AI applications. Your leadership will ensure the delivery of high-performance, reliable, and impactful solutions.

Key Responsibilities:
- Lead the design and development of AI-driven applications, particularly focusing on RAG-based chatbot solutions.
- Architect robust solutions leveraging Python and Java to ensure scalability, reliability, and maintainability.
- Deploy, manage, and scale AI applications using AWS cloud infrastructure, optimizing performance and resource utilization.
- Collaborate closely with cross-functional teams to understand requirements, define project scopes, and deliver solutions effectively.
- Mentor team members, providing guidance on best practices in software development, AI methodologies, and cloud deployments.
- Ensure solutions meet quality standards, including thorough testing, debugging, performance tuning, and documentation.
- Continuously research emerging AI technologies and methodologies to incorporate best practices and innovation into our products.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, Statistics, or related fields.
- At least 5 years of professional experience in AI/machine learning engineering.
- Strong programming skills in Python and Java.
- Demonstrated hands-on experience building Retrieval-Augmented Generation (RAG)-based chatbots or similar generative AI applications.
- Proficiency in cloud platforms, particularly AWS, including experience with EC2, Lambda, SageMaker, DynamoDB, CloudWatch, and API Gateway.
- Solid understanding of AI methodologies, including natural language processing (NLP), vector databases, embedding models, and large language model integrations.
- Experience leading projects or teams, managing technical deliverables, and ensuring high-quality outcomes.

Preferred Qualifications:
- AWS certifications (e.g., AWS Solutions Architect, AWS Machine Learning Specialty).
- Familiarity with popular AI/ML frameworks and libraries such as Hugging Face Transformers, TensorFlow, PyTorch, LangChain, or similar.
- Experience with Agile development methodologies.
- Excellent communication skills, capable of conveying complex technical concepts clearly and effectively.
- Strong analytical and problem-solving capabilities, with the ability to navigate ambiguous technical challenges.

Posted 1 month ago

Apply

3.0 years

3 - 6 Lacs

Jaipur

On-site

Job Summary
We're seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
- Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
- Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
- Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
- Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate limiting

Computer Vision Development
- Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
- Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
- Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production

MLOps & Deployment
- Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
- Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
- Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)

Cross-Functional Collaboration
- Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
- Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
- Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications
You must be proficient in at least one tool from each category below:
- LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
- Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
- Inference Serving: Triton Inference Server; FastAPI or Flask
- Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
- Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
- MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
- Monitoring & Observability: Prometheus; Grafana
- Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
- Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
- Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field
- 3–5 years of professional experience shipping both generative and vision-based AI models in production
- Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
- Excellent verbal and written communication skills

Typical Domain Challenges You'll Solve
- LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
- Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
- Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
- Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines
- Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data and insights.
From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has helped build solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom and many more. We are a group of people who just could not leave our college life behind; Auriga was founded on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in! Our website: https://www.aurigait.com/
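The batching concern in the inference-API bullet above can be sketched minimally: pending requests are grouped into fixed-size batches so the model runs once per batch rather than once per request. The `run_inference` stand-in below is invented for illustration (it just "scores" each prompt by length); a real service would call the model and also handle concurrency and rate limiting.

```python
def make_batches(requests, batch_size):
    # Group pending requests into fixed-size batches; the final batch
    # may be smaller if the queue length is not a multiple of the size.
    return [requests[i:i + batch_size] for i in range(0, len(requests), batch_size)]

def run_inference(batch):
    # Stand-in for a real model call so the pipeline is runnable end to end.
    return [len(prompt) for prompt in batch]

pending = ["hi", "summarize this", "translate", "classify", "ping"]
results = []
for batch in make_batches(pending, batch_size=2):
    results.extend(run_inference(batch))
```

Tuning `batch_size` trades per-request latency against throughput, which is exactly the balance the "Inference Latency" challenge in the listing describes.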

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

● Minimum of (3+) years of experience in AI-based application development. ● Fine-tune pre-existing models to improve performance and accuracy. ● Experience with TensorFlow or PyTorch, Scikit-learn, or similar ML frameworks and familiarity with APIs like OpenAI or vertex AI ● Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT). ● Implement frameworks like LangChain, Anthropics Constitutional AI, OpenAIs, Hugging Face, and Prompt Engineering techniques to build robust and scalable AI applications. ● Evaluate and analyze RAG solution and Utilise the best-in-class LLM to define customer experience solutions (Fine tune Large Language models (LLM)). ● Architect and develop advanced generative AI solutions leveraging state-of-the-art language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others. ● Strong understanding and experience with open-source multimodal LLM models to customize and create solutions. ● Explore and implement cutting-edge techniques like Few-Shot Learning, Reinforcement Learning, Multi-Task Learning, and Transfer Learning for AI model training and fine-tuning. ● Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib. ● Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques. ● Proficiency in Python with the ability to get hands-on with coding at a deep level. ● Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems. ● Ability to write optimized and high-performing scripts on relational databases (e.g., MySQL, PostgreSQL) or non-relational database (e.g., MongoDB or Cassandra) ● Enthusiasm for continuous learning and professional developement in AI and leated technologies. ● Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. 
● Knowledge of cloud services like AWS, Google Cloud, or Azure.
● Proficiency with version control systems, especially Git.
● Familiarity with data preprocessing techniques and pipeline development for AI model training.
● Experience with deploying models using Docker and Kubernetes.
● Experience with AWS Bedrock and SageMaker is a plus.
● Strong problem-solving skills with the ability to translate complex business problems into AI solutions.
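As a hedged illustration of the hyperparameter-tuning skill this posting asks for, here is a minimal scikit-learn sketch; the synthetic dataset and the parameter grid are invented for the example, not taken from the posting.

```python
# Hedged sketch: hyperparameter tuning with scikit-learn's GridSearchCV
# on synthetic data. Dataset and grid values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 5-fold cross-validation picks the best regularization strength C.
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X_train, y_train)
print(grid.best_params_, round(grid.score(X_test, y_test), 3))
```

In practice the grid would cover the model family actually being tuned, and a randomized or Bayesian search often scales better than an exhaustive grid.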

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Data Engineer (Dataiku)
Location: Bangalore, Chennai, Noida, Hyderabad
Experience: 7-10 Years

Job Summary
We are seeking an experienced Data Engineer with strong expertise in Dataiku, AWS cloud technologies, and Large Language Models (LLMs). The ideal candidate will play a key role in building scalable data pipelines, deploying ML/LLM-based solutions, and driving analytics initiatives across the organization using modern data engineering practices.

Required Skills & Experience
5+ years of experience in data engineering or related roles.
Hands-on experience with Dataiku DSS (Data Science Studio) for building data pipelines and analytical workflows.
Good understanding of LLMs, prompt engineering, and generative AI use cases.
Strong command of AWS services – S3, Lambda, Glue, Redshift, SageMaker, etc.
Proficient in Python, SQL, and data transformation techniques.
Experience with REST APIs, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.
Familiarity with data modeling, ETL best practices, and cloud security standards.
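The data-transformation skill listed here can be sketched with pandas on a small in-memory frame; the column names and values are invented for the example, and in a Dataiku or Glue pipeline the same logic would live inside a recipe or job step.

```python
# Hedged sketch: a typical clean-and-aggregate transform in a data
# pipeline, shown with pandas. Columns and values are illustrative.
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "region": ["north", "north", "south", "south"],
    "amount": [120.0, None, 80.0, 200.0],
})

clean = (
    raw
    # Impute the missing amount with the column median (120.0 here).
    .assign(amount=lambda df: df["amount"].fillna(df["amount"].median()))
    # Aggregate to one revenue row per region.
    .groupby("region", as_index=False)["amount"].sum()
)
print(clean)
```

The same fill-then-aggregate pattern translates directly to SQL (`COALESCE` plus `GROUP BY`) when the transform runs in Redshift instead of Python.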

Posted 1 month ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

ABOUT US: From the start, the vision has been to build a state-of-the-art workplace infrastructure and equip employees and clients with the right tools, an approach that has made Bytes Technolab a growth hacker. This has helped the development team adapt to existing and upcoming technologies and platforms to create top-notch software solutions for businesses, startups, and enterprises. Our core values are 100% integrity in communication, workflow, and methodology, and flexible collaboration. With a client-first approach, we offer flexible engagement models that help our clients in the best way possible. Bytes Technolab is confident that this approach helps us develop user-centric, applicable, advanced, secure, and scalable software solutions. Our team is fully committed to adding value at every stage of your journey with us, from initial engagement to delivery and beyond.

Role Description:
3+ years of professional experience in Machine Learning and Artificial Intelligence.
Strong proficiency in Python programming and its libraries for ML and AI (NumPy, Pandas, scikit-learn, etc.).
Hands-on experience with ML/AI frameworks like PyTorch, TensorFlow, Keras, FaceNet, OpenCV, and other relevant libraries.
Proven ability to work with GPU acceleration for deep learning model development and optimization (using CUDA, cuDNN).
Strong understanding of neural networks, computer vision, and other AI technologies.
Solid experience working with Large Language Models (LLMs) such as GPT, BERT, and LLaMA, including fine-tuning, prompt engineering, and embedding-based retrieval (RAG).
Working knowledge of agentic architectures, including designing and implementing LLM-powered agents with planning, memory, and tool-use capabilities.
Familiar with frameworks like LangChain, AutoGPT, BabyAGI, and custom agent orchestration pipelines.
Solid problem-solving skills and the ability to translate business requirements into ML/AI/LLM solutions.
Experience in deploying ML/AI models on cloud platforms (AWS SageMaker, Azure ML, Google AI Platform).
Proficiency in building and managing ETL pipelines, data preprocessing, and feature engineering.
Experience with MLOps tools and frameworks such as MLflow, Kubeflow, or TensorFlow Extended (TFX).
Expertise in optimizing ML/AI models for performance and scalability across diverse hardware architectures.
Experience with Natural Language Processing (NLP) and foundational knowledge of Reinforcement Learning.
Familiarity with data versioning tools like DVC or Delta Lake.
Skilled in containerization and orchestration tools such as Docker and Kubernetes for scalable deployments.
Proficient in model evaluation, A/B testing, and establishing continuous training pipelines.
Experience working in Agile/Scrum environments with cross-functional teams.
Strong understanding of ethical AI principles, model fairness, and bias mitigation techniques.
Familiarity with CI/CD pipelines for machine learning workflows.
Ability to effectively communicate complex ML, AI, and LLM/agentic concepts to both technical and non-technical stakeholders.

We are hiring professionals with 3+ years of experience in IT Services. Kindly share your updated CV at freny.darji@bytestechnolab.com
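The A/B-testing skill named in this posting can be sketched with a two-proportion z-test comparing two model variants on held-out outcomes; the counts below are invented for the example, and a real rollout would also consider test power and multiple comparisons.

```python
# Hedged sketch: two-proportion z-test for A/B testing two model
# variants (e.g., correct-prediction or conversion counts).
# The counts are illustrative only.
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se  # |z| > 1.96 is significant at the 5% level

z = two_proportion_z(480, 1000, 530, 1000)
print(round(z, 2))
```

In a continuous-training setup, a test like this would gate promotion of the challenger model to the champion slot.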

Posted 1 month ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Kidde Global Solutions: Kidde Global Solutions is a world leader in fire and life safety solutions, tailored for everything from complex commercial facilities to homes. Through iconic, industry-defining brands including Kidde, Kidde Commercial, Edwards, GST, Badger, Gloria, and Aritech, we provide residential and commercial customers with advanced solutions and services to protect people and property in a wide range of applications, all around the globe.

Role: AI and Automation Architect
Location: Hyderabad
Employment Type: Full-time
Experience: 12-18 years

Position Overview: We are seeking an innovative and experienced AI and Automation Architect to lead the design and development of intelligent AI and automation solutions by integrating RPA (Robotic Process Automation) bots with AI-driven technologies. The Architect will work closely with cross-functional teams to identify opportunities for automation, design scalable solutions, and drive business efficiency through cutting-edge AI-powered bots.

Key Responsibilities:
Automation Strategy and Design
AI and Bot Integration
Solution Development and Deployment using MLOps on the AWS Cloud Platform
Stakeholder Collaboration
Governance and Compliance
Innovation and Continuous Improvement

Skills and Competencies:
Develop and implement enterprise-wide intelligent automation strategies by integrating RPA with AI capabilities (e.g., natural language processing, machine learning).
Analyze business processes to identify automation opportunities and recommend solutions.
Define and establish automation frameworks and best practices.
Architect and deploy AI-enabled bots that integrate with enterprise platforms like ServiceNow, Workday, etc.
Collaborate with AI engineers and data scientists to leverage machine learning models in automation workflows.
Oversee the development and deployment of bots, ensuring they adhere to quality and security standards.
Partner with business teams, IT departments, and process owners to understand automation needs and deliver impactful solutions.
Establish MLOps/AIOps frameworks to ensure compliance with organizational and regulatory standards.
Ensure automated solutions are secure, auditable, and aligned with data privacy laws (e.g., GDPR, CCPA).
Stay updated on emerging technologies in AI and automation and assess their applicability to the organization.
Drive continuous improvement in automation processes, leveraging feedback and analytics for optimization.

Education: Bachelor's or Master's degree in Computer Science or Information Technology, with a specialization in Data Science, ML, or AI.
Experience: 12+ years of experience in automation, including 3+ years in an architect or leadership role.

Platform Development and Evangelism: Build scalable, customer-facing AI platforms. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs.
Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.
LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.
Model Fine-Tuning and Optimization: Proven expertise in model fine-tuning and optimization techniques, achieving better latencies and accuracies in model results while reducing the training and resource requirements for fine-tuning LLM and LVM models.
LLM Models and Use Cases: Extensive knowledge of different LLM models, with insight into the applicability of each model based on use cases. Proven experience delivering end-to-end solutions from engineering to production for specific customer use cases.
DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices.

Posted 1 month ago

Apply

0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Job Requirements
Build, train, and implement Python machine learning models for diverse applications, utilizing Python's vast libraries and frameworks, including TensorFlow, PyTorch, and scikit-learn, to develop strong and effective AI solutions.

Work Experience
Proficiency in Python and various ML libraries like TensorFlow, PyTorch, NumPy, Pandas, etc.
Knowledge of Object-Oriented Analysis and Design, software design patterns, and Java coding principles.
Good understanding of ML and deep learning techniques and algorithms.
Knowledge of MLOps, DevOps, and cloud platforms like AWS SageMaker is good to have.
Experience with Elasticsearch for efficient data indexing, search, and retrieval.
Data handling techniques: cleaning and preprocessing; knowledge of databases and DB integration is good to have.
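The cleaning-and-preprocessing requirement above can be sketched as a scikit-learn Pipeline that chains imputation, scaling, and a model into one trainable object; the toy array and pipeline steps are invented for the example.

```python
# Hedged sketch: preprocessing chained with a model via a scikit-learn
# Pipeline, so the same transforms apply at train and predict time.
# The data is a small illustrative array with missing values.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 2.0], [np.nan, 3.0], [2.0, np.nan], [3.0, 1.0]] * 10)
y = np.array([0, 1, 0, 1] * 10)

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # fill missing values
    ("scale", StandardScaler()),                 # standardize features
    ("model", LogisticRegression()),
])
pipe.fit(X, y)
print(pipe.predict(X[:4]))
```

Packaging preprocessing inside the pipeline avoids train/serve skew, since the fitted imputer and scaler travel with the model artifact.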

Posted 1 month ago

Apply

0.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka

Remote

Location: Bangalore - Karnataka, India - EOIZ Industrial Area
Job Family: Engineering
Worker Type Reference: Regular - Permanent
Pay Rate Type: Salary
Career Level: T3(B)
Job ID: R-44637-2025

Description & Requirements
Introduction: A Career at HARMAN Digital Transformation Solutions (DTS)
We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions, combining the physical and digital to make technology a more dynamic force that serves humanity's needs.

Java Microservices: Java Developer with experience in microservices deployment, automation, and system lifecycle management (security and infrastructure management).

Required Skills: Java, Hibernate, SAML/OpenSAML, REST APIs, Docker, PostgreSQL (PSQL). Familiar with the GitHub workflow.
Good to Have: Go (for automation and bootstrapping), the Raft consensus algorithm, HashiCorp Vault.

Key Responsibilities:
Service Configuration & Automation: Configure and bootstrap services using the Go CLI. Develop and maintain Go workflow templates for automating Java-based microservices.
Deployment & Upgrade Management: Manage service upgrade workflows and apply Docker-based patches. Implement and manage OS-level patches as part of the system lifecycle. Enable controlled deployments and rollbacks to minimize downtime.
Network & Security Configuration: Configure and update FQDN, proxy settings, and SSL/TLS certificates. Set up and manage syslog servers for logging and monitoring. Manage appliance users, including root and SSH users, ensuring security compliance.
Scalability & Performance Optimization: Implement scale-up and scale-down mechanisms for resource optimization. Ensure high availability and performance through efficient resource management.
Lifecycle & Workflow Automation: Develop automated workflows to support service deployment, patching, and rollback.
Ensure end-to-end lifecycle management of services and infrastructure.

What You Will Do
Perform in-depth analysis of data and machine learning models to identify insights and areas of improvement.
Develop and implement models using both classical machine learning techniques and modern deep learning approaches.
Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection.
Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements.
Optimize models for performance and latency, including the implementation of caching strategies where appropriate.
Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions.

What You Need to Be Successful
Experience using various statistical techniques to derive important insights and trends.
Proven experience in machine learning model development and analysis using classical and neural-network-based approaches.
Strong understanding of LLM architecture, usage, and fine-tuning techniques.
Solid understanding of statistics, data preprocessing, and feature engineering.
Proficiency in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.).
Strong debugging and optimization skills for both training and inference pipelines.
Familiarity with data formats and processing tools (Pandas, Spark, Dask).
Experience working with transformer-based models (e.g., BERT, GPT) and the Hugging Face ecosystem.

Bonus Points if You Have
Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar).
Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics).
Familiarity with cloud platforms (AWS, GCP, Azure, SageMaker) and containerization (Docker, Kubernetes).
Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection.
Exposure to distributed training and model parallelism techniques.
Prior experience A/B testing ML models in production.

What Makes You Eligible
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
5-10 years of relevant, proven experience in developing and deploying generative AI models and agents in a professional setting.

What We Offer
Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location.
Access to employee discounts on world-class HARMAN and Samsung products (JBL, Harman Kardon, AKG, etc.).
Extensive training opportunities through our own HARMAN University.
Competitive wellness benefits.
Tuition reimbursement.
"Be Brilliant" employee recognition and rewards program.
An inclusive and diverse work environment that fosters and encourages professional and personal development.

You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.

About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we've been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences.
Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today's most sought-after performers, while our digital transformation solutions serve humanity by addressing the world's ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners, and each other. If you're ready to innovate and do work that makes a lasting impact, join our talent community today!

HARMAN is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veteran status. HARMAN offers a great and challenging work environment.

Important Notice: Recruitment Scams
Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information, or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com.

HARMAN is proud to be an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
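The drift-detection requirement in this posting can be sketched with a dependency-free two-sample Kolmogorov-Smirnov check between a training window and live traffic; the samples and the 0.2 alert threshold are invented for the example.

```python
# Hedged sketch: a pure-Python two-sample KS statistic for simple
# feature-drift checks. The 0.2 alert level is illustrative, not a
# standard cutoff; real monitoring would calibrate it per feature.
import bisect

def ks_statistic(sample_a, sample_b):
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_xs, v):
        # Fraction of points <= v in the sorted sample.
        return bisect.bisect_right(sorted_xs, v) / len(sorted_xs)

    # Max distance between the two empirical CDFs.
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in sorted(set(a) | set(b)))

train = [i / 100 for i in range(100)]             # roughly Uniform(0, 1)
live_ok = [i / 100 + 0.01 for i in range(100)]    # barely shifted
live_drift = [i / 100 + 0.5 for i in range(100)]  # clearly shifted

for name, live in [("ok", live_ok), ("drift", live_drift)]:
    stat = ks_statistic(train, live)
    print(name, round(stat, 2), "ALERT" if stat > 0.2 else "ok")
```

In production this statistic would typically be emitted as a gauge (e.g., to Prometheus) so Grafana alerts can fire when it crosses the chosen threshold.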

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Tamil Nadu

On-site

Job Information
Job Type: Part time
Work Experience: 10-15 years
Industry: Technology
Date Opened: 06/24/2025
City: Chennai City Corporation
State/Province: Tamil Nadu
Country: India
Zip/Postal Code: 600015

Job Description
Objective of the Role
Provide strategic and hands-on guidance to help architect, validate, and prototype an AI-driven solution. The focus is on creating scalable, secure, and production-ready AI capabilities that align with our product vision.

Key Responsibilities
· Act as the technical architect and advisor for our AI initiatives
· Design AI/ML system architecture, including model pipelines, data flow, APIs, and infrastructure
· Evaluate use cases and guide algorithm/model selection (NLP, GenAI, computer vision, or predictive analytics as relevant)
· Collaborate with our in-house engineering team to build scalable PoCs/MVPs
· Define best practices for model deployment, retraining, and performance optimization
· Advise on MLOps, data strategy, and cloud integration (AWS/GCP preferred)
· Provide weekly check-ins, code reviews, and documentation support

Requirements
· 8+ years in AI/ML engineering, including at least 2–3 years in architectural roles
· Proven experience working in a product company environment or consulting for one
· Deep knowledge of machine learning frameworks (TensorFlow, PyTorch, Hugging Face, LangChain)
· Experience with Generative AI, LLMs, or NLP models preferred
· Exposure to cloud-native AI deployment (Docker, Kubernetes, SageMaker, Vertex AI, etc.)
· Strong communication skills and ability to work in cross-functional teams
· Past experience working in contract or freelance consulting engagements

Posted 1 month ago

Apply

4.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Title: Lead Data Scientist
Location: Multiple Locations (Hybrid/On-site)
Employment Type: Full-Time
Experience Required: 4–8 Years

About The Role
We are looking for a highly motivated and technically skilled Lead Data Scientist to join our advanced analytics and AI team. The ideal candidate should possess strong hands-on experience in Python, machine learning frameworks, cloud platforms (AWS, Azure, GCP), and end-to-end project deployment. This is a leadership role requiring technical expertise, business understanding, and mentorship capabilities.

Key Skills & Experience
Python: 5+ Years (Intermediate)
R Language: 3+ Years (Intermediate)
Machine Learning: 6+ Years (Intermediate)
Data Analysis: 6+ Years (Intermediate)
TensorFlow & PyTorch: 3+ Years (Intermediate)
Scikit-learn: 3+ Years (Intermediate)
Cloud Platforms: AWS, Azure, GCP – 3+ Years each (Intermediate)
Artificial Intelligence: 2+ Years (Intermediate)

Roles & Responsibilities
Data Science Leadership: Lead and manage end-to-end data science projects from exploration to production. Translate business objectives into data-driven solutions and actionable insights. Mentor junior data scientists and analysts, ensuring best practices in modeling and analysis.
Model Development & Deployment: Design, build, and optimize machine learning and AI models using Python, TensorFlow, PyTorch, and scikit-learn. Deploy models in production environments using cloud services (AWS, Azure, GCP). Evaluate model performance using statistical methods and fine-tune as needed.
Data Engineering & Analysis: Perform data cleaning, transformation, and exploratory data analysis (EDA). Work with large-scale structured and unstructured datasets. Collaborate with data engineers to ensure smooth data pipeline integration.
Cloud Integration: Utilize cloud services for model training, hosting, and monitoring (AWS SageMaker, Azure ML Studio, GCP AI Platform).
Implement scalable and secure data science solutions using cloud-native tools.
Cross-functional Collaboration: Work closely with product managers, software engineers, and stakeholders to align on goals and deliverables. Present complex models and findings to non-technical stakeholders in a clear and effective manner.

Preferred Qualifications
Bachelor's or Master's in Computer Science, Data Science, Mathematics, or a related field.
Prior experience leading data science teams or projects.
Strong communication, documentation, and stakeholder management skills.

Why Join Us?
Work on cutting-edge AI/ML projects that create real-world impact.
Collaborate with a high-performing and dynamic team.
Flexible work locations and career advancement opportunities.
(ref:hirist.tech)
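The "evaluate model performance using statistical methods" responsibility can be sketched with a bootstrap confidence interval for accuracy; the per-example correctness vector below is invented for the example.

```python
# Hedged sketch: percentile-bootstrap confidence interval for model
# accuracy. The correctness vector (85/100 correct) is illustrative.
import random

random.seed(0)
correct = [1] * 85 + [0] * 15  # 1 = model got the example right

def bootstrap_ci(outcomes, n_boot=2000, alpha=0.05):
    # Resample with replacement and collect the mean of each resample.
    means = sorted(
        sum(random.choices(outcomes, k=len(outcomes))) / len(outcomes)
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(correct)
print(round(lo, 3), round(hi, 3))
```

An interval like this is more informative than a point accuracy when deciding whether a retrained model genuinely beats the incumbent.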

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Xebia
Xebia is a trusted advisor in the modern era of digital transformation, serving hundreds of leading brands worldwide with end-to-end IT solutions. The company has experts specializing in technology consulting, software engineering, AI, digital products and platforms, data, cloud, intelligent automation, agile transformation, and industry digitization. In addition to providing high-quality digital consulting and state-of-the-art software development, Xebia has a host of standardized solutions that substantially reduce the time-to-market for businesses. Xebia also offers a diverse portfolio of training courses to help support forward-thinking organizations as they look to upskill and educate their workforce to capitalize on the latest digital capabilities. The company has a strong presence across 16 countries, with development centres across the US, Latin America, Western Europe, Poland, the Nordics, the Middle East, and Asia Pacific.

Job Title: Generative AI Engineer
Experience: 8 - 12 yrs
Location: Bengaluru, Chennai, Gurgaon & Pune

Job Summary: We are seeking a highly skilled Generative AI Engineer with hands-on experience in developing and deploying cutting-edge AI solutions using AWS, Amazon Bedrock, and agentic AI frameworks. The ideal candidate will have a strong background in machine learning and prompt engineering, with a passion for building intelligent, scalable, and secure GenAI applications.

Key Responsibilities:
Design, develop, and deploy Generative AI models and pipelines for real-world use cases.
Build and optimize solutions using AWS AI/ML services, including Amazon Bedrock, SageMaker, and related cloud-native tools.
Develop and orchestrate agentic AI systems, integrating autonomous agents with structured workflows and dynamic decision-making.
Collaborate with cross-functional teams including data scientists, cloud engineers, and product managers to translate business needs into GenAI solutions.
Implement prompt engineering, fine-tuning, and retrieval-augmented generation (RAG) techniques to optimize model performance.
Ensure robustness, scalability, and compliance in GenAI workloads deployed in production environments.

Required Skills & Qualifications:
Strong experience with Generative AI models (e.g., GPT, Claude, Mistral, etc.).
Hands-on experience with Amazon Bedrock and other AWS AI/ML services.
Proficiency in building and managing agentic AI systems using frameworks like LangChain, AutoGen, or similar.
Solid understanding of cloud-native architectures and MLOps on AWS.
Proficiency in Python and relevant GenAI/ML libraries (Transformers, PyTorch, LangChain, etc.).
Familiarity with security, cost, and governance best practices for GenAI on cloud.

Preferred Qualifications:
AWS certifications (e.g., AWS Certified Machine Learning – Specialty).
Experience with LLMOps tools and vector databases (e.g., Pinecone, FAISS, Weaviate).
Background in NLP, knowledge graphs, or conversational AI.

Why Join Us?
Work on cutting-edge AI technologies that are transforming industries.
Collaborative and innovative environment.
Opportunities for continuous learning and growth.
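The retrieval step of the RAG technique this posting mentions can be sketched as ranking document chunks by cosine similarity to a query embedding and pasting the top hit into the prompt; the embeddings here are toy vectors, whereas a real system would use an embedding model and a vector database such as Pinecone or FAISS.

```python
# Hedged sketch of RAG retrieval: rank chunks by cosine similarity to
# the query embedding. Vectors and chunk names are illustrative only.
import numpy as np

docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.9, 0.2]),
    "warranty terms": np.array([0.2, 0.1, 0.9]),
}
query = np.array([0.8, 0.2, 0.1])  # toy embedding of a refund question

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Retrieve the most similar chunk and build an augmented prompt.
best = max(docs, key=lambda name: cosine(query, docs[name]))
prompt = f"Answer using this context:\n{best}\n\nQuestion: how do I get a refund?"
print(best)
```

The augmented prompt would then be sent to the generation model (e.g., via Bedrock), grounding its answer in the retrieved context.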

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Years of exp: 10 - 15 yrs
Location: Noida

Join us as Cloud Engineer Lead at Dailoqa, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as Cloud Engineer Lead, you should have experience with:
Cloud sourcing, networks, VMs, performance, scaling, availability, storage, security, and access management
Deep expertise in one or more cloud platforms: AWS, Azure, GCP
Strong experience in containerization and orchestration (Docker, Kubernetes, Helm)
Familiarity with CI/CD tools: GitHub Actions, Jenkins, Azure DevOps, ArgoCD, etc.
Proficiency in scripting languages (Python, Bash, PowerShell)
Knowledge of MLOps tools such as MLflow, Kubeflow, SageMaker, Vertex AI, or Azure ML
Strong understanding of DevOps principles applied to ML workflows

Key Responsibilities may include:
Design and implement scalable, cost-optimized, and secure infrastructure for AI-driven platforms.
Implement infrastructure as code using tools like Terraform, ARM, or CloudFormation.
Automate infrastructure provisioning, CI/CD pipelines, and model deployment workflows.
Ensure version control, repeatability, and compliance across all infrastructure components.
Set up monitoring, logging, and alerting frameworks using tools like Prometheus, Grafana, ELK, or Azure Monitor.
Optimize performance and resource utilization of AI workloads, including GPU-based training/inference.
Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
Understanding of model interpretability, responsible AI, and governance.
Contributions to open-source MLOps tools or communities.
Strong leadership, communication, and cross-functional collaboration skills.
Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.

Posted 1 month ago

Apply

7.0 - 12.0 years

22 - 25 Lacs

India

On-site

TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards, and monitor systems to ensure they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web-app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers), and familiarity with computer vision frameworks like OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express, etc.) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPCs, IAM, and TLS/SSL for secure communication.
15. Knowledge of API gateways, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.
Experience Required: Technical Architect with 7 - 12 years of experience
Salary: 22-25 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Location Type: In-person
Work Location: In person
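The Observer design pattern named in the skills list for this role can be sketched in a few lines: a subject notifies registered listeners on each event. The names (a pipeline emitting status events to audit and alerting hooks) are invented for the example.

```python
# Hedged sketch of the Observer pattern: a subject broadcasting events
# to subscribed callbacks. All names are illustrative.
class PipelineEvents:
    def __init__(self):
        self._observers = []

    def subscribe(self, callback):
        # Register a listener to be notified of every event.
        self._observers.append(callback)

    def emit(self, event):
        # Notify all listeners in subscription order.
        for cb in self._observers:
            cb(event)

log = []
events = PipelineEvents()
events.subscribe(lambda e: log.append(f"audit:{e}"))
events.subscribe(lambda e: log.append(f"alert:{e}"))
events.emit("model_deployed")
print(log)
```

The same decoupling shows up at architecture scale as event-driven designs, where the "observers" become queue consumers instead of in-process callbacks.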

Posted 1 month ago

Apply

5.0 years

5 - 6 Lacs

Bengaluru

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
Responsibilities:
- Research, design, develop, implement and test econometric, statistical, optimization and machine learning models.
- Design, write and test modules for Nielsen analytics platforms using Python, R, SQL and/or Spark.
- Utilize advanced computational/statistics libraries including Spark MLlib, Scikit-learn, SciPy, StatsModels.
- Collaborate with cross-functional Data Science, Product, and Technology teams to integrate best practices from across the organization.
- Provide leadership and guidance for the team in the adoption of new tools and technologies to improve our core capabilities.
- Execute and refine the roadmap to upgrade the modeling/forecasting/control functions of the team to improve upon the core service KPIs.
- Ensure product quality, stability, and scalability by facilitating code reviews and driving best practices like modular code, unit tests, and incorporating CI/CD workflows.
- Explain complex data science (e.g. model-related) concepts in simple terms to non-technical internal and external audiences.

Qualifications
Key Skills:
- 5+ years of professional work experience in Statistics, Data Science, and/or related disciplines, with focus on delivering analytics software solutions in a production environment.
- Strong programming skills in Python with experience in NumPy, Pandas, SciPy and Scikit-learn.
- Hands-on experience with deep learning frameworks (PyTorch, TensorFlow, Keras).
- Solid understanding of Machine Learning domains such as Computer Vision, Natural Language Processing and classical Machine Learning.
- Proficiency in SQL and NoSQL databases for large-scale data manipulation.
- Experience with cloud-based ML services (AWS SageMaker, Databricks, GCP AI, Azure ML).
- Knowledge of model deployment (FastAPI, Flask, TensorRT, ONNX), MLOps tools (MLflow, Kubeflow, Airflow) and containerization.
Preferred skills:
- Understanding of LLM fine-tuning, tokenization, embeddings, and multimodal learning.
- Familiarity with vector databases (FAISS, Pinecone) and retrieval-augmented generation (RAG).
- Familiarity with advertising intelligence, recommender systems, and ranking models.
- Knowledge of CI/CD for ML workflows, and software development best practices.

Additional Information
Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
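The Scikit-learn skills named in the key skills can be sketched briefly. This is a minimal pipeline on synthetic data; the dataset, feature count, and accuracy threshold are illustrative only, not Nielsen's actual models:

```python
# Minimal sketch: preprocessing and model kept together in one Pipeline,
# which simplifies the unit testing and CI/CD packaging mentioned above.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data (illustrative only).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_tr, y_tr)
print(f"holdout accuracy: {model.score(X_te, y_te):.2f}")
```

Bundling the scaler and classifier means the same object can be serialized, versioned (e.g., via MLflow), and served behind FastAPI or Flask without preprocessing drift between training and inference.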

Posted 1 month ago

Apply

5.0 years

5 - 9 Lacs

Bengaluru

On-site

Company Description Version 1 are a true global leader in business transformation. For nearly three decades, we have been strategically partnering with customers to go beyond expectations through the power of cutting-edge technology and expert teams. Our deep expertise in cloud, data and AI, application modernisation, and service delivery management has redefined businesses globally, helping shape the future for large public sector organisations and major global, private brands. We put users and user-centric design at the heart of everything we do, enabling our customers to exceed expectations for their customers. Our approach is underpinned by the Version 1 Strength in Balance model – a balanced focus across our customers, our people and a strong organisation. This model is guided by core values that are embedded in every aspect of what we do. Our customers’ need for transformation is our driving force. We enable them to accelerate their journey to their digital future with our deep expertise and innovative approach. Our global technology partners – Oracle, Microsoft, AWS, Red Hat, and Snowflake – help us tackle any challenge by leveraging a technology-driven approach. Our people unlock our potential. They immerse themselves into the world of our customers to truly understand the unique challenges they face. Our teams, made up of highly skilled, passionate individuals, act with agility and integrity. We continually invest in their development and foster a culture that encourages collaboration and innovation. This is a reflection of our Strength in Balance model, which emphasises a balanced focus on our customers, our people, and a strong organisation. Through our comprehensive range of Managed Service offerings, we take ownership of the tasks that distract Customers from what really matters; driving their business objectives and strategic initiatives. 
We enable them to save time, and reduce costs and risk, by continually improving their technology estates, ensuring they drive value for their business. Go beyond simply ‘keeping the lights on’ and embrace the potential of our ASPIRE Managed Services that place AI, continuous improvement and business innovation at the heart of everything we do. From operational maintenance through to optimisation, we are trusted managed service experts with a sustainable, value-led approach and a wealth of industry sector expertise and experience.

Job Description
Onsite role, India Delivery Centre / Belfast / Dublin
Full time position, 3-5 days per week in office (not shift)
Department: ASPIRE Managed Services
Practice: Services Reliability Group
Vetting Requirements: N/A
Role Summary: Our ASPIRE Global Service Centre is the central hub of our Service Management operations. Beyond a traditional Service Desk, it stands as the central authority and shared service delivery hub, orchestrating all operational workflows, processes, procedures, and tooling. It’s a core delivery component of the Version 1 ASPIRE Managed Services offering that places AI, continuous improvement and business innovation at the heart of everything Version 1 does. With a focus on supporting self-service and automation, we utilise the best digital capabilities of the ServiceNow ITSM tooling product to provide the very best experience to our Customers. We are seeking an experienced and results-driven AI and Automation Lead who will be responsible for driving the strategic implementation and operational excellence of automation and artificial intelligence initiatives for ASPIRE Managed Services. This role leads the identification, design, and deployment of intelligent automation solutions to improve operational efficiency and productivity, enhance decision making, scale operations and deliver a competitive advantage in the market.
Key Responsibilities:
- Develop and execute the ASPIRE Managed Services automation and AI strategy aligned with SRG and EA Practice goals
- Identify opportunities for AI and automation across all Managed Service functions, tooling and processes
- Champion a culture of innovation and continuous improvement through emerging technologies
- Lead end-to-end delivery of automation and AI projects, including planning, development, testing, deployment, and monitoring
- Establish governance frameworks and best practices for AI and automation initiatives
- Oversee the design and implementation of AI models, RPA (Robotic Process Automation), and intelligent workflows
- Ensure solutions are scalable, secure, and compliant with data privacy and ethical standards
- Evaluate and select appropriate tools, platforms, and vendors
- Collaborate with business units to understand pain points and co-create solutions
- Communicate complex technical concepts to non-technical stakeholders
- Monitor performance and continuously optimise solutions
- Delivery of measurable business value through automation and AI
- Development of internal capabilities and knowledge sharing across teams

Qualifications
Skills, Education & Qualifications:
- Proven experience (5+ years) leading automation and AI projects in a complex, multi-client or enterprise-scale managed services environment, with demonstrable delivery of measurable business outcomes
- Strong technical expertise in Artificial Intelligence and Machine Learning, including:
  - Supervised/unsupervised learning, deep learning, and natural language processing (NLP)
  - Model development using frameworks such as TensorFlow, PyTorch, or scikit-learn
  - Experience deploying AI models in production environments using MLOps principles (e.g., MLflow, Azure ML, SageMaker)
- Hands-on experience with automation and orchestration technologies, such as:
  - Robotic Process Automation (RPA) platforms: UiPath, Blue Prism, Automation Anywhere
  - IT process automation (ITPA) tools: ServiceNow Workflow/Orchestration, Microsoft Power Automate, Ansible, Terraform
  - Integration using APIs and event-driven architectures (e.g., Kafka, Azure Event Grid)
- Proficiency in cloud-native AI and automation services in one or more public cloud platforms:
  - Azure (Cognitive Services, Synapse, Logic Apps, Azure OpenAI)
  - AWS (SageMaker, Lambda, Textract, Step Functions)
  - GCP (Vertex AI, AutoML, Cloud Functions)
- Strong project delivery experience using modern methodologies:
  - Agile/Scrum and DevOps for iterative development and deployment
  - CI/CD pipeline integration for automation and ML model lifecycle management
  - Use of tools like Git, Jenkins, and Azure DevOps
- In-depth knowledge of data architecture, governance, and AI ethics, including:
  - Data privacy and security principles (e.g., GDPR, ISO 27001)
  - Responsible AI practices: bias detection, explainability (e.g., SHAP, LIME), model drift monitoring
- Excellent stakeholder engagement and communication skills, with the ability to:
  - Translate complex AI and automation concepts into business value
  - Influence cross-functional teams and executive leadership
  - Promote a culture of innovation, experimentation, and continuous learning
- Excellent leadership and team management skills
- Strong communication, interpersonal, and problem-solving abilities
- Strategic thinking and decision-making
- Adaptability to evolving technologies and processes
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience

Additional Information
At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability.
One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivized certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees. We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.
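The model drift monitoring mentioned in the qualifications above is often approached with a simple statistic such as the Population Stability Index (PSI). A hedged sketch follows; the distributions, bin count, and thresholds are illustrative only:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.
    Common rule of thumb (illustrative): < 0.1 stable, > 0.25 significant drift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Widen the outer edges so every observed value falls in some bin.
    edges[0] = min(expected.min(), actual.min()) - 1e-9
    edges[-1] = max(expected.max(), actual.max()) + 1e-9
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0, 1, 10_000)   # training-time feature distribution
stable = rng.normal(0, 1, 10_000)     # production sample, no drift
shifted = rng.normal(0.5, 1, 10_000)  # production sample with a mean shift
print(f"stable PSI:  {psi(baseline, stable):.3f}")
print(f"shifted PSI: {psi(baseline, shifted):.3f}")
```

In practice a score like this would be computed per feature on a schedule and wired into the same alerting path as the rest of the managed-service monitoring.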

Posted 1 month ago

Apply

2.0 years

5 - 9 Lacs

Bengaluru

On-site

Company Description Version 1 are a true global leader in business transformation. For nearly three decades, we have been strategically partnering with customers to go beyond expectations through the power of cutting-edge technology and expert teams. Our deep expertise in cloud, data and AI, application modernisation, and service delivery management has redefined businesses globally, helping shape the future for large public sector organisations and major global, private brands. We put users and user-centric design at the heart of everything we do, enabling our customers to exceed expectations for their customers. Our approach is underpinned by the Version 1 Strength in Balance model – a balanced focus across our customers, our people and a strong organisation. This model is guided by core values that are embedded in every aspect of what we do. Our customers’ need for transformation is our driving force. We enable them to accelerate their journey to their digital future with our deep expertise and innovative approach. Our global technology partners – Oracle, Microsoft, AWS, Red Hat, and Snowflake – help us tackle any challenge by leveraging a technology-driven approach. Our people unlock our potential. They immerse themselves into the world of our customers to truly understand the unique challenges they face. Our teams, made up of highly skilled, passionate individuals, act with agility and integrity. We continually invest in their development and foster a culture that encourages collaboration and innovation. This is a reflection of our Strength in Balance model, which emphasises a balanced focus on our customers, our people, and a strong organisation. Through our comprehensive range of Managed Service offerings, we take ownership of the tasks that distract Customers from what really matters; driving their business objectives and strategic initiatives. 
We enable them to save time, and reduce costs and risk, by continually improving their technology estates, ensuring they drive value for their business. Go beyond simply ‘keeping the lights on’ and embrace the potential of our ASPIRE Managed Services that place AI, continuous improvement and business innovation at the heart of everything we do. From operational maintenance through to optimisation, we are trusted managed service experts with a sustainable, value-led approach and a wealth of industry sector expertise and experience.

Job Description
Onsite role, India Delivery Centre / Belfast / Dublin
Full time position, 3-5 days per week in office (not shift)
Department: ASPIRE Managed Services
Practice: Services Reliability Group
Vetting Requirements: N/A
Role Summary: Our ASPIRE Global Service Centre is the central hub of our Service Management operations. Beyond a traditional Service Desk, it stands as the central authority and shared service delivery hub, orchestrating all operational workflows, processes, procedures, and tooling. It’s a core delivery component of the Version 1 ASPIRE Managed Services offering that places AI, continuous improvement and business innovation at the heart of everything Version 1 does. With a focus on supporting self-service and automation, we utilise the best digital capabilities of the ServiceNow ITSM tooling product to provide the very best experience to our Customers. We are seeking an experienced and results-driven AI and Automation Technician who will be responsible for the delivery and ongoing management of automation and artificial intelligence initiatives for ASPIRE Managed Services. This role will primarily be responsible for the design and deployment of intelligent automation solutions to improve operational efficiency and productivity, enhance decision making, scale operations and deliver a competitive advantage in the market.
Key Responsibilities:
- Identify opportunities for AI and automation across all Managed Service functions, tooling and processes
- Delivery and technical implementation of automation and AI projects, including development, testing, deployment, and monitoring
- Ensure solutions are scalable, secure, and compliant with data privacy and ethical standards
- Evaluate and select appropriate tools, platforms, and vendors
- Collaborate with business units to understand pain points and co-create solutions
- Monitor performance and continuously optimise solutions
- Development of internal capabilities and knowledge sharing across teams

Qualifications
Skills, Education & Qualifications:
- Proven experience (2+ years) delivering automation and AI projects in a complex, multi-client or enterprise-scale managed services environment
- Technical expertise in Artificial Intelligence and Machine Learning, including:
  - Supervised/unsupervised learning, deep learning, and natural language processing (NLP)
  - Model development using frameworks such as TensorFlow, PyTorch, or scikit-learn
  - Experience deploying AI models in production environments using MLOps principles (e.g., MLflow, Azure ML, SageMaker)
- Hands-on experience with automation and orchestration technologies, such as:
  - Robotic Process Automation (RPA) platforms: UiPath, Blue Prism, Automation Anywhere
  - IT process automation (ITPA) tools: ServiceNow Workflow/Orchestration, Microsoft Power Automate, Ansible, Terraform
  - Integration using APIs and event-driven architectures (e.g., Kafka, Azure Event Grid)
- Proficiency in cloud-native AI and automation services in one or more public cloud platforms:
  - Azure (Cognitive Services, Synapse, Logic Apps, Azure OpenAI)
  - AWS (SageMaker, Lambda, Textract, Step Functions)
  - GCP (Vertex AI, AutoML, Cloud Functions)
- Good knowledge of data architecture, governance, and AI ethics
- Excellent stakeholder engagement and communication skills, with the ability to:
  - Translate complex AI and automation concepts into business value
  - Promote a culture of innovation, experimentation, and continuous learning
- Strong communication, interpersonal, and problem-solving abilities
- Adaptability to evolving technologies and processes
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience

Additional Information
At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability. One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivized certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees.
We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.
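The event-driven integration named in the qualifications above (Kafka, Azure Event Grid) can be illustrated with a tiny in-memory publish/subscribe sketch. This is a teaching stand-in, not any specific broker's API; the topic and event names are invented:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """Tiny in-memory stand-in for a broker such as Kafka or Azure Event Grid:
    producers emit named events, subscribers react without knowing each other."""
    def __init__(self) -> None:
        self._handlers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._handlers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver to every subscriber in registration order.
        for handler in self._handlers[topic]:
            handler(event)

bus = EventBus()
audit_log: list = []
# Two independent consumers of the same hypothetical ITSM event:
bus.subscribe("ticket.created", lambda e: audit_log.append(f"route {e['id']}"))
bus.subscribe("ticket.created", lambda e: audit_log.append(f"notify {e['id']}"))
bus.publish("ticket.created", {"id": "INC-1"})
print(audit_log)  # → ['route INC-1', 'notify INC-1']
```

The decoupling shown here is the property that matters at scale: new automations subscribe to existing events without changes to the producers.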

Posted 1 month ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are looking for a skilled and proactive Full Stack Developer to join our AWAC team, supporting the development and integration of Generative AI applications. In this role, you will collaborate with GenAI engineers, DevOps, and QA teams to build scalable, user-facing applications that incorporate LLM capabilities and ensure seamless deployment and testing across environments.

Key Responsibilities
- Design and develop end-to-end web applications that integrate with Generative AI services and APIs (e.g., Bedrock, SageMaker, LangChain, vector DBs)
- Collaborate with GenAI developers to translate AI models and PoCs into production-ready applications
- Build intuitive frontends and robust backends using modern frameworks (e.g., React/Angular + Node.js/Java/Python)
- Develop and manage RESTful APIs, microservices, and serverless functions to support AI workflows
- Work closely with DevOps teams to containerize, deploy, and monitor applications using tools like Docker, Kubernetes, Terraform, and CI/CD pipelines
- Coordinate with QA teams to define and execute end-to-end testing, including integration, performance, and user acceptance testing
- Ensure application scalability, security, and performance optimization
- Maintain documentation for architecture, APIs, and deployment processes

Required Skills & Technologies
- 4 to 6 years of experience in full stack development
- Proficiency in JavaScript/TypeScript, React or Angular, and backend technologies like Node.js, Python, or Java
- Experience integrating with cloud-based AI/ML services, especially on AWS (e.g., Bedrock, SageMaker, Lambda)
- Strong understanding of REST APIs, microservices architecture, and serverless computing
- Familiarity with DevOps practices: Docker, Kubernetes (EKS/ECS), Git, CI/CD, Terraform
- Experience with automated testing frameworks and end-to-end testing strategies
- Ability to work in cross-functional teams and manage multiple integration points
Preferred Qualifications
- Experience working with Generative AI applications or integrating with LLM APIs
- Exposure to LangChain, vector DBs (e.g., OpenSearch), and prompt engineering concepts
- Bachelor's degree in Computer Science, Engineering, or a related field
- Familiarity with Agile/Scrum methodologies
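The retrieval side of the RAG stack mentioned above reduces to nearest-neighbour search over embeddings. A toy sketch with hand-made vectors follows; a real system would use an embedding model and a vector DB such as OpenSearch, and the documents here are invented:

```python
import numpy as np

# Toy "embeddings": in production these come from an embedding model
# and live in a vector database; here they are hand-made 3-d vectors.
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "api rate limits": np.array([0.1, 0.9, 0.2]),
    "deployment guide": np.array([0.0, 0.2, 0.9]),
}

def retrieve(query_vec: np.ndarray, k: int = 1) -> list:
    """Return the k docs most cosine-similar to the query —
    the retrieval half of retrieval-augmented generation."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(docs, key=lambda d: cos(query_vec, docs[d]), reverse=True)
    return ranked[:k]

query = np.array([0.85, 0.15, 0.05])  # e.g. "how do refunds work?" embedded
print(retrieve(query))  # → ['refund policy']
```

The retrieved text would then be stuffed into the LLM prompt; frameworks like LangChain wrap exactly this loop with chunking, caching, and prompt templates.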

Posted 1 month ago

Apply

1.0 years

0 Lacs

India

Remote

🚀 Job Title: AI Engineer (Full Stack / Model Deployment Specialist)
Location: Remote (India preferred)
Type: Full-Time (6-Month Fixed Contract)
Experience Level: 1+ Years
Salary: Competitive (based on experience)
Potential: High-performing candidates may be offered a permanent role after the contract

🧩 About Us
We are a dynamic collaboration between Funding Bay, Effer Ventures, and FBX Capital Partners, three industry leaders combining forces to deliver financial, compliance, and strategic growth solutions to businesses across the UK. We’re looking for an AI Engineer who can bridge the gap between machine learning and production-ready applications. If you love optimizing models, deploying them in real-world environments, and know your way around modern web stacks, this role is for you.

🔧 What You’ll Do
- End-to-End Ownership of ML Models: from training and evaluation to optimization and deployment
- Deploy ML Models using AWS services (EC2, Lambda, S3, SageMaker, or custom Docker setups)
- Optimize Model Performance: ensure fast inference, low memory usage, and high-quality results
- Integrate AI into MERN Stack Applications: build APIs and interfaces to expose your models to the frontend
- Collaborate Cross-Functionally with frontend, product, and design teams
- Build scalable and secure pipelines for data ingestion, model serving, and monitoring
- Optimize for Speed & Usability: ensure both backend inference and frontend UI are responsive and seamless

✅ What We’re Looking For
- Proficient in the MERN Stack (MongoDB, Express.js, React, Node.js)
- Strong Python skills, especially for AI/ML (NumPy, Pandas, scikit-learn, TensorFlow or PyTorch, etc.)
- Hands-on with Model Optimization: quantization, pruning, distillation, or ONNX deployment is a plus
- Solid AWS Experience: EC2, S3, IAM, Lambda, API Gateway, CloudWatch, etc.
- Experience with Docker & CI/CD pipelines (e.g., GitHub Actions, Jenkins)
- Comfortable building and consuming REST/GraphQL APIs
- Familiar with ML deployment tools like FastAPI, Flask, TorchServe, or SageMaker endpoints
- Good understanding of performance profiling, logging, and model monitoring

⭐ Nice to Have
- Experience with LangChain, LLMs, or NLP pipelines
- Startup or fast-paced team background
- Open-source contributions or live-deployed AI projects

🌱 Why Join Us?
- Build & deploy real AI products that go live
- Work in a growth-focused, high-ownership environment
- 6-month contract with the potential for a permanent full-time role
- Flexible work culture & flat hierarchy
- Learn fast and build faster with founders and builders
- Take ownership of core parts of the AI stack
- Competitive compensation based on experience

📬 To Apply
Send us:
- Your resume
- A link to your GitHub or portfolio
- A short paragraph about a project where you deployed an optimized AI model

📧 Email: developer@fundingbay.co.uk or apply directly
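The model-optimization techniques listed above can be sketched in a few lines, quantization being the simplest. This shows only the basic affine int8 scheme on a synthetic weight tensor; production toolchains such as ONNX Runtime or TensorRT do considerably more (activation calibration, op fusion):

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Post-training affine quantization of a float32 tensor to int8.
    Maps [w.min(), w.max()] onto [-128, 127] with a scale and zero point."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-128 - w.min() / scale)
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: float) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.5, size=(64, 64)).astype(np.float32)  # synthetic weights
q, s, zp = quantize_int8(w)
err = np.abs(w - dequantize(q, s, zp)).max()
print(f"4x smaller storage, max round-trip error: {err:.4f}")
```

The trade encoded here — 4x less memory for a bounded per-weight error of about one scale step — is why quantization is usually the first optimization tried for fast, low-memory inference.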

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Cloud Engineer – AWS, CI/CD & Infrastructure Automation
Department: Information Technology / Research Computing
Location: Bangalore/Kochi
Shift: Need clarity
Job Type: Full-Time
Reports To: Director of IT Infrastructure / Head of Research Computing

Position Summary:
DBiz.ai is seeking a dedicated and technically proficient Cloud Engineer to support our growing cloud infrastructure needs across academic, research, and administrative domains. The ideal candidate will have strong experience with AWS core services, CI/CD pipelines using GitHub, and Infrastructure as Code (IaC) to help modernize and automate our cloud environments.

Key Responsibilities:
- Design, implement, and manage AWS-based cloud infrastructure to support academic and research computing needs
- Develop and maintain CI/CD pipelines for deploying applications and services using GitHub Actions or similar tools
- Automate infrastructure provisioning and configuration using IaC tools such as Terraform or AWS CloudFormation
- Design and implement solutions using AWS Machine Learning (SageMaker, Bedrock), data analytics (Redshift), and data processing tools (Glue, Step Functions) to support automation and intelligent decision-making
- Collaborate with faculty, researchers, and IT staff to support cloud-based research workflows and data pipelines
- Ensure cloud environments are secure, scalable, and cost-effective
- Monitor system performance and troubleshoot issues related to cloud infrastructure and deployments
- Document cloud architecture, workflows, and best practices for internal knowledge sharing and compliance

Required Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- Strong experience with AWS core services (EC2, S3, IAM, VPC, Lambda, CloudWatch, etc.)
- Proficiency in GitHub and building CI/CD pipelines
- Hands-on experience with Infrastructure as Code tools (Terraform, CloudFormation, etc.)
- Familiarity with scripting languages (e.g., Python, Bash)
- Exposure to AWS Machine Learning services (e.g., SageMaker, Bedrock), data analytics tools (e.g., Redshift), and data processing and orchestration services (e.g., Glue, Step Functions)
- Strong understanding of cloud security, networking, and architecture principles
- Excellent communication and collaboration skills, especially in academic or research settings

Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Solutions Architect – Associate)
- Experience supporting research computing environments or academic IT infrastructure
- Familiarity with containerization (Docker, Kubernetes) and hybrid cloud environments
- Experience working in a university or public sector environment
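Where CloudFormation is the chosen IaC tool, templates are often generated or sanity-checked from a Python script in CI before deployment. A minimal, hypothetical sketch; the bucket name, logical ID, and tags are invented, and a real pipeline would also add encryption and policies and run `aws cloudformation validate-template`:

```python
import json

def s3_bucket_template(bucket_name: str) -> str:
    """Emit a minimal CloudFormation template (JSON) for one S3 bucket.
    Illustrative only: resource name and tags are made up for the example."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "ResearchDataBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "Tags": [{"Key": "env", "Value": "research"}],
                },
            }
        },
    }
    return json.dumps(template, indent=2)

print(s3_bucket_template("dbiz-research-data"))
```

Generating templates from code like this keeps the infrastructure definition reviewable in Git and testable in the same CI/CD pipeline as the applications it supports.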

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

