3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Asset Wealth Management & RiverSource Operations Group is seeking a Senior Business Systems Analyst / Lead Business Systems Analyst (Individual Contributor) who can take the lead on critical Business Intelligence solutions. The analyst should be fully capable of delivering on the design, development, documentation, testing, and modification of existing and new Business Intelligence solutions, championing standard techniques, procedures, and criteria; participating in architecture design and performance monitoring; and communicating well to relate to the business and provide technical expertise.

Responsibilities

Technology Delivery: Participate in end-to-end requirement gathering and convert the results into actionable items. Translate business and user requirements into system requirements/design for the technology organization and manage delivery within budget, scope, and planned schedule. Partner with the business and project team to gather requirements and fully understand project goals, then use this information to effectively plan and lead the development process. Apply effective decision-making and analytical skills to bridge knowledge gaps, build credibility, trigger conversations, and ultimately create long-term growth opportunities for the business. Proactively review plans and execute corrective action in response to production support issues as required.

Review & Documentation: Review processes to ensure development work adheres to standards and specifications, including peer review as well as code review external to the development team.

Test & Execute: Ensure solutions are effectively tested prior to being released to production. Respond to all inquiries and issues in a timely manner as BI solutions move through the testing process and into production.

Required Qualifications

Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
Strong understanding of object-oriented programming (OOP) concepts. 3 to 5 years of extensive experience in related data engineering (SAS, Python, T-SQL, AWS Athena, AWS SageMaker, PowerShell). Import, clean, transform, and validate data with the purpose of understanding it or drawing conclusions from it for decision making. Ability to write and execute complex queries in Python and SQL against relational databases (MS SQL). Assess existing SAS code and migrate it to Python. Develop and implement migration strategies and frameworks. Create and maintain documentation for migration processes and procedures. Troubleshoot migration-related issues. Present and frame business scenarios in ways that are meaningful and depict findings in an easy-to-understand manner. Good verbal and written communication skills. Strong quantitative aptitude. Participate in end-to-end requirement gathering and convert actionable items into solutions. Collaborate effectively with cross-functional teams to ensure successful project execution. Participate in code review and integration testing to ensure code quality. Monitor the automated preparation of scheduled reports daily, troubleshoot as needed, and fix issues to ensure timely completion. Experience with Agile methodology.

Preferred Qualifications

Certification in Base/Advanced SAS would be an added advantage. AWS Certified Data Engineer certification would be an added advantage.

About Our Company

Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning, and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work.
You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven, and want to work for a strong, ethical company that cares, take the next step and create a career at Ameriprise India LLP.

Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status, or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: 2:00p-10:30p
India Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Business Support & Operations
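The SAS-to-Python migration work described in the qualifications above can be illustrated with a minimal, hypothetical sketch: a simple SAS DATA step (a WHERE filter plus a derived column) rewritten in plain Python. The dataset, column names, threshold, and fee rule are all invented for illustration and are not from the posting.

```python
# Hypothetical example: a simple SAS DATA step migrated to plain Python.
# Assumed SAS original:
#   data work.high_value;
#     set work.accounts;
#     where balance > 10000;
#     fee = balance * 0.01;
#   run;

def migrate_high_value(rows, threshold=10000.0, fee_rate=0.01):
    """Keep rows whose balance exceeds threshold and add a derived fee column."""
    result = []
    for row in rows:
        if row["balance"] > threshold:
            new_row = dict(row)  # copy, since SET produces a new dataset
            new_row["fee"] = round(new_row["balance"] * fee_rate, 2)
            result.append(new_row)
    return result

accounts = [
    {"account_id": "A1", "balance": 25000.0},
    {"account_id": "A2", "balance": 8000.0},
    {"account_id": "A3", "balance": 15000.0},
]
high_value = migrate_high_value(accounts)
print(high_value)  # A1 and A3 survive the filter, each with a fee column
```

In a real migration the row source would be a database query or CSV export rather than an inline list, and results would be validated against the SAS output before cutover.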
Posted 1 week ago
8.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
Software Development (Tech Lead)
Location: Bangalore, India
Experience: 8+ years | Work from Office

Company Overview: Omnicom Global Solutions (OGS) is an integral part of Omnicom Group, a leading global marketing and corporate communications company. Omnicom's branded networks and numerous specialty firms provide advertising and communications services to over 5,000 clients in more than 70 countries. Let us build this together!

Flywheel operates a leading cloud-based digital commerce platform across the world's major digital marketplaces. It enables our clients to access near real-time performance measurement and improve sales, share, and profit. Through our expertise, scale, global reach, and highly sophisticated AI and data-powered solutions, we provide differentiated value for both the world's largest consumer product companies and fast-growing brands.

Job Description: We are seeking an experienced and dynamic Software Development Lead to drive end-to-end development, architecture, and team leadership in a fast-paced ecommerce environment. The ideal candidate combines deep technical expertise with strong people leadership and a strategic mindset. You’ll lead a cross-functional team building scalable, performant, and reliable systems that impact millions of users globally.

Roles and Responsibilities:

Technical Leadership & Architecture
- Make high-impact architectural decisions and lead the design of large-scale systems.
- Guide the team in leveraging AWS/cloud infrastructure and scalable platform components.
- Lead implementation of performance, scalability, and security non-functional requirements (NFRs).
- Design and implement engineering metrics that demonstrate improvement in team velocity and delivery.
- Oversee AI/ML system integrations and support production deployment of machine learning models.

Engineering Execution
- Own release quality and delivery timelines; unblock teams and anticipate risks.
- Balance technical debt with roadmap delivery and foster a culture of ownership and excellence.
- Support the CI/CD framework and define operational readiness, including alerts, monitoring, and rollback plans.
- Collaborate with Data Science/ML teams to deploy, monitor, and scale intelligent features (e.g., personalization, predictions, anomaly detection).

People Leadership & Mentorship
- Mentor and grow a high-performing engineering team through feedback, coaching, and hands-on guidance.
- Drive onboarding and succession planning in alignment with long-term team strategy.
- Evaluate performance and create career growth plans for direct reports.

Cross-Functional Collaboration
- Represent engineering in product reviews and planning forums with PMs, QA, and Design.
- Communicate technical vision, delivery risks, and trade-offs with business and technical stakeholders.
- Work with Product and Business Leaders to align team output with organizational goals.

Project Management & Delivery
- Lead planning, estimation, and execution of complex product features or platform initiatives.
- Manage competing priorities, refine team capacity, and ensure timely and reliable feature rollout.
- Provide visibility into team performance through clear reporting and delivery metrics.

Culture & Continuous Improvement
- Lead by example in fostering inclusion, feedback, and a growth-oriented team culture.
- Promote a DevOps mindset: reliability, ownership, automation, and self-service.
- Identify AI/ML opportunities within the platform and work with Product to operationalize them.

This may be the right role for you if you have:
- 8+ years of experience in software engineering, with at least 3 years in technical leadership or management roles.
- Strong backend expertise in Java or Python; hands-on experience with Spring Boot, Django, or Flask.
- Deep understanding of cloud architectures (AWS, GCP) and system design for scale.
- Strong knowledge of frontend frameworks (React/AngularJS) and building web-based SaaS products.
- Proven ability to guide large systems design, service decomposition, and integration strategies.
- Experience in applying ML/AI algorithms in production settings (recommendation engines, ranking models, NLP).
- Familiarity with ML lifecycle tooling such as MLflow, Vertex AI, or SageMaker is a plus.
- Proficiency in CI/CD practices, infrastructure-as-code, Git workflows, and monitoring tools.
- Comfort with Agile development practices and project management tools like JIRA.
- Excellent analytical and problem-solving skills; capable of navigating ambiguity.
- Proven leadership in mentoring and team culture development.

Desired Skills
- Experience in ecommerce, digital advertising, or performance marketing domains.
- Exposure to data engineering pipelines or real-time data processing (Kafka, Spark, Airflow).
- Agile or Scrum certification.
- Demonstrated success in delivering high-scale, distributed software platforms.

What Will Set You Apart:
- You’re a system thinker who can break down complex challenges and design for resilience.
- You proactively support cross-team success and remove friction for others.
- You build high-performing teams through mentoring, clear expectations, and shared ownership.
- You champion technical quality and foster a team that thrives on accountability and continuous learning.
Posted 1 week ago
0 years
0 Lacs
Tamil Nadu, India
On-site
We are looking for a seasoned Senior MLOps Engineer to join our Data Science team. The ideal candidate will have a strong background in Python development, machine learning operations, and cloud technologies. You will be responsible for operationalizing ML/DL models and managing the end-to-end machine learning lifecycle, from model development to deployment and monitoring, while ensuring high-quality and scalable solutions.

Mandatory Skills:

Python Programming:
- Expert in OOP concepts and testing frameworks (e.g., PyTest)
- Strong experience with ML/DL libraries (e.g., Scikit-learn, TensorFlow, PyTorch, Prophet, NumPy, Pandas)

MLOps & DevOps:
- Proven experience in executing data science projects with MLOps implementation
- CI/CD pipeline design and implementation
- Docker (mandatory)
- Experience with ML lifecycle tracking tools such as MLflow, Weights & Biases (W&B), or cloud-based ML monitoring tools
- Experience with version control (Git) and infrastructure-as-code (Terraform or CloudFormation)
- Familiarity with code linting, test coverage, and quality tools such as SonarQube

Cloud & Orchestration:
- Hands-on experience with AWS SageMaker or GCP Vertex AI
- Proficiency with orchestration tools like Apache Airflow or Astronomer
- Strong understanding of cloud technologies (AWS or GCP)

Software Engineering:
- Experience in building backend APIs using Flask, FastAPI, or Django
- Familiarity with distributed systems for model training and inference
- Experience working with feature stores
- Deep understanding of the ML/DL lifecycle, from ideation and experimentation through deployment to model sunsetting
- Understanding of software development best practices, including automated testing and CI/CD integration

Agile Practices:
- Proficient in working within a Scrum/Agile environment using tools like JIRA

Cross-Functional Collaboration:
- Ability to collaborate effectively with product managers, domain experts, and business stakeholders to align ML initiatives with business goals

Preferred Skills:
- Experience building ML solutions for any one of: Sales Forecasting, Marketing Mix Modelling, Demand Forecasting
- Certified in machine learning or cloud platforms (e.g., AWS or GCP)
- Strong communication and documentation skills
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Role Title: AI Platform Engineer
Location: Bangalore (in person in office when required)
Part of the GenAI COE Team

Key Responsibilities

Platform Development and Evangelism:
- Build scalable, customer-facing AI platforms.
- Evangelize the platform with customers and internal stakeholders.
- Ensure platform scalability, reliability, and performance to meet business needs.

Machine Learning Pipeline Design:
- Design ML pipelines for experiment management, model management, feature management, and model retraining.
- Implement A/B testing of models.
- Design APIs for model inferencing at scale.
- Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.

LLM Serving and GPU Architecture:
- Serve as an SME in LLM serving paradigms.
- Possess deep knowledge of GPU architectures.
- Expertise in distributed training and serving of large language models.
- Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.

Model Fine-Tuning and Optimization:
- Demonstrate proven expertise in model fine-tuning and optimization techniques.
- Achieve better latencies and accuracies in model results.
- Reduce training and resource requirements for fine-tuning LLM and LVM models.

LLM Models and Use Cases:
- Extensive knowledge of different LLM models.
- Provide insights on the applicability of each model based on use cases.
- Proven experience in delivering end-to-end solutions, from engineering to production, for specific customer use cases.

DevOps and LLMOps Proficiency:
- Proven expertise in DevOps and LLMOps practices.
- Knowledgeable in Kubernetes, Docker, and container orchestration.
- Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.

Skill Matrix

LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
Databases/Data Warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
Cloud Knowledge: AWS/Azure/GCP
DevOps (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
Cloud Certifications (Bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
Languages: Proficient in Python, SQL, JavaScript

Email: diksha.singh@aptita.com
Posted 1 week ago
4.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: AI/ML Engineer – Python
Experience Required: 4 to 5 Years
Location: Office
Job Type: Full-Time

Job Summary: We are looking for a talented and experienced AI/ML Engineer with strong proficiency in Python to join our team. The ideal candidate will have 4–5 years of hands-on experience in developing machine learning models, implementing AI solutions, and deploying them into production environments.

Key Responsibilities:
- Design, develop, and deploy machine learning and deep learning models using Python.
- Work with large datasets to perform data cleaning, preprocessing, and feature engineering.
- Collaborate with cross-functional teams to define use cases, build prototypes, and deliver scalable ML solutions.
- Evaluate and tune model performance using techniques like cross-validation and hyperparameter tuning.
- Use libraries like scikit-learn, TensorFlow, Keras, PyTorch, Pandas, NumPy, and OpenCV.
- Deploy ML models using APIs, Docker, cloud platforms (AWS, GCP, Azure), or MLOps pipelines.
- Stay updated on the latest AI/ML research and apply relevant innovations to solve business problems.
- Document model workflows, performance metrics, and code for reproducibility and compliance.

Required Skills & Qualifications:
- Bachelor's/Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 4–5 years of experience in AI/ML development using Python.
- Solid understanding of ML algorithms (classification, regression, clustering, NLP, computer vision).
- Experience with model deployment and productionizing AI solutions.
- Familiarity with cloud platforms (AWS SageMaker, Azure ML, or GCP AI Platform) is a plus.
- Strong analytical, problem-solving, and programming skills.
- Ability to interpret business needs and translate them into technical solutions.

Preferred Tools & Frameworks:
- Python, scikit-learn, TensorFlow, Keras, PyTorch
- Pandas, NumPy, Matplotlib, Seaborn
- Flask/FastAPI for deploying models
- MLflow, Docker, Git, Jupyter Notebooks
- Exposure to NLP, Computer Vision, or Reinforcement Learning is a plus

Certifications (Optional but Preferred):
- TensorFlow Developer Certificate
- AWS Certified Machine Learning – Specialty
- Google Professional ML Engineer
- Microsoft Certified: Azure AI Engineer Associate
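Cross-validation, one of the evaluation techniques listed in this posting, can be sketched without any ML library. The following is a minimal, illustrative k-fold splitter in plain Python; it is not the scikit-learn API, and the fold-assignment scheme (round-robin over shuffled indices) is one of several reasonable choices:

```python
import random

def k_fold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and return (train, validation) index lists for k folds."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)          # deterministic shuffle for reproducibility
    folds = [indices[i::k] for i in range(k)]     # round-robin assignment to k folds
    splits = []
    for fold in folds:
        validation = set(fold)
        train = [i for i in indices if i not in validation]
        splits.append((train, sorted(fold)))
    return splits

splits = k_fold_indices(10, k=5)
for train, val in splits:
    print(len(train), "train /", len(val), "validation")  # 8 train / 2 validation per fold
```

Each sample appears in exactly one validation fold, so averaging a model's score across the k folds gives a lower-variance performance estimate than a single train/test split.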
Posted 1 week ago
7.0 years
0 Lacs
Bharuch, Gujarat
On-site
Role: Sr Data Scientist – Digital & Analytics
Experience: 7+ Years | Industry: exposure to manufacturing, energy, supply chain, or similar
Location: On-Site @ Bharuch, Gujarat (6 days/week, Mon–Sat working)
Perks: Work with the client directly & monthly remuneration for lodging

Mandatory Skills: Experience in full-scale implementation from requirement gathering through project delivery (end to end); EDA; ML techniques (supervised and unsupervised); Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.); cloud ML tooling (Azure ML, AWS SageMaker, etc.); plant control systems (DCS, SCADA, OPC UA); historian databases (PI, Aspen IP.21) and time-series data; optimization models (LP, MILP, MINLP).

We are seeking a highly capable and hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact.

Responsibilities:

1. Data Science Solution Development
• Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization.
• Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised).
• Translate physical and chemical process knowledge into mathematical features or constraints in models.
• Deploy models into production environments (on-prem or cloud) with high robustness and monitoring.

2. Team Leadership & Management
• Lead a compact data science pod (2–3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns.
• Own the entire data science lifecycle: problem framing, model development and validation, deployment, monitoring, and retraining protocols.

3. Stakeholder Engagement & Collaboration
• Work directly with Process Engineers, Plant Operators, DCS system owners, and Business Heads to identify pain points and convert them into use cases.
• Collaborate with Data Engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable.
• Act as a translator between manufacturing business units and technical teams to ensure alignment and impact.

4. Solution Ownership & Documentation
• Independently manage and maintain use cases through versioned model management, robust documentation, and logging.
• Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts.

Required Skills:
1. 7+ years of experience in Data Science roles, with a strong portfolio of deployed use cases in manufacturing, energy, or process industries.
2. Proven track record of end-to-end model delivery (from data prep to business value realization).
3. Master’s or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline.
4. Expertise in Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.) and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.).
5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data.
6. Experience in developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus.

Job Types: Full-time, Contractual / Temporary
Contract length: 6–12 months
Pay: Up to ₹200,000.00 per month
Work Location: In person
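As a small illustration of the historian/time-series skills this posting asks for, here is a hedged sketch of trailing rolling-mean smoothing over (timestamp, value) samples, the kind of preprocessing often applied to PI or IP.21 tag exports before modeling. The tag values and window size below are invented for the example:

```python
from collections import deque

def rolling_mean(samples, window=3):
    """Trailing rolling mean over (timestamp, value) pairs from a historian export."""
    buffer = deque(maxlen=window)        # keeps only the last `window` values
    smoothed = []
    for timestamp, value in samples:
        buffer.append(value)
        smoothed.append((timestamp, sum(buffer) / len(buffer)))
    return smoothed

# Hypothetical temperature tag sampled once a minute
raw = [(0, 100.0), (1, 104.0), (2, 98.0), (3, 102.0)]
print(rolling_mean(raw, window=2))
# → [(0, 100.0), (1, 102.0), (2, 101.0), (3, 100.0)]
```

In practice historian data also needs gap handling, bad-quality-flag filtering, and resampling to a uniform grid; libraries like pandas provide these operations directly.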
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
At our organization, we prioritize people and are dedicated to providing cutting-edge AI solutions with integrity and passion. We are currently seeking a Senior AI Developer who is proficient in AI model development, Python, AWS, and scalable tool-building. In this role, you will play a key part in designing and implementing AI-driven solutions, developing AI-powered tools and frameworks, and integrating them into enterprise environments, including mainframe systems.

Your responsibilities will include developing and deploying AI models using Python and AWS for enterprise applications, building scalable AI-powered tools, designing and optimizing machine learning pipelines, implementing NLP and GenAI models, developing Retrieval-Augmented Generation (RAG) systems, maintaining AI frameworks and APIs, architecting cloud-based AI solutions using AWS services, writing high-performance Python code, and ensuring the scalability, security, and performance of AI solutions in production.

To qualify for this role, you should have at least 5 years of experience in AI/ML development; expertise in Python and AWS; a strong background in machine learning and deep learning; experience with LLMs, NLP, and RAG systems; hands-on experience building and deploying AI models; proficiency in cloud-based AI solutions; experience developing AI-powered tools and frameworks; knowledge of mainframe integration and enterprise AI applications; and strong coding skills with a focus on software development best practices. Preferred qualifications include familiarity with MLOps, CI/CD pipelines, and model monitoring; a background in developing AI-based enterprise tools and automation; and experience with vector databases and AI-powered search technologies.

You will benefit from health insurance, accident insurance, and a competitive salary based on various factors including location, education, qualifications, experience, technical skills, and business needs. You will also be expected to actively participate in monthly team meetings, team-building efforts, technical discussions, and peer reviews; contribute to the OP Wiki/Knowledge Base; and provide status reports to OP Account Management as required.

OP is a technology consulting and solutions company that offers advisory and managed services, innovative platforms, and staffing solutions across fields such as AI, cybersecurity, and enterprise architecture. Our team is comprised of dynamic, creative thinkers who are dedicated to delivering quality work. As a member of the OP team, you will have access to industry-leading consulting practices, strategies, technologies, and innovative training and education. We are looking for a technology leader with a strong track record of technical excellence and a focus on process and methodology.
Posted 1 week ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Associate Project Manager – AI/ML
Experience: 8+ years (including 3+ years in project management)
Notice Period: Immediate to 15 days
Location: Coimbatore / Chennai

🔍 Job Summary
We are seeking experienced Associate Project Managers with a strong foundation in AI/ML project delivery. The ideal candidate will have a proven track record of managing cross-functional teams, delivering complex software projects, and driving AI/ML initiatives from conception to deployment. This role requires a blend of project management expertise and technical understanding of machine learning systems, data pipelines, and model lifecycle management.

✅ Required Experience & Skills

📌 Project Management
- Minimum 3+ years of project management experience, including planning, tracking, and delivering software projects.
- Strong experience in Agile, Scrum, and SDLC/Waterfall methodologies.
- Proven ability to manage multiple projects and stakeholders across business and technical teams.
- Experience in budgeting, vendor negotiation, and resource planning.
- Proficiency in tools like MS Project, Excel, PowerPoint, ServiceNow, SmartSheet, and Lucidchart.

🤖 AI/ML Technical Exposure (Must-Have)
- Exposure to the AI/ML project lifecycle: data collection, model development, training, validation, deployment, and monitoring.
- Understanding of ML frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) and data platforms (e.g., Azure ML, AWS SageMaker, Databricks).
- Familiarity with MLOps practices, model versioning, and CI/CD pipelines for ML.
- Experience working with data scientists, ML engineers, and DevOps teams to deliver AI/ML solutions.
- Ability to translate business problems into AI/ML use cases and manage delivery timelines.

🧩 Leadership & Communication
- Strong leadership, decision-making, and organizational skills.
- Excellent communication and stakeholder management abilities.
- Ability to influence and gain buy-in from executive sponsors and cross-functional teams.
- Experience in building and maintaining relationships with business leaders and technical teams.

🎯 Roles & Responsibilities
- Lead AI/ML and software development projects from initiation through delivery.
- Collaborate with data science and engineering teams to define project scope, milestones, and deliverables.
- Develop and maintain detailed project plans aligned with business goals and technical feasibility.
- Monitor progress, manage risks, and ensure timely delivery of AI/ML models and software components.
- Coordinate cross-functional teams and ensure alignment between business, data, and engineering stakeholders.
- Track project metrics, ROI, and model performance post-deployment.
- Ensure compliance with data governance, security, and ethical AI standards.
- Drive continuous improvement in project execution and delivery frameworks.
- Stay updated on AI/ML trends and contribute to strategic planning for future initiatives.
Posted 1 week ago
5.0 years
0 Lacs
India
Remote
Job Title: AI/ML Engineer
Location: 100% Remote
Job Type: Full-Time

About the Role: We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively.

Key Responsibilities:
- Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks.
- Collaborate with data scientists to transform prototypes into scalable, production-ready models.
- Deploy, monitor, and maintain ML pipelines in production environments.
- Perform data preprocessing, feature engineering, and selection from structured and unstructured data.
- Implement model performance evaluation metrics and improve accuracy through iterative tuning.
- Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage the model lifecycle.
- Maintain clear documentation and collaborate cross-functionally across teams.
- Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions.

Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
- 5+ years of experience in ML model development and deployment.
- Proficient in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, NumPy, etc.
- Strong understanding of machine learning algorithms, statistical modeling, and data analysis.
- Experience with building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow.
- Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models.
- Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
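The model-lifecycle tooling mentioned in this posting (MLflow, Kubeflow) centers on recording parameters and metrics for each training run and comparing runs afterward. As a concept sketch only, not the MLflow API, here is a minimal stdlib-only run tracker; all class and run names are invented for illustration:

```python
import json
import time

class RunTracker:
    """Toy experiment tracker: records params and metrics for named runs."""

    def __init__(self):
        self.runs = []

    def log_run(self, name, params, metrics):
        # Each run is a plain dict, analogous to one tracked run in a real tool.
        run = {
            "name": name,
            "params": params,
            "metrics": metrics,
            "timestamp": time.time(),
        }
        self.runs.append(run)
        return run

    def best_run(self, metric, maximize=True):
        # Compare runs by a single metric, like sorting runs in a tracking UI.
        key = lambda r: r["metrics"][metric]
        return max(self.runs, key=key) if maximize else min(self.runs, key=key)

    def to_json(self):
        # Serializable history, standing in for a tracking server's backing store.
        return json.dumps(self.runs, indent=2)

tracker = RunTracker()
tracker.log_run("baseline", {"max_depth": 3}, {"accuracy": 0.81})
tracker.log_run("tuned", {"max_depth": 6}, {"accuracy": 0.87})
print(tracker.best_run("accuracy")["name"])  # → tuned
```

Real trackers add what this sketch omits: artifact storage for model files, a model registry with stage transitions, and a UI for comparing runs, which is precisely the lifecycle management the role covers.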
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Data Scientist
Location: Gurugram
Experience: 5–10 years (flexible based on expertise)
Employment Type: Full-Time

About the Role: We are looking for a highly skilled and innovative Data Scientist with deep expertise in Machine Learning, AI, and Cloud Technologies to join our dynamic analytics team. The ideal candidate will have hands-on experience in NLP, LLMs, and Computer Vision, along with advanced statistical techniques and the ability to lead cross-functional teams and drive data-driven strategies in a fast-paced environment.

Key Responsibilities:
- Develop and deploy end-to-end machine learning pipelines, including data preprocessing, modeling, evaluation, and production deployment.
- Work on cutting-edge AI/ML applications such as LLM fine-tuning, NLP, Computer Vision, Hybrid Recommendation Systems, and RAG/CAG techniques.
- Leverage platforms like AWS (SageMaker, EC2) and Databricks for scalable model development and deployment.
- Handle data at scale using Spark, Python, and SQL, and integrate with NoSQL and vector databases (Neo4j, Cassandra).
- Design interactive dashboards and visualizations using Tableau for actionable insights.
- Collaborate with cross-functional stakeholders to translate business problems into analytical solutions.
- Guide data curation efforts and ensure high-quality training datasets for supervised and unsupervised learning.
- Lead initiatives around AutoML, XGBoost, Topic Modeling (LDA/LSA), Doc2Vec, and Object Detection & Tracking.
- Drive agile practices, including sprint planning, resource allocation, and change management.
- Communicate results and recommendations effectively to executive leadership and business teams.
- Mentor junior team members and foster a culture of continuous learning and innovation.

Technical Skills Required:
- Programming: Python, SQL, Spark
- Machine Learning & AI: NLP, LLMs, Deep Learning, Computer Vision, Hybrid Recommenders
- Techniques: RAG, CAG, LLM fine-tuning, Statistical Modeling, AutoML, Doc2Vec
- Data Platforms: AWS (SageMaker, EC2), Databricks
- Databases: SQL, NoSQL, Neo4j, Cassandra, Vector DBs
- Visualization Tools: Tableau

Certifications (Preferred):
- IBM Data Science Specialization
- Deep Learning Nanodegree (Udacity)
- SAFe® DevOps Practitioner
- Certified Agile Scrum Master

Professional Competencies:
- Proven experience in team leadership, stakeholder management, and strategic planning.
- Strong cross-functional collaboration and ability to drive alignment across product, engineering, and analytics teams.
- Excellent problem-solving, communication, and decision-making skills.
- Ability to manage conflict resolution, negotiation, and performance optimization within teams.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ciklum is looking for an Expert Data Scientist to join our team full-time in India. We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live. About the role: As an Expert Data Scientist, become a part of a cross-functional development team engineering experiences of tomorrow. Responsibilities: Development of prototype solutions, mathematical models, algorithms, machine learning techniques, and robust analytics to support analytic insights and visualization of complex data sets Work on exploratory data analysis so you can navigate a dataset and come out with broad conclusions based on initial appraisals Provide optimization recommendations that drive KPIs established by product, marketing, operations, PR teams, and others Interacts with engineering teams and ensures that solutions meet customer requirements in terms of functionality, performance, availability, scalability, and reliability Work directly with business analysts and data engineers to understand and support their use cases Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions Drive innovation by exploring new experimentation methods and statistical techniques that could sharpen or speed up our product decision-making processes Cross-train other team members on technologies being developed, while also continuously learning new technologies from other team members Contribute to the Unit activities and community building, participate in conferences, and provide excellence in exercise and best practices Support marketing & sales activities, customer meetings and digital services through direct support for sales opportunities & providing thought 
leadership & content creation for the service Requirements: We know that sometimes, you can’t tick every box. We would still love to hear from you if you think you’re a good fit! General technical requirements: BSc, MSc, or PhD in Mathematics, Statistics, Computer Science, Engineering, Operations Research, Econometrics, or related fields Strong knowledge of Probability Theory, Statistics, and a deep understanding of the Mathematics behind Machine Learning Proficiency with CRISP-ML(Q) or TDSP methodologies for addressing commercial problems through data science solutions Hands-on experience with various machine learning techniques, including but not limited to: Regression Classification Clustering Dimensionality reduction Proficiency in Python for developing machine learning models and conducting statistical analyses Strong understanding of data visualization tools and techniques (e.g., Python libraries such as Matplotlib, Seaborn, Plotly, etc.) and the ability to present data effectively Specific technical requirements: Proficiency in SQL for data processing, data manipulation, sampling, and reporting Experience working with imbalanced datasets and applying appropriate techniques Experience with time series data, including preprocessing, feature engineering, and forecasting Experience with outlier detection and anomaly detection Experience working with various data types: text, image, and video data Familiarity with AI/ML cloud implementations (AWS, Azure, GCP) and cloud-based AI/ML services (e.g., Amazon SageMaker, Azure ML) Domain experience: Experience with analyzing medical signals and images Expertise in building predictive models for patient outcomes, disease progression, readmissions, and population health risks Experience in extracting insights from clinical notes, medical literature, and patient-reported data using NLP and text mining techniques Familiarity with survival or time-to-event analysis Expertise in designing and analyzing data from clinical 
trials or research studies Experience in identifying causal relationships between treatments and outcomes using techniques such as propensity score matching or instrumental variables Understanding of healthcare regulations and standards like HIPAA, GDPR (for healthcare data), and FDA regulations for medical devices and AI in healthcare Expertise in handling sensitive healthcare data in a secure, compliant way, understanding the complexities of patient consent, de-identification, and data sharing Familiarity with decentralized data models such as federated learning to build models without transferring patient data across institutions Knowledge of interoperability standards such as HL7, SNOMED, FHIR, or DICOM Ability to work with clinicians, researchers, health administrators, and policy makers to understand problems and translate data into actionable healthcare insights Good to have skills: Experience with MLOps, including integration of machine learning pipelines into production environments, Docker, and containerization/orchestration (e.g., Kubernetes) Experience in deep learning development using TensorFlow or PyTorch libraries Experience with Large Language Models (LLMs) and Generative AI applications Advanced SQL proficiency, with experience in MS SQL Server or PostgreSQL Familiarity with platforms like Databricks and Snowflake for data engineering and analytics Experience working with Big Data technologies (e.g., Hadoop, Apache Spark) Familiarity with NoSQL databases (e.g., columnar or graph databases like Cassandra, Neo4j) Business-related requirements: Proven experience in developing data science solutions that drive measurable business impact, with a strong track record of end-to-end project execution Ability to effectively translate business problems into data science problems and create solutions from scratch using machine learning and statistical methods Excellent project management and time management skills, with the ability to manage complex, detailed work
and effectively communicate progress and results to stakeholders at all levels Desirable: Research experience with peer-reviewed publications Recognized achievements in data science competitions, such as Kaggle Certifications in cloud-based machine learning services (AWS, Azure, GCP) What's in it for you? Care: your mental and physical health is our priority. We ensure comprehensive company-paid medical insurance, as well as financial and legal consultation Tailored education path: boost your skills and knowledge with our regular internal events (meetups, conferences, workshops), Udemy licence, language courses and company-paid certifications Growth environment: share your experience and level up your expertise with a community of skilled professionals, locally and globally Flexibility: hybrid work mode at Chennai or Pune Opportunities: we value our specialists and always find the best options for them. Our Resourcing Team helps change a project if needed to help you grow, excel professionally and fulfil your potential Global impact: work on large-scale projects that redefine industries with international and fast-growing clients Welcoming environment: feel empowered with a friendly team, open-door policy, informal atmosphere within the company and regular team-building events About us: At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you'll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress. India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level. Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn. Explore, empower, engineer with Ciklum! Interested already? We would love to get to know you!
Submit your application. We can’t wait to see you at Ciklum.
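The requirements above call out experience with imbalanced datasets. As a minimal sketch of one common technique — cost-sensitive training via scikit-learn's class_weight="balanced" — on synthetic data (the dataset and numbers are illustrative, not from any Ciklum project):

```python
# Hedged illustration: handling class imbalance with cost-sensitive
# training. The synthetic dataset (~5% positives) is a stand-in.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data with a rare positive class.
X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" reweights the loss inversely to class frequency,
# so the rare class is not ignored by the fitted model.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print(round(f1_score(y_te, clf.predict(X_te)), 3))
```

Other options mentioned implicitly by the listing (resampling, threshold tuning) follow the same pattern: change how the minority class is weighted rather than the model family.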
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology Your Role And Responsibilities An AI Data Scientist at IBM is not just a job title – it’s a mindset. You’ll leverage the watsonx, AWS SageMaker, and Azure OpenAI platforms to co-create AI value with clients, focusing on technology patterns to enhance repeatability and delight clients. We are seeking an experienced and innovative AI Data Scientist specialized in foundation models and large language models. In this role, you will be responsible for architecting and delivering AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. You will work closely with customers, product managers, and development teams to understand business requirements and design custom AI solutions that address complex challenges. Experience with tools like GitHub Copilot, Amazon CodeWhisperer, etc. is desirable. Success is our passion, and your accomplishments will reflect this, driving your career forward, propelling your team to success, and helping our clients to thrive. Day-to-Day Duties Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help showcase the ability of Gen AI code assistants to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/PoCs Documentation and Knowledge Sharing: Document solution architectures, design decisions, implementation details, and lessons learned.
Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge sharing initiatives and mentor new team members. Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation Preferred Education Master's Degree Required Technical And Professional Expertise Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras or Hugging Face. Understanding of the usage of libraries such as scikit-learn, Pandas, Matplotlib, etc. Familiarity with cloud platforms (e.g., AWS, Azure, GCP), container orchestration (Kubernetes), and related services is a plus. Experience and working knowledge in COBOL & Java would be preferred Having experience in code generation, code matching & code translation leveraging LLM capabilities would be a big plus (e.g., Amazon CodeWhisperer, GitHub Copilot) Soft Skills: Excellent interpersonal and communication skills. Engage with stakeholders for analysis and implementation. Commitment to continuous learning and staying updated with advancements in the field of AI. Growth mindset: Demonstrate a growth mindset to understand clients' business processes and challenges. Experience in Python and PySpark is an added advantage Preferred Technical And Professional Experience Experience: Proven experience in designing and delivering AI solutions, with a focus on foundation models, large language models, exposure to open source, or similar technologies. Experience in natural language processing (NLP) and text analytics is highly desirable. Understanding of machine learning and deep learning algorithms.
Strong track record in scientific publications or open-source communities Experience in full AI project lifecycle, from research and prototyping to deployment in production environments
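The COBOL-to-Java duties above center on prompting an LLM code assistant. A hypothetical sketch of just the prompt-assembly step (the function name and prompt wording are illustrative; no actual watsonx, Copilot, or CodeWhisperer API call is shown):

```python
# Hypothetical illustration: building a code-translation prompt for an
# LLM-based assistant. The real model call is intentionally omitted.
def build_translation_prompt(source_code: str, src_lang: str = "COBOL",
                             dst_lang: str = "Java") -> str:
    """Return a prompt asking an LLM to translate and document code."""
    return (
        f"Translate the following {src_lang} program into idiomatic {dst_lang}.\n"
        f"Preserve behaviour, add Javadoc comments, and flag any ambiguity.\n\n"
        f"```{src_lang.lower()}\n{source_code}\n```"
    )

prompt = build_translation_prompt("ADD 1 TO WS-COUNTER.")
print(prompt.splitlines()[0])
```

In a PoC this string would be sent to the chosen model endpoint, and the returned Java would be compiled and diff-tested against the COBOL program's behaviour.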
Posted 1 week ago
3.0 - 4.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Company Description The Association of Business Women in Commerce & Industry (ABWCI) is a global chamber dedicated to empowering women in business. We foster equity and inclusive prosperity through a supportive ecosystem. Our commitment includes providing access to investment capital, trade networks, and entrepreneurial education. ABWCI advocates for policies that create women-centric entrepreneurial ecosystems, aiming to foster a thriving society. Role Description This is a full-time, on-site role for an AI Full Stack Developer located in New Delhi. The AI Full Stack Developer will be responsible for both front-end and back-end web development tasks. The developer will work on designing, implementing, and maintaining web applications, ensuring seamless integration of AI functionalities. Day-to-day activities include coding, debugging, collaborating with team members, and staying updated with emerging technologies. Qualifications Front-End Development skills, including proficiency in HTML, CSS, and JavaScript Back-End Web Development skills, with experience in server-side languages and database management Proficiency in HTML, CSS, JavaScript/TypeScript, TailwindCSS or Bootstrap Proficiency in Node.js, Express.js, FastAPI, or Django Strong command of SQL and NoSQL databases (PostgreSQL, MongoDB, Redis) Knowledge of ETL pipelines, data preprocessing, and streaming data API design and consumption (REST/GraphQL) Integrate ML models using APIs or containers (FastAPI, Flask, BentoML) Familiarity with Python ML tools: TensorFlow, PyTorch, or Scikit-learn 3-4 years of expertise in Full-Stack Development, encompassing both front-end and back-end development including AI integration.
Strong experience in Software Development, including knowledge of the software development lifecycle and best practices Experience with AI technologies and their integration into web applications is a plus Proficiency with AWS, GCP, or Azure for full-stack & AI workloads Experience using cloud-native AI tools (e.g., AWS SageMaker, Google Vertex AI) API integrations with OpenAI, Anthropic, or Hugging Face Hub Hands-on experience with Docker, Kubernetes, and container orchestration Implementing secure authentication (OAuth2, JWT, SSO) Understanding of rate limiting, load balancing, and threat mitigation, performance tuning, caching (Redis, CDN), and lazy-loading for scalability Strong experience in developing and deploying AI/ML systems using frameworks like TensorFlow and PyTorch, with expertise in NLP (Transformers, spaCy), deep learning, LLMs (e.g., GPT, Claude), RAG pipelines, embedding models, and vector databases such as FAISS and Pinecone Unit, integration, and E2E testing with tools like Jest, PyTest, Selenium Consume and deploy ML/AI models via APIs or microservices Excellent problem-solving abilities and attention to detail Ability to work collaboratively in a team and communicate effectively Bachelor's degree in Computer Science, Engineering, or related field Bonus/Preferred Qualifications Experience with LLMs and RAG (Retrieval-Augmented Generation) systems Contributions to open-source ML projects Certification in AI/ML or Cloud Platforms Experience in AI model serving, monitoring, and MLOps Perks & Benefits Competitive Salary Opportunity to work on cutting-edge AI-driven products You will be allowed to use co-pilots for development; we just want the work done as quickly as possible!
Location In-office, New Delhi All in all, we want someone who can develop websites and digital products from scratch and integrate AI systems that introduce scalable, predictive capabilities using modern machine learning, front-end and back-end development: a Full Stack Engineer with modern AI deployment skills. Join us if you want to make an impact in a new world of opportunities!
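The RAG pipelines, embedding models, and vector databases mentioned above all reduce to one core step: ranking documents by similarity of their embedding vectors. A toy sketch of that retrieval step with NumPy (the vectors are made up; a production system would use FAISS or Pinecone as the listing says):

```python
# Toy illustration of the retrieval step behind a RAG pipeline: rank
# documents by cosine similarity of (hypothetical) embedding vectors.
import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k document vectors most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    return np.argsort(scores)[::-1][:k]  # indices, best first

# Three 2-D stand-in embeddings; real ones would be hundreds of dims.
docs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(top_k(np.array([1.0, 0.05]), docs))  # → [0 1]
```

The retrieved documents would then be stuffed into the LLM prompt, which is the "augmented generation" half of RAG.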
Posted 1 week ago
0 years
5 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Assistant Vice President – Generative AI – Systems Architect Role Overview: We are looking for an experienced Systems Architect with extensive experience in designing and scaling Generative AI systems to production. This role requires an individual with deep expertise in system architecture, software engineering, data platforms, and AI infrastructure, who can bridge the gap between data science, engineering and business. You will be responsible for the end-to-end architecture of GenAI systems, including model lifecycle management, inference, orchestration, and pipelines. Key Responsibilities: Architect and design end-to-end systems for production-grade Generative AI applications (e.g., LLM-based chatbots, copilots, content generation tools).
Define and oversee system architecture covering data ingestion, model training/fine-tuning, inferencing, and deployment pipelines. Establish architectural tenets like modularity, scalability, reliability, observability, and maintainability. Collaborate with data scientists, ML engineers, platform engineers, and product managers to align architecture with business and AI goals. Choose and integrate foundation models (open source or proprietary) using APIs, model hubs, or fine-tuned versions. Evaluate and design solutions based on architecture patterns such as Retrieval-Augmented Generation (RAG), Agentic AI, Multi-modal AI, and Federated Learning. Design secure and compliant architecture for enterprise settings, including data governance, auditability, and access control. Lead system design reviews and define non-functional requirements (NFRs), including latency, availability, throughput, and cost. Work closely with MLOps teams to define the CI/CD processes for model and system updates. Contribute to the creation of reference architectures, design templates, and reusable components. Stay abreast of the latest advancements in GenAI, system design patterns, and AI platform tooling. Qualifications we seek in you! Minimum Qualifications Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Deep understanding of Generative AI architectures, including LLMs, diffusion models, prompt engineering, and model fine-tuning. Strong experience with at least one cloud platform (AWS, GCP, or Azure) and services like SageMaker, Vertex AI, or Azure ML. Experience with Agentic AI systems or orchestrating multiple LLM agents. Experience with multimodal systems (e.g., combining image, text, video, and speech models). Knowledge of semantic search, vector databases, and retrieval techniques in RAG. Familiarity with Zero Trust architecture and advanced enterprise security practices.
Experience in building developer platforms/toolkits for AI consumption. Contributions to open-source AI system frameworks or thought leadership in GenAI architecture. Hands-on experience with tools and frameworks like LangChain, Hugging Face, Ray, Kubeflow, MLflow, or Weaviate/FAISS. Knowledge of data pipelines, ETL/ELT, and data lakes/warehouses (e.g., Snowflake, BigQuery, Delta Lake). Solid grasp of DevOps and MLOps principles, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring. Familiarity with system design tradeoffs in latency vs cost vs scale for GenAI workloads. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. Experience in software/system architecture, including GenAI/AI/ML systems. Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Strong interpersonal and communication skills; ability to collaborate and present to technical and executive stakeholders. Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Professional Data Engineer). Familiarity with data governance and security best practices. Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up.
Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job: Assistant Vice President Primary Location: India-Hyderabad Schedule: Full-time Education Level: Master's / Equivalent Job Posting: Jul 21, 2025, 2:57:18 AM Unposting Date: Ongoing Master Skills List: Digital Job Category: Full Time
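The latency-vs-cost-vs-scale tradeoff named in the qualifications usually starts with a back-of-envelope token cost model. A minimal sketch, where the per-token prices are hypothetical placeholders and not any vendor's real pricing:

```python
# Back-of-envelope monthly cost model for a GenAI inference workload.
# The per-1k-token prices below are hypothetical placeholders.
def monthly_cost(requests_per_day, in_tokens, out_tokens,
                 price_in_per_1k=0.001, price_out_per_1k=0.002):
    """Estimated monthly spend (30 days) for a request mix."""
    per_request = ((in_tokens / 1000) * price_in_per_1k
                   + (out_tokens / 1000) * price_out_per_1k)
    return requests_per_day * 30 * per_request

# 10k requests/day, 1.5k prompt tokens (e.g., a RAG context) + 500 output.
print(monthly_cost(10_000, 1_500, 500))  # → 750.0
```

Even this crude model makes the architectural levers visible: shrinking the retrieved context or caching common answers cuts input tokens, the dominant term in RAG-style workloads.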
Posted 1 week ago
10.0 years
2 - 11 Lacs
Bengaluru
On-site
Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate deviation from the target architecture (if any). Cloud Integration: Ensure compatibility with AWS and Azure cloud services for enhanced performance and scalability Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhere to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design an optimal network plan for a given cloud infrastructure under the Ericsson network security guidelines Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Create detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost).
Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Bangalore Req ID: 770160
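The drift-detection responsibility above is often implemented as a distribution comparison between a training-time reference and live production data. A minimal sketch using the Population Stability Index (PSI); the 0.1 alerting threshold is an illustrative convention, not an Ericsson standard:

```python
# Minimal sketch of feature-drift monitoring via the Population
# Stability Index (PSI). Thresholds and data are illustrative only.
import numpy as np

def psi(reference, current, bins=10):
    """PSI between a reference and a current feature distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid log(0) and division by zero.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)   # training-time snapshot
drifted = rng.normal(0.5, 1.0, 10_000)    # shifted production data
print(psi(baseline, baseline) < 0.1, psi(baseline, drifted) > 0.1)
```

In a retraining pipeline, a PSI above the chosen threshold would trigger an alert or kick off the retraining DAG (e.g., in Airflow).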
Posted 1 week ago
2.0 years
5 - 10 Lacs
Bengaluru
On-site
The Company Metropolis is an artificial intelligence company that uses computer vision technology to enable frictionless, checkout-free experiences in the real world. Today, we are reimagining parking to enable millions of consumers to just "drive in and drive out." We envision a future where people transact in the real world with a speed, ease and convenience that is unparalleled, even online. Tomorrow, we will power checkout-free experiences anywhere you go to make the everyday experiences of living, working and playing remarkable - giving us back our most valuable asset, time. Overview: We are looking for a highly motivated and analytical Data Scientist to join our growing data team. You will play a key role in extracting insights from large datasets, building predictive models, and supporting data-driven decision-making across departments. Location: Bangalore Experience: 3+ years Key Responsibilities: Collect, process, and analyze large datasets from multiple sources. Build and deploy machine learning models to solve business problems. Design and implement A/B tests and statistical analyses. Collaborate with cross-functional teams (product, engineering, marketing) to define analytics requirements. Communicate complex data insights in a clear and actionable manner to stakeholders. Develop dashboards and visualizations to monitor key metrics. Stay current with the latest trends and technologies in data science and AI.
Required Skills & Qualifications: Bachelor's/Master's degree in Computer Science, Mathematics, Statistics, or related field Proven experience (2+ years) as a Data Scientist or Data Analyst Strong knowledge of Python/R and SQL Hands-on experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch) Experience with big data tools (e.g., Spark, Hadoop) is a plus Familiarity with data visualization tools Strong analytical, problem-solving, and communication skills Preferred: Experience with cloud platforms, preferably AWS (S3, SageMaker, Airflow, etc.) Strong SQL skills, with experience in Snowflake, MySQL, and PostgreSQL. Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker). When you join Metropolis, you'll join a team of world-class product leaders and engineers, building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows. Metropolis Technologies is an equal opportunity employer. We make all hiring decisions based on merit, qualifications, and business needs, without regard to race, color, religion, sex (including gender identity, sexual orientation, or pregnancy), national origin, disability, veteran status, or any other protected characteristic under federal, state, or local law.
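The A/B testing responsibility above typically comes down to a two-proportion z-test on conversion rates. A standard-library-only sketch (the 5% vs 6% conversion numbers are hypothetical):

```python
# Two-proportion z-test for an A/B conversion experiment, using only
# the Python standard library. Sample sizes and rates are made up.
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for H0: the two conversion rates are equal."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 5.0% vs 6.0% conversion with 10k users per arm.
p = two_proportion_p_value(500, 10_000, 600, 10_000)
print(round(p, 4))
```

With these numbers the p-value comes out well below 0.05, so the lift would be judged statistically significant at the conventional threshold.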
Posted 1 week ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Job ID: 39582 Position Summary A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer, who is responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption. We are a leading company and trusted source for innovative products, solutions and services that help millions of customers around the globe create, manage and use secure identities. Roles & Responsibilities: Design, develop, and deploy robust & scalable AI/ML models in production environments. Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics. Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval. Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions. Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures. Optimize models for performance, latency and scalability. Build data pipelines and workflows to support model training and evaluation. Conduct research & experimentation on state-of-the-art techniques (DL, NLP, time series, CV) Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning and re-training. Lead code reviews, architecture discussions and mentor junior & peer engineers.
Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency. Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX. Ensure data integrity, quality, and preprocessing best practices for AI/ML model development. Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance. Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems. Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization. Strong analytical and problem-solving mindset. Technical Requirements: Strong expertise in AI/ML engineering and software development. Strong experience with RAG architecture and vector databases. Proficiency in Python and hands-on experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.). Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration. Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.). Solid understanding of the AI/ML life cycle – data preprocessing, feature engineering, model selection, training, validation, and deployment. Experience with production-grade ML systems (model serving, APIs, pipelines). Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.). Strong knowledge of statistical modeling, NLP, CV, recommendation systems, anomaly detection, and time series forecasting. Hands-on software engineering with knowledge of version control, testing, and CI/CD. Hands-on experience in deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow. Experience in MLOps and CI/CD for ML pipelines, including monitoring, retraining, and model drift detection. Proficiency in scaling AI solutions in cloud environments (AWS, Azure, and GCP). 
Experience in data preprocessing, feature engineering, and dimensionality reduction. Exposure to data privacy, compliance, and secure ML practices. Education and/or Experience: Bachelor's or master's degree in Computer Science, Information Technology, or AI/ML/Data Science. 3+ years of hands-on experience in AI/ML development, deployment, and optimization. Experience in leading AI/ML teams and mentoring junior engineers.
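The RAG-pipeline responsibility in this posting can be sketched end to end. Below is a deliberately tiny, stdlib-only illustration in which a bag-of-words "embedding" stands in for a real sentence encoder and a lambda stands in for the LLM call; all document text and names are made up, and a production pipeline would use a proper vector database and encoder:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (a real pipeline would use a sentence encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.docs = []

    def add(self, text):
        self.docs.append((embed(text), text))

    def search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def rag_answer(store, question, llm):
    """Retrieve top-k context, build a grounded prompt, and call the model."""
    context = "\n".join(store.search(question))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return llm(prompt)

store = VectorStore()
store.add("Access badges are issued by the HID identity service.")
store.add("Quarterly reports are stored in the finance portal.")
# A real system would call an LLM here; the stub returns the top retrieved line
# to show the retrieval-then-generate flow.
answer = rag_answer(store, "Where are access badges issued?",
                    llm=lambda p: p.splitlines()[1])
print(answer)
```

The key design point the posting alludes to is that retrieval (vector store plus semantic search) and generation (the LLM call) are separable stages, so each can be swapped and evaluated independently.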
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
vConstruct, a Pune-based construction technology company, seeks a Project Manager for its AI/ML team, a tight-knit group of Data Scientists and AI Engineers, along with Data Analysts and Data Engineers, that supports all data aspects of the DPR business. The team is passionate about transforming the construction industry through cutting-edge technology backed by strong AI/ML solutions. Every data point, from pre-construction through post-construction, is leveraged to produce predictable outcomes and automate processes for better efficiency. The team works across construction domains such as scheduling, planning, supply chain, safety, quality control, site management, financials, and virtual design construction, in every phase of a project. You will gain the opportunity to work on Computer Vision, predictive, Gen AI, and LLM use cases with a varied tech stack: chiefly MS Azure AI Services, OpenAI, CV libraries, and ML for predictive outcomes with statistics as the base. Glimpse of Projects: - Creation of a GPT for the construction industry on construction data (unstructured and structured). - Use of Computer Vision for object detection on 2D drawings and 3D BIM models. - Detailed use of statistical frameworks to predict losses from construction site data, to predict the win/loss of client opportunities, to predict quantities required for similar types of sites, etc. - Automation: AI is deeply integrated with other construction operations groups to automate current work processes and make them more efficient using pre-built models, Azure ML Services, or other ML techniques. Be a part of the vision of “Transforming the Construction Industry by Use of Technology” and play a vital role within a talented team at vConstruct. 
If you're interested in contributing to construction technology within the AI/ML space, gaining exposure to the latest technologies, and working directly with end customers, please apply via the link. Location: Pune: Magarpatta or Baner Key Responsibilities: Project Planning & Execution Define and manage AI/ML project scope, timelines, deliverables, and risk profiles. Build detailed project roadmaps with clear milestones and sprint plans. Oversee the complete AI lifecycle — from use-case discovery to model deployment. Cross-Functional Collaboration Act as the strategic link between data scientists, ML engineers, MLOps, domain SMEs, and initiative leads. Facilitate agile ceremonies including sprint planning, retrospectives, and stakeholder reviews using GitHub or Azure DevOps. Risk & Dependency Management Anticipate and mitigate risks around data readiness, infrastructure, and cross-team dependencies. Manage change control, escalations, and course corrections proactively. Governance & Documentation Maintain complete project documentation including use case briefs, dashboards, decision logs, and ROI reports. Ensure compliance with internal standards, policies, and AI governance frameworks. Qualification: Bachelor’s or Master’s degree in Engineering, Data Science, Computer Science, or a related field. Certifications in PMP, PMI-ACP, or Agile/Scrum methodologies. Relevant project management degrees, if any. Familiarity with cloud-based AI platforms (Azure ML, AWS SageMaker, etc.) 8-10 years of professional experience, with at least 4 years of experience in data science, machine learning, and analytics, and a proven track record of delivering impactful data-driven solutions. Experience working in dynamic, fast-paced environments, with the ability to manage multiple projects simultaneously and to contribute hands-on to production or project work as required. 
Proven experience, strong business acumen, and the ability to translate technical insights into actionable business strategies and communicate them to stakeholders in business language. Experience working with construction-related data or similar industries (e.g., engineering, manufacturing) is a plus. Technical Skills: Understanding of Python & SQL is good to have. Proficiency in ML, AI & statistics frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn). Hands-on experience with data visualization tools (e.g., Tableau, Power BI, Matplotlib). Familiarity with cloud platforms (AWS, Azure preferably); knowledge of Azure ML Services & Snowflake would be a plus. Good to have knowledge of NLP (Natural Language Processing) and Computer Vision. About vConstruct: vConstruct specializes in providing high-quality Building Information Modeling and Construction Technology services geared towards construction projects. vConstruct is a wholly owned subsidiary of DPR Construction. vConstruct has 100+ team members working on Software Development, Data Analytics, Data Engineering and AI/ML. We have a mature Data Science practice that is growing at an accelerated pace. For more information, please visit www.vconstruct.com About DPR Construction: DPR Construction is a national commercial general contractor and construction manager specializing in technically challenging and sustainable projects for the advanced technology, biopharmaceutical, corporate office, higher education, and healthcare markets. With the purpose of building great things, great teams, great buildings, great relationships—DPR is a truly great company. For more information, please visit www.dpr.com
Posted 1 week ago
10.0 years
0 Lacs
Greater Kolkata Area
On-site
Join our Team About this opportunity: We are looking for a Senior Machine Learning Engineer with 10+ years of experience to design, build, and deploy scalable machine learning systems in production. This is not a data science role — we are seeking an engineering-focused individual who can partner with data scientists to productionize models, own ML pipelines end-to-end, and drive reliability, automation, and performance of our ML infrastructure. You’ll work on mission-critical systems where robustness, monitoring, and maintainability are key. You should be experienced with modern MLOps tools, cloud platforms, containerization, and model serving at scale. What you will do: Design and build robust ML pipelines and services for training, validation, and model deployment. Work closely with data scientists, solution architects, DevOps engineers, etc. to align the components and pipelines with project goals and requirements. Communicate any deviation from the target architecture. Cloud Integration: Ensure compatibility with AWS and Azure cloud services for enhanced performance and scalability. Build reusable infrastructure components using best practices in DevOps and MLOps. Security and Compliance: Adhere to security standards and regulatory compliance, particularly in handling confidential and sensitive data. Network Security: Design an optimal network plan for the given cloud infrastructure under the Ericsson network security guidelines. Monitor model performance in production and implement drift detection and retraining pipelines. Optimize models for performance, scalability, and cost (e.g., batching, quantization, hardware acceleration). Documentation and Knowledge Sharing: Create detailed documentation and guidelines for the use and modification of the developed components. The skills you bring: Strong programming skills in Python. Deep experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn, XGBoost). 
Hands-on with MLOps tools like MLflow, Airflow, TFX, Kubeflow, or BentoML. Experience deploying models using Docker and Kubernetes. Strong knowledge of cloud platforms (AWS/GCP/Azure) and ML services (e.g., SageMaker, Vertex AI). Proficiency with data engineering tools (Spark, Kafka, SQL/NoSQL). Solid understanding of CI/CD, version control (Git), and infrastructure as code (Terraform, Helm). Experience with monitoring/logging (Prometheus, Grafana, ELK). Good-to-Have Skills Experience with feature stores (Feast, Tecton) and experiment tracking platforms. Knowledge of edge/embedded ML, model quantization, and optimization. Familiarity with model governance, security, and compliance in ML systems. Exposure to on-device ML or streaming ML use cases. Experience leading cross-functional initiatives or mentoring junior engineers. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world’s toughest problems. You'll be challenged, but you won’t be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Bangalore Req ID: 770160
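The drift-detection and retraining responsibilities above can be sketched with a Population Stability Index (PSI) check on a feature's serving distribution. This is a minimal stdlib-only illustration; the bin count, thresholds, and Gaussian toy data are assumptions for the sketch, not a description of Ericsson's actual pipeline:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline and a live sample.

    Common rule of thumb (an assumption, tune per use case):
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        left, right = lo + i * width, lo + (i + 1) * width
        n = sum(1 for x in sample if left <= x < right or (i == bins - 1 and x == hi))
        return max(n / len(sample), 1e-6)  # floor avoids log(0) on empty bins

    return sum(
        (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]          # feature at training time
serve_ok = [random.gauss(0, 1) for _ in range(5000)]       # serving, same distribution
serve_drift = [random.gauss(1.5, 1) for _ in range(5000)]  # serving, mean has shifted

print(f"stable PSI:  {psi(train, serve_ok):.4f}")
print(f"drifted PSI: {psi(train, serve_drift):.4f}")
```

In a real retraining pipeline this check would run on a schedule (e.g., in Airflow) and a PSI above the chosen threshold would trigger an alert or a retraining job.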
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Company Metropolis is an artificial intelligence company that uses computer vision technology to enable frictionless, checkout-free experiences in the real world. Today, we are reimagining parking to enable millions of consumers to just "drive in and drive out." We envision a future where people transact in the real world with a speed, ease and convenience that is unparalleled, even online. Tomorrow, we will power checkout-free experiences anywhere you go to make the everyday experiences of living, working and playing remarkable - giving us back our most valuable asset, time. Overview: We are looking for a highly motivated and analytical Data Scientist to join our growing data team. You will play a key role in extracting insights from large datasets, building predictive models, and supporting data-driven decision-making across departments. Location: Bangalore Experience: 3+ years Key Responsibilities: Collect, process, and analyze large datasets from multiple sources. Build and deploy machine learning models to solve business problems. Design and implement A/B tests and statistical analyses. Collaborate with cross-functional teams (product, engineering, marketing) to define analytics requirements. Communicate complex data insights in a clear and actionable manner to stakeholders. Develop dashboards and visualizations to monitor key metrics. Stay current with the latest trends and technologies in data science and AI. 
Required Skills & Qualifications: Bachelor’s/Master’s degree in Computer Science, Mathematics, Statistics, or related field Proven experience (2+ years) as a Data Scientist or Data Analyst Strong knowledge of Python/R and SQL Hands-on experience with machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch) Experience with big data tools (e.g., Spark, Hadoop) is a plus Familiarity with data visualization tools Strong analytical, problem-solving, and communication skills Preferred: Experience with cloud platforms, preferably AWS (S3, SageMaker, Airflow, etc.) Strong SQL skills, with experience in Snowflake, MySQL, and PostgreSQL. Familiarity with data visualization tools (e.g., Tableau, Power BI, Looker). When you join Metropolis, you’ll join a team of world-class product leaders and engineers, building an ecosystem of technologies at the intersection of parking, mobility, and real estate. Our goal is to build an inclusive culture where everyone has a voice and the best idea wins. You will play a key role in building and maintaining this culture as our organization grows. Metropolis Technologies is an equal opportunity employer. We make all hiring decisions based on merit, qualifications, and business needs, without regard to race, color, religion, sex (including gender identity, sexual orientation, or pregnancy), national origin, disability, veteran status, or any other protected characteristic under federal, state, or local law.
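The A/B-testing responsibility in this posting can be sketched as a two-proportion z-test on conversion rates. This is a stdlib-only illustration with made-up counts; in practice a library such as statsmodels or scipy would be used:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (pooled SE)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF, via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: variant B converts 12% vs control's 10%,
# 5,000 users in each arm (all numbers made up for illustration).
z, p = two_proportion_z(500, 5000, 600, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative numbers the test rejects the null at the conventional 5% level; the practical point is that sample size per arm must be fixed before peeking at results, which is why test design is listed alongside implementation.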
Posted 1 week ago
12.0 years
0 Lacs
Mysore, Karnataka, India
On-site
Job Title : Solution Architect – Application & AI Engineering Experience : 12+ years (Minimum 8 years of hands-on experience) Location : Mysuru, Karnataka Employment Type : Full-time About the Role We are seeking an experienced and forward-thinking Solution Architect with a strong background in application engineering and AI/ML systems. The ideal candidate should have deep technical expertise and hands-on experience in architecting scalable and secure solutions across web, API, database and cloud ecosystems (AWS or Azure). You will lead end-to-end architecture design efforts—transforming business requirements into robust, scalable, and secure digital products, while ensuring modern AI-driven capabilities are leveraged where applicable. Key Responsibilities Design and deliver scalable application architectures across microservices, APIs and backend databases. Collaborate with cross-functional teams to define solution blueprints combining application engineering and AI/ML requirements. Architect and lead implementation strategies for deploying applications on AWS or Azure using services such as ECS, AKS, Lambda, API Gateway, Azure App Services, Cosmos DB, etc. Guide engineering teams in application modernization, including monolith to microservices transitions, containerization and serverless. Define and enforce best practices around security, performance, and maintainability of solutions. Integrate AI/ML solutions (e.g., inference endpoints, custom LLMs, or ML Ops pipelines) within broader enterprise applications. Evaluate and recommend third-party tools, frameworks, or platforms for optimizing application performance and AI integration. Support pre-sales activities and client engagements with architectural diagrams, PoCs, and strategy sessions. Mentor engineering teams and participate in code/design reviews when necessary. Required Skills & Experience 12+ years of total experience in software/application engineering. 
8+ years of hands-on experience in designing and developing distributed applications. Strong knowledge in backend technologies like Python, Node.js, or .NET; and API-first design (REST/GraphQL). Strong understanding of relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB, etc.). Experience with DevOps practices, CI/CD pipelines, and infrastructure as code (Terraform, CloudFormation, etc.). Proven experience in architecting and deploying cloud-native applications on AWS and/or Azure. Experience with integrating AI/ML models into production systems, including data pipelines, model inference, and MLOps. Deep understanding of security, authentication (OAuth, JWT), and compliance in cloud-based applications. Familiarity with LLMs, NLP, or generative AI is a strong advantage. Preferred Qualifications Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert). Exposure to AI/ML platforms like Azure AI Studio, Amazon Bedrock, SageMaker, or Hugging Face. Understanding of multi-tenant architecture and SaaS platforms. Experience working in Agile/DevOps teams and with tools like Jira, Confluence, GitHub/GitLab, etc. Why Join Us? Work on innovative and enterprise-scale AI-powered applications. Influence product and architecture decisions with a long-term strategic lens. Collaborate with forward-thinking and cross-disciplinary teams. Opportunity to lead from the front and shape the engineering roadmap.
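The authentication requirement above (OAuth, JWT) can be illustrated with a minimal HS256 sign-and-verify round trip. This is a stdlib-only sketch for intuition only; the secret and claims are made up, and a production system should use a vetted library such as PyJWT and manage keys properly:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: bytes) -> str:
    """Minimal HS256 JWT signer: header.payload.signature."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_jwt(token: str, secret: bytes):
    """Return the claims if the signature checks out, else None."""
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    pad = "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(body + pad))

secret = b"demo-secret"  # hypothetical; real keys come from a secret manager
token = sign_jwt({"sub": "user-1", "role": "admin"}, secret)
claims = verify_jwt(token, secret)
print(claims)
```

The design point relevant to the role: because the token is self-verifying, each microservice can authenticate requests without a round trip to a central session store.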
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Role Title: AI Platform Engineer Location: Bangalore (In Person in office when required) Part of the GenAI COE Team Key Responsibilities Platform Development and Evangelism: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs. Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving And GPU Architecture Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model and data parallel training using frameworks like DeepSpeed and service frameworks like vLLM. Model Fine-Tuning And Optimization Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models And Use Cases Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps And LLMOps Proficiency Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and Langgraph. 
Skill Matrix LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI Databases/Data warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery Cloud Knowledge: AWS/Azure/GCP DevOps (Knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert Proficient in Python, SQL, JavaScript Email: diksha.singh@aptita.com
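The "A/B testing of models" and "APIs for model inferencing" responsibilities in this posting can be sketched with deterministic, hash-based traffic splitting, so each user is stickily assigned to one model variant across requests. This is a stdlib-only illustration; the split ratio and model names are hypothetical:

```python
import hashlib

def route_model(user_id: str, treatment_share: float = 0.2) -> str:
    """Deterministically assign a user to a model variant.

    Hashing the user id keeps assignments sticky across requests and
    across stateless API replicas: the same user always hits the same
    model version, with no shared assignment store required.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "model-v2" if bucket < treatment_share * 10_000 else "model-v1"

# Simulate 10,000 users hitting the inference API.
assignments = [route_model(f"user-{i}") for i in range(10_000)]
share_v2 = assignments.count("model-v2") / len(assignments)
print(f"v2 traffic share: {share_v2:.3f}")  # close to the configured 0.2
```

In a real inference API the routing decision would be logged alongside the model's output so that downstream metrics can be attributed to the correct variant.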
Posted 1 week ago
2.0 years
0 Lacs
Surat, Gujarat, India
Remote
About devx At devx, we help some of India’s most forward-looking brands unlock growth through AI-powered and cloud-native solutions — in collaboration with AWS. We’re a fast-growing consultancy focused on solving real-world business problems with cutting-edge technology. Role Overview We are looking for a dynamic and customer-centric AWS Solutions Architect to join our team. In this role, you’ll work directly with clients to design scalable, secure, and cost-effective cloud architectures that solve high-impact business challenges. You’ll bridge the gap between business needs and technical execution, becoming a trusted advisor to our clients. Key Responsibilities Engage with clients to understand their business objectives and translate them into cloud-based architectural solutions Design, implement, and document AWS architectures with a strong focus on scalability, security, and performance Create solution blueprints and work closely with engineering teams to ensure successful implementation Conduct workshops, presentations, and technical deep-dives with client teams Stay updated on the latest AWS offerings and best practices, and incorporate them into solution designs. Collaborate with cross-functional teams including sales, product, and engineering to deliver end-to-end solutions. What we are looking for: 2+ years of experience designing and implementing solutions on AWS Strong understanding of core AWS services such as EC2, S3, Lambda, RDS, API Gateway, IAM, VPC, etc. Strong understanding of core AI/ML and data services such as Bedrock, SageMaker, Glue, Athena, Kinesis, etc. Strong understanding of core DevOps services such as ECS, EKS, CI/CD pipelines, Fargate, Lambda, etc. 
Excellent communication and presentation skills in English — both written and verbal Comfortable in client-facing roles, with the ability to lead technical discussions and build credibility with stakeholders Ability to balance technical depth with business context and articulate value to decision-makers Location: Surat, Gujarat (no WFH). Only apply if you are open to relocating to Surat, Gujarat.
Posted 1 week ago