
1576 SageMaker Jobs - Page 42

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Key Skills
- AWS SageMaker, Azure ML Studio, GCP Vertex AI
- PySpark, Azure Databricks
- MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
- Kubernetes, AKS, Terraform, FastAPI

Responsibilities
- Model deployment, model monitoring, model retraining
- Deployment, inference, monitoring, and retraining pipelines
- Drift detection (data drift, model drift)
- Experiment tracking
- MLOps architecture
- REST API publishing

Job Responsibilities
- Research and implement MLOps tools, frameworks and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required Experience and Qualifications
- Wide experience with Kubernetes.
- Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts.
- Hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms, preferably AWS, would be an advantage.
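The drift-detection responsibilities listed here (data drift, model drift) reduce to comparing a live feature distribution against a training-time baseline. A minimal sketch of one common metric, the Population Stability Index, in pure Python; the thresholds and sample data below are illustrative conventions, not part of the posting:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index, a common data-drift score between a
    baseline sample and a live sample (higher = more drift)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # floor at a tiny fraction so the log term stays finite for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(5000)]  # training-time feature
same = [random.gauss(0, 1) for _ in range(5000)]      # live data, no drift
shifted = [random.gauss(1, 1) for _ in range(5000)]   # live data, mean shifted

assert psi(baseline, same) < 0.1      # widely used "no drift" rule of thumb
assert psi(baseline, shifted) > 0.25  # widely used "significant drift" rule of thumb
```

In a production monitoring pipeline the same comparison would run on each scoring batch, with a PSI above the chosen threshold triggering the retraining pipeline.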

Posted 1 month ago

Apply

15.0 years

0 Lacs

Delhi, India

On-site

About The Role
We are seeking a highly experienced Principal Presales Architect with deep expertise in AWS cloud services to lead strategic engagements with enterprise customers. This role sits at the intersection of technology leadership and customer engagement, requiring a deep understanding of IaaS, PaaS, SaaS, and data platform services, with a focus on delivering business value through cloud adoption and digital transformation. You will be a key contributor to the sales and solutioning lifecycle, working alongside business development, account executives, product, and engineering teams. This role also involves driving cloud-native architectures, conducting deep technical workshops, and influencing executive stakeholders.

Key Responsibilities

Presales & Customer Engagement
- Act as the technical lead in strategic sales opportunities, supporting cloud transformation deals across verticals.
- Design and present end-to-end cloud solutions tailored to client needs, with a focus on AWS architectures (compute, networking, storage, databases, analytics, security, and DevOps).
- Deliver technical presentations, POCs, and solution workshops to executive and technical stakeholders.
- Collaborate with sales teams to develop proposals, RFP responses, solution roadmaps, and TCO/ROI analyses.
- Drive early-stage discovery sessions to identify business objectives, technical requirements, and success metrics.
- Own the solution blueprint and ensure alignment across technical, business, and operational teams.

Architecture & Technology Leadership
- Architect scalable, secure, and cost-effective solutions using AWS services including EC2, Lambda, S3, RDS, Redshift, EKS, and others.
- Lead the design of data platforms and AI/ML pipelines, leveraging AWS services like Redshift, SageMaker, Glue, Athena, and EMR, and integrating with third-party tools when needed.
- Evaluate and recommend multi-cloud integration strategies (Azure/GCP experience is a strong plus).
- Guide customers on cloud migration, modernization, DevOps, and CI/CD pipelines.
- Collaborate with product and delivery teams to align proposed solutions with delivery capabilities and innovations.
- Stay current with industry trends, emerging technologies, and AWS service releases, integrating new capabilities into customer solutions.

Required Skills & Qualifications

Technical Expertise
- 15+ years in enterprise IT or architecture roles, with 10+ years in cloud solutioning/presales, primarily focused on AWS.
- In-depth knowledge of AWS IaaS/PaaS/SaaS, including services across compute, storage, networking, databases, security, AI/ML, and observability.
- Hands-on experience architecting and deploying data lake/data warehouse solutions using Redshift, Glue, Lake Formation, and other data ecosystem components.
- Proficiency in designing AI/ML solutions using SageMaker, Bedrock, TensorFlow, PyTorch, or equivalent frameworks.
- Understanding of multi-cloud and hybrid cloud architectures; hands-on experience with Azure or GCP is an advantage.
- Strong command of solution architecture best practices, cost optimization, cloud security, and compliance frameworks.

Presales & Consulting Skills
- Proven success in technical sales roles involving complex cloud solutions and data platforms.
- Strong ability to influence C-level executives and technical stakeholders.
- Excellent communication, presentation, and storytelling skills to articulate complex technical solutions in business terms.
- Experience with proposal development, RFx responses, and pricing strategy.
- Strong analytical and problem-solving capabilities with a customer-first mindset.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary
We are seeking a highly experienced and customer-focused Presales Architect to join our Solution Engineering team. The ideal candidate will have a strong background in AWS IaaS, PaaS, and SaaS services; deep expertise in cloud architecture; and solid exposure to data platforms, including Amazon Redshift, AI/ML workloads, and modern data architectures. Familiarity with Azure and Google Cloud Platform (GCP) is a strong advantage. This role is a strategic blend of technical solutioning, customer engagement, and sales support, playing a critical role in the pre-sales cycle by understanding customer requirements, designing innovative solutions, and aligning them with the company's service offerings.

Key Responsibilities

Pre-Sales and Solutioning
- Engage with enterprise customers to understand their technical requirements and business objectives.
- Architect end-to-end cloud solutions on AWS, covering compute, storage, networking, DevOps, and security.
- Develop compelling solution proposals, high-level designs, and reference architectures that address customer needs.
- Support RFI/RFP responses, create technical documentation, and deliver presentations and demos to technical and non-technical audiences.
- Collaborate with Sales, Delivery, and Product teams to ensure alignment of proposed solutions with client expectations.
- Conduct technical workshops, proof of concepts (PoCs), and technical validations.

Technical Expertise
- Deep hands-on knowledge and architecture experience with AWS services:
  - IaaS: EC2, VPC, S3, EBS, ELB, Auto Scaling, etc.
  - PaaS: RDS, Lambda, API Gateway, Fargate, DynamoDB, Aurora, Step Functions.
  - SaaS & Security: AWS Organizations, IAM, AWS WAF, CloudTrail, GuardDuty.
- Understanding of multi-cloud strategies; exposure to Azure and GCP cloud services, including hybrid architectures, is a plus.
- Strong knowledge of DevOps practices and tools like Terraform, CloudFormation, Jenkins, GitOps, etc.
- Proficiency in architecting solutions that meet scalability, availability, and security requirements.

Data Platform & AI/ML
- Experience in designing data lakes, data pipelines, and analytics platforms on AWS.
- Hands-on expertise in Amazon Redshift, Athena, Glue, EMR, Kinesis, and S3-based architectures.
- Familiarity with AI/ML solutions using SageMaker, AWS Comprehend, or other ML frameworks.
- Understanding of data governance, data cataloging, and security best practices for analytics workloads.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 10+ years of experience in IT, with 5+ years in cloud architecture and pre-sales roles.
- AWS Certified Solutions Architect – Professional (or equivalent certification) is preferred.
- Strong presentation skills and experience interacting with CXOs, Architects, and DevOps teams.
- Ability to translate technical concepts into business value propositions.
- Excellent communication, proposal writing, and stakeholder management skills.

Nice To Have
- Experience with Azure (e.g., Synapse, AKS, Azure ML) or GCP (e.g., BigQuery, Vertex AI).
- Familiarity with industry-specific solutions (e.g., fintech, healthcare, retail cloud transformations).
- Exposure to MLOps pipelines and orchestration tools like Kubeflow, MLflow, or Airflow.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
We're seeking a hands-on AI/ML Engineer with deep expertise in large language models, retrieval-augmented generation (RAG), and cloud-native ML development on AWS. You'll be a key driver in building scalable, intelligent learning systems powered by cutting-edge AI and robust AWS infrastructure. If you're passionate about combining NLP, deep learning, and real-world application at scale, this is the role for you. 4+ years of specialized experience in AI/ML is required.

Core Skills & Technologies

LLM Ecosystem & APIs
- OpenAI, Anthropic, Cohere
- Hugging Face Transformers
- LangChain, LlamaIndex (RAG orchestration)

Vector Databases & Indexing
- FAISS, Pinecone, Weaviate

AWS-Native & ML Tooling
- Amazon SageMaker (training, deployment, pipelines)
- AWS Lambda (event-driven workflows)
- Amazon Bedrock (foundation model access)
- Amazon S3 (data lakes, model storage)
- AWS Step Functions (workflow orchestration)
- AWS API Gateway & IAM (secure ML endpoints)
- CloudWatch, Athena, DynamoDB (monitoring, analytics, structured storage)

Languages & ML Frameworks
- Python (primary), PyTorch, TensorFlow
- NLP, RAG systems, embeddings, prompt engineering

What You'll Do
- Model Development & Tuning
  - Design architecture for complex AI systems and make strategic technical decisions
  - Evaluate and select appropriate frameworks, techniques, and approaches
  - Fine-tune and deploy LLMs and custom models using AWS SageMaker
  - Build RAG pipelines with LlamaIndex/LangChain and vector search engines
- Scalable AI Infrastructure
  - Architect distributed model training and inference pipelines on AWS
  - Design secure, efficient ML APIs with Lambda, API Gateway, and IAM
- Product Integration
  - Lead development of novel solutions to challenging problems
  - Embed intelligent systems (tutoring agents, recommendation engines) into learning platforms using Bedrock, SageMaker, and AWS-hosted endpoints
- Rapid Experimentation
  - Prototype multimodal and few-shot learning workflows using AWS services
  - Automate experimentation and A/B testing with Step Functions and SageMaker Pipelines
- Data & Impact Analysis
  - Leverage S3, Athena, and CloudWatch to define metrics and continuously optimize AI performance
- Cross-Team Collaboration
  - Work closely with educators, designers, and engineers to deliver AI features that enhance student learning
  - Mentor junior engineers and provide technical leadership

Who You Are
- Deeply Technical: Strong foundation in machine learning, deep learning, and NLP/LLMs
- AWS-Fluent: Extensive experience with AWS ML services (especially SageMaker, Lambda, and Bedrock)
- Product-Minded: You care about user experience and turning ML into real-world value
- Startup-Savvy: Comfortable with ambiguity, fast iterations, and wearing many hats
- Mission-Aligned: Passionate about education, human learning, and AI for good

Bonus Points
- Hands-on experience fine-tuning LLMs or building agentic systems using AWS
- Open-source contributions in AI/ML or NLP communities
- Familiarity with AWS security best practices (IAM, VPC, private endpoints)
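The RAG pipelines this posting describes (LangChain/LlamaIndex plus a vector store) follow one basic shape: embed documents, retrieve the nearest ones for a query, and assemble an augmented prompt. A toy sketch of that shape; the bag-of-words "embedding" and in-memory index are stand-ins for a real embedding model and FAISS/Pinecone, and no actual LLM is called:

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: bag-of-words term counts (stand-in for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class InMemoryIndex:
    """Stand-in for a vector database such as FAISS or Pinecone."""
    def __init__(self):
        self.items = []

    def add(self, doc):
        self.items.append((embed(doc), doc))

    def search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: cosine(item[0], q), reverse=True)
        return [doc for _, doc in ranked[:k]]

index = InMemoryIndex()
for doc in ["SageMaker trains and deploys models",
            "Lambda runs event-driven functions",
            "S3 stores objects for data lakes"]:
    index.add(doc)

# Retrieve, then assemble an augmented prompt; the generation step (an LLM
# call, e.g. via Bedrock) would consume `prompt` but is omitted here.
question = "deploy models with sagemaker"
context = index.search(question, k=1)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Production frameworks swap in dense embeddings and approximate nearest-neighbor search, but the retrieve-then-augment control flow is the same.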

Posted 1 month ago

Apply

5.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Position: L3 AWS Cloud Engineer
Experience: 5+ Years
Location: Mumbai
Employment Type: Full-Time

Job Summary
We are seeking a highly skilled L3 AWS Cloud Engineer with 5+ years of experience to lead the design, implementation, and optimization of complex AWS cloud architectures. The candidate will have deep expertise in hybrid (on-prem to cloud) networking, AWS connectivity, and advanced AWS services such as WAF, Shield, Shield Advanced, EKS, Data Services, and CloudFront CDN, ensuring enterprise-grade solutions.

Key Responsibilities
- Architect and implement hybrid cloud solutions integrating on-premises and AWS environments.
- Design and manage advanced AWS networking (Direct Connect, Transit Gateway, VPN).
- Lead deployment and management of Kubernetes clusters using AWS EKS.
- Implement and optimize security solutions using AWS WAF and Shield, including Shield Advanced.
- Architect data solutions using AWS Data Services (Redshift, Glue, Athena).
- Optimize content delivery using AWS CloudFront and advanced CDN configurations.
- Drive automation of cloud infrastructure using IaC (CloudFormation, Terraform, CDK).
- Provide leadership in incident response, root cause analysis, and performance optimization.
- Mentor junior engineers and collaborate with cross-functional teams on cloud strategies.

Required Skills and Qualifications
- 5+ years of experience in cloud engineering, with at least 4 years focused on AWS.
- Deep expertise in hybrid networking and connectivity (Direct Connect, Transit Gateway, Site-to-Site VPN).
- Advanced knowledge of AWS EKS for container orchestration and management.
- Proficiency in AWS security services (WAF, Shield, Shield Advanced, GuardDuty).
- Hands-on experience with AWS Data Services (Redshift, Glue, Athena).
- Expertise in optimizing AWS CloudFront for global content delivery.
- Strong scripting skills (Python, Bash) and IaC expertise (CloudFormation, Terraform, CDK).
- Experience with advanced monitoring and analytics (CloudWatch, ELK).
- Experience with multi-region and multi-account AWS architectures.
- AWS Certified Solutions Architect – Professional.

Preferred Skills
- Knowledge of serverless frameworks and event-driven architectures.
- Familiarity with machine learning workflows on AWS (SageMaker, ML services).
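The IaC automation responsibility above can be illustrated with a small Python sketch that renders a CloudFormation template programmatically. The bucket name is hypothetical, and the deployment step itself (AWS CLI or boto3) is out of scope here:

```python
import json

def s3_bucket_template(bucket_name, versioned=True):
    """Render a minimal CloudFormation template (as a dict) for one S3 bucket."""
    props = {"BucketName": bucket_name}
    if versioned:
        props["VersioningConfiguration"] = {"Status": "Enabled"}
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {"Type": "AWS::S3::Bucket", "Properties": props},
        },
    }

template = s3_bucket_template("example-data-lake-bucket")  # hypothetical name
template_json = json.dumps(template, indent=2)
```

A rendered template like this would typically be checked into version control and deployed via `aws cloudformation deploy` or a CI pipeline, which is what keeps the infrastructure reproducible.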

Posted 1 month ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Summary
We are seeking a highly experienced and customer-focused Presales Architect to join our Solution Engineering team. The ideal candidate will have a strong background in AWS IaaS, PaaS, and SaaS services; deep expertise in cloud architecture; and solid exposure to data platforms, including Amazon Redshift, AI/ML workloads, and modern data architectures. Familiarity with Azure and Google Cloud Platform (GCP) is a strong advantage. This role is a strategic blend of technical solutioning, customer engagement, and sales support, playing a critical role in the pre-sales cycle by understanding customer requirements, designing innovative solutions, and aligning them with the company's service offerings.

Key Responsibilities

Pre-Sales and Solutioning
- Engage with enterprise customers to understand their technical requirements and business objectives.
- Architect end-to-end cloud solutions on AWS, covering compute, storage, networking, DevOps, and security.
- Develop compelling solution proposals, high-level designs, and reference architectures that address customer needs.
- Support RFI/RFP responses, create technical documentation, and deliver presentations and demos to technical and non-technical audiences.
- Collaborate with Sales, Delivery, and Product teams to ensure alignment of proposed solutions with client expectations.
- Conduct technical workshops, proof of concepts (PoCs), and technical validations.

Technical Expertise
- Deep hands-on knowledge and architecture experience with AWS services:
  - IaaS: EC2, VPC, S3, EBS, ELB, Auto Scaling, etc.
  - PaaS: RDS, Lambda, API Gateway, Fargate, DynamoDB, Aurora, Step Functions.
  - SaaS & Security: AWS Organizations, IAM, AWS WAF, CloudTrail, GuardDuty.
- Understanding of multi-cloud strategies; exposure to Azure and GCP cloud services, including hybrid architectures, is a plus.
- Strong knowledge of DevOps practices and tools like Terraform, CloudFormation, Jenkins, GitOps, etc.
- Proficiency in architecting solutions that meet scalability, availability, and security requirements.

Data Platform & AI/ML
- Experience in designing data lakes, data pipelines, and analytics platforms on AWS.
- Hands-on expertise in Amazon Redshift, Athena, Glue, EMR, Kinesis, and S3-based architectures.
- Familiarity with AI/ML solutions using SageMaker, AWS Comprehend, or other ML frameworks.
- Understanding of data governance, data cataloging, and security best practices for analytics workloads.

Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 10+ years of experience in IT, with 5+ years in cloud architecture and pre-sales roles.
- AWS Certified Solutions Architect – Professional (or equivalent certification) is preferred.
- Strong presentation skills and experience interacting with CXOs, Architects, and DevOps teams.
- Ability to translate technical concepts into business value propositions.
- Excellent communication, proposal writing, and stakeholder management skills.

Nice To Have
- Experience with Azure (e.g., Synapse, AKS, Azure ML) or GCP (e.g., BigQuery, Vertex AI).
- Familiarity with industry-specific solutions (e.g., fintech, healthcare, retail cloud transformations).
- Exposure to MLOps pipelines and orchestration tools like Kubeflow, MLflow, or Airflow.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

About Beco
Beco (letsbeco.com) is a fast-growing Mumbai-based consumer-goods company on a mission to replace everyday single-use plastics with planet-friendly, bamboo- and plant-based alternatives. From reusable kitchen towels to biodegradable garbage bags, we make sustainable living convenient, affordable and mainstream. Our founding story began with a Mumbai beach clean-up that opened our eyes to the decades-long life of a single plastic wrapper, sparking our commitment to "Be Eco" every day. Our mission: "To craft, support and drive positive change with sustainable & eco-friendly alternatives—one Beco product at a time." Backed by marquee climate-focused VCs and now 50+ employees, we are scaling rapidly across India's top marketplaces, retail chains and D2C channels.

Why we're hiring
Sustainability at scale demands operational excellence. As volumes explode, we need data-driven, self-learning systems that eliminate manual grunt work, unlock efficiency and delight customers. You will be the first dedicated AI/ML Engineer at Beco, owning the end-to-end automation roadmap across Finance, Marketing, Operations, Supply Chain and Sales.

Responsibilities
- Partner with functional leaders to translate business pain points into AI/ML solutions and automation opportunities.
- Own the complete lifecycle: data discovery, cleaning, feature engineering, model selection, training, evaluation, deployment and monitoring.
- Build robust data pipelines (SQL/BigQuery, Spark) and APIs to integrate models with ERP, CRM and marketing automation stacks.
- Stand up CI/CD and MLOps (Docker, Kubernetes, Airflow, MLflow, Vertex AI/SageMaker) for repeatable training and one-click releases.
- Establish data-quality, drift-detection and responsible-AI practices (bias, transparency, privacy).
- Mentor analysts and engineers; evangelise a culture of experimentation and "fail-fast" learning, core to Beco's GSD ("Get Sh#!t Done") values.

Must-have Qualifications
- 3+ years hands-on experience delivering ML, data-science or intelligent-automation projects in production.
- Proficiency in Python (pandas, scikit-learn, PyTorch/TensorFlow) and SQL; solid grasp of statistics, experimentation and feature engineering.
- Experience building and scaling ETL/data pipelines on cloud (GCP, AWS or Azure).
- Familiarity with modern Gen-AI & NLP stacks (OpenAI, Hugging Face, RAG, vector databases).
- Track record of collaborating with cross-functional stakeholders and shipping iteratively in an agile environment.

Nice-to-haves
- Exposure to e-commerce or FMCG supply-chain data.
- Knowledge of finance workflows (reconciliation, AR/AP, FP&A) or RevOps tooling (HubSpot, Salesforce).
- Experience with vision models (Detectron2, YOLO) and edge deployment.
- Contributions to open-source ML projects or published papers/blogs.

What Success Looks Like After 1 Year
- 70% reduction in manual reporting hours across finance and ops.
- Forecast accuracy > 85% at SKU level, slashing stock-outs by 30%.
- AI chatbot resolves 60% of tickets end-to-end, with CSAT > 4.7/5.
- At least two new data products launched that directly boost topline or margin.

Life at Beco
Purpose-driven team obsessed with measurable climate impact. An "entrepreneurial, accountable, bold" culture, where winning minds precede outside victories.
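The "forecast accuracy > 85% at SKU level" target needs a concrete definition to be measurable; one common choice is WAPE-based accuracy (1 minus weighted absolute percentage error). A minimal sketch, with made-up SKU names and sales figures:

```python
def forecast_accuracy(actuals, forecasts):
    """WAPE-style accuracy: 1 - sum(|actual - forecast|) / sum(actual)."""
    abs_err = sum(abs(a - f) for a, f in zip(actuals, forecasts))
    total = sum(actuals)
    return 1 - abs_err / total if total else 0.0

# hypothetical weekly unit sales per SKU vs. model forecasts
sku_data = {
    "BAMBOO-TOWEL": ([120, 130, 110], [115, 128, 118]),
    "GARBAGE-BAG-M": ([400, 380, 420], [390, 400, 410]),
}
accuracy = {sku: forecast_accuracy(a, f) for sku, (a, f) in sku_data.items()}
```

WAPE is usually preferred over plain MAPE for SKU-level demand because low-volume weeks with near-zero actuals don't blow up the denominator.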

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing...
Join Verizon as we continue to grow our industry-leading network to improve the ways people, businesses, and things connect. We are looking for an experienced, talented and motivated AI & ML Engineer to lead AI industrialization for Verizon. You will also serve as a subject matter expert on the latest industry knowledge to improve the Home product and solutions and/or processes related to machine learning, deep learning, responsible AI, generative AI, natural language processing, computer vision and other AI practices.
- Deploying machine learning models in on-prem, cloud and Kubernetes environments.
- Driving data-derived insights across the business domain by developing advanced statistical models, machine learning algorithms and computational algorithms based on business initiatives.
- Creating and implementing data and ML pipelines for model inference, both in real time and in batches.
- Architecting, designing, and implementing large-scale AI/ML systems in a production environment.
- Monitoring the performance of data pipelines and making improvements as necessary.

What We’re Looking For...
You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and can interact with various partners and multi-functional teams to implement data science-driven business solutions.

You’ll Need To Have
- Bachelor’s degree with four or more years of relevant work experience.
- Expertise in advanced analytics/predictive modelling in a consulting role.
- Experience with all phases of end-to-end analytics projects.
- Hands-on programming expertise in Python (with libraries like NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch) and R (for specific data analysis tasks).
- Knowledge of machine learning algorithms: linear regression, logistic regression, decision trees, random forests, support vector machines (SVMs), neural networks (deep learning), Bayesian networks.
- Data engineering: data cleaning and preprocessing, feature engineering, data transformation, data visualization.
- Cloud platforms: AWS SageMaker, Azure Machine Learning, Cloud AI Platform.

Even better if you have one or more of the following:
- An advanced degree in Computer Science, Data Science, Machine Learning, or a related field.
- Knowledge of the Home domain, with key areas like smart home, digital security and wellbeing.
- Experience with stream-processing systems: Spark Streaming, Storm, etc.

#TPDNONCDIOREF

Where you’ll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours
40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
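The "real-time and batch" inference pipelines this posting describes usually share a single scoring function wrapped two ways: once per request behind an endpoint, once over a whole table in a scheduled job. A minimal sketch with a stand-in model; the features, weights, and threshold are invented for illustration:

```python
def score(record):
    """Stand-in model: a hand-set linear score over two made-up features."""
    return 0.7 * record["usage_gb"] / 100 + 0.3 * record["tenure_years"] / 10

def predict_realtime(record, threshold=0.5):
    """Single-record path, e.g. what a REST endpoint would invoke per request."""
    s = score(record)
    return {"score": s, "flagged": s >= threshold}

def predict_batch(records, threshold=0.5):
    """Batch path, e.g. a nightly job scoring a whole table."""
    return [predict_realtime(r, threshold) for r in records]

results = predict_batch([
    {"usage_gb": 90, "tenure_years": 1},   # heavy usage, new customer
    {"usage_gb": 10, "tenure_years": 9},   # light usage, long tenure
])
```

Keeping one scoring function behind both paths is the usual way to guarantee that online and offline predictions cannot drift apart.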

Posted 1 month ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Hyderābād

On-site

Senior Data Scientist – Enterprise Analytics
Want to be part of the Data & Analytics organization, whose strategic goal is to create a world-class Data & Analytics company by building, embedding, and maturing a data-driven culture across Thomson Reuters? We are looking for a highly motivated individual with strong organizational and technical skills for the position of Senior Data Scientist. You will play a critical role at the cutting edge of analytics, leveraging predictive models, machine learning and generative AI to drive business insights, facilitate informed decision-making, and help Thomson Reuters rapidly scale data-driven initiatives.

About the Role
In this opportunity as Senior Data Scientist, you will:
- Engage with stakeholders, business analysts and the project team to understand data requirements.
- Work in multiple business domain areas, including Customer Service, Finance, Sales and Marketing.
- Design analytical frameworks to provide insights into a business problem.
- Explore and visualize multiple data sets to understand the data available and prepare data for problem solving.
- Build machine learning models and/or statistical solutions.
- Build predictive models and generative AI solutions.
- Use Natural Language Processing to extract insight from text.
- Design database models (if a data mart or operational data store is required to aggregate data for modeling).
- Design visualizations and build dashboards in Tableau and/or Power BI.
- Extract business insights from the data and models.
- Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards.

About You
You're a fit for the role of Senior Data Scientist if your background includes:
- 6-8 years of experience in the field of Machine Learning & AI.
- A minimum of 3 years of experience working in the data science domain.
- A degree, preferably in a quantitative field (Computer Science, Statistics, etc.).
- Both technical and business acumen.

Technical skills
- Proficient in machine learning, statistical modelling, data science and generative AI techniques.
- Highly proficient in Python and SQL.
- Experience with Tableau and/or Power BI.
- Has worked with Amazon Web Services and SageMaker.
- Ability to build data pipelines for data movement using tools such as Alteryx and Glue.
- Experience with predictive analytics for customer retention, upsell/cross-sell and new customer acquisition; customer segmentation; recommendation engines (custom and AWS Personalize); and POCs building generative AI solutions (GPT, Llama, etc.).
- Hands-on with prompt engineering.
- Experience in Customer Service, Finance, Sales and Marketing.

Additional technical skills
- Familiarity with Natural Language Processing, including feature extraction techniques, word embeddings, topic modeling, sentiment analysis, classification, sequence models and transfer learning.
- Knowledge of AWS APIs for machine learning.
- Has worked with Snowflake extensively.
- Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations.
- Ability to communicate complex results in a simple and concise manner at all levels within the organization.
- Consulting experience with a premier consulting firm.

#LI-SS5

What’s in it For You?
- Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news.

We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law.

More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 1 month ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

The Data Scientist organization within the Data and Analytics division is responsible for designing and implementing a unified data strategy that enables the efficient, secure, and governed use of data across the organization. We aim to create a trusted and customer-centric data ecosystem, built on a foundation of data quality, security, and openness, and guided by the Thomson Reuters Trust Principles. Our team is dedicated to developing innovative data solutions that drive business value while upholding the highest standards of data management and ethics. About the role: Work with minimal supervision to solve business problems using data and analytics. Work in multiple business domain areas including Customer Experience and Service, Operations, Finance, Sales and Marketing. Work with various business stakeholders to understand and document requirements. Design an analytical framework to provide insights into a business problem. Explore and visualize multiple data sets to understand data available for problem solving. Build end-to-end data pipelines to handle and process data at scale. Build machine learning models and/or statistical solutions. Build predictive models. Use Natural Language Processing to extract insight from text. Design database models (if a data mart or operational data store is required to aggregate data for modeling). Design visualizations and build dashboards in Tableau and/or PowerBI. Extract business insights from the data and models. Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards. Work collaboratively with other team members. About you: Overall 3+ years' experience in technology roles. Must have a minimum of 1 year of experience working in the data science domain. Has used frameworks/libraries such as Scikit-learn, PyTorch, Keras, NLTK. Highly proficient in Python. Highly proficient in SQL. Experience with Tableau and/or PowerBI. Has worked with Amazon Web Services and SageMaker. 
Ability to build data pipelines for data movement using tools such as Alteryx, AWS Glue, Informatica. Proficient in machine learning, statistical modelling, and data science techniques. Experience with one or more of the following types of business analytics applications: Predictive analytics for customer retention, cross-sales and new customer acquisition. Pricing optimization models. Segmentation. Recommendation engines. Experience in one or more of the following business domains: Customer Experience and Service. Finance. Operations. Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations. Excellent organizational, analytical and problem-solving skills. Ability to communicate complex results in a simple and concise manner at all levels within the organization. Ability to excel in a fast-paced, startup-like environment. #LI-SS5 What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. 
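The Data Scientist posting above asks for hands-on experience building predictive models (e.g., for customer retention). As a minimal, dependency-free sketch of the core idea — the feature names and toy data below are invented for illustration, not from the posting — here is logistic regression trained with batch gradient descent:

```python
import math

def sigmoid(z):
    z = max(-60.0, min(60.0, z))   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=1000):
    """Batch gradient descent on the log-loss; returns (weights, bias)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    m = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of log-loss w.r.t. the logit
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / m for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / m
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy churn-style data: [tenure_years, support_tickets]; label 1 = churned.
X = [[0.5, 5], [1.0, 4], [0.8, 6], [4.0, 1], [5.0, 0], [3.5, 1]]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(X, y)
```

In practice a role like this would reach for scikit-learn's `LogisticRegression` rather than hand-rolled gradient descent; the sketch just makes the optimization step explicit.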

Posted 1 month ago

Apply

1.0 - 2.0 years

2 - 8 Lacs

Hyderābād

On-site

Job Title: AI/ML Associate Engineer Job Type: Full-Time. Immediate Joiners Only! Location: Hyderabad Desired Experience: 1-2 years of experience in AI/ML, or trained freshers (with no prior work experience) who can demonstrate hands-on project exposure in AI/ML. Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. Certifications in AI/ML or Data Science are a strong plus. Job Overview We are seeking a highly motivated and technically skilled Junior AI and ML Engineer/Developer to join our team in a SaaS product-based company. The role is ideal for freshers trained in AI/ML with hands-on project experience or professionals with 1-2 years of experience. The candidate will contribute to developing, implementing, and optimizing AI/ML solutions to enhance the intelligence and functionality of SaaS products. Key Responsibilities Develop and implement machine learning models, including supervised, unsupervised, and deep learning techniques. Build scalable AI/ML pipelines for tasks such as natural language processing (NLP), computer vision, recommendation systems, and predictive analytics. Work with programming languages like Python or R, leveraging AI/ML libraries such as TensorFlow, PyTorch, Keras, and Scikit-learn. Pre-process and analyse datasets using techniques like feature engineering, scaling, and data augmentation. Deploy AI/ML models on cloud platforms (e.g., AWS SageMaker, Google AI Platform, Azure ML) and ensure optimal performance. Manage datasets using SQL and NoSQL databases, applying efficient data handling and querying techniques. Utilize version control tools like Git to maintain code integrity and support collaboration. Collaborate with product and development teams to align AI/ML solutions with business objectives. Document technical workflows, algorithms, and experiment results for reproducibility. 
Stay up to date with the latest advancements in AI/ML to propose and implement innovative solutions. Key Skills Proficiency in Python or R, with experience in AI/ML frameworks like TensorFlow, PyTorch, Scikit-learn, and Keras. Strong understanding of machine learning algorithms, NLP, and computer vision. Experience with data pre-processing techniques such as feature scaling, normalization, and handling missing data. Familiarity with cloud platforms for AI/ML deployment (AWS, Google Cloud, Azure). Database management skills in SQL and familiarity with NoSQL databases like MongoDB. Knowledge of version control systems like Git. Exposure to tools like Docker or Kubernetes for deploying AI/ML models in production. Strong foundation in mathematics and statistics, including linear algebra, probability, and optimization techniques. Excellent analytical and problem-solving skills, with a detail-oriented mindset. Qualifications Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field. 1-2 years of experience in AI/ML, or trained freshers who can demonstrate hands-on project exposure in AI/ML. Certifications in AI/ML or Data Science are a strong plus. Knowledge or experience in a SaaS product-based environment is an advantage. What We Offer: Opportunity to work on cutting-edge AI/ML projects in a fast-paced SaaS environment. Collaborative and innovative workplace with mentorship and career growth opportunities. Competitive salary and benefits package.
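The posting above lists data pre-processing techniques — feature scaling, normalization, and handling missing data. A minimal stdlib-only sketch of all three (the values are invented toy data; in practice scikit-learn's `SimpleImputer`, `StandardScaler`, and `MinMaxScaler` do this):

```python
from statistics import mean, stdev

def impute_missing(values):
    """Replace None with the mean of the observed values (mean imputation)."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

def standardize(values):
    """Z-score scaling: subtract the mean, divide by the standard deviation."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

def min_max(values):
    """Normalize to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

ages = impute_missing([25, None, 35, 45])   # mean of observed values fills the gap
scaled = min_max(ages)
```

The key operational detail either way: the scaling parameters (mean, std, min, max) must be computed on training data only and reused at inference time.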

Posted 1 month ago

Apply

5.0 years

0 Lacs

Indore, Madhya Pradesh, India

Remote

AI/ML Expert – PHP Integration (Remote / India Preferred) Experience: 2–5 years in AI/ML with PHP integration About Us: We’re the team behind Wiser – AI-Powered Product Recommendations for Shopify, helping 5,000+ merchants increase AOV and conversions through personalized upsell and cross-sell experiences. We’re now scaling our recommendation engine further and are looking for an AI/ML expert who can help us take Wiser to the next level with smarter, faster, and more contextual product recommendations. Role Overview: As an AI/ML Engineer, you will: Develop and optimize product recommendation algorithms based on customer behavior, sales data, and store context. Train models using behavioral and transactional data across multiple Shopify stores. Build and test ML pipelines that can scale across thousands of stores. Integrate AI outputs into our PHP-based system (Laravel/Symfony preferred). Work closely with product and backend teams to improve real-time recommendations, ranking logic, and personalization scores. 
Responsibilities: Analyze large datasets from Shopify stores (products, orders, sessions) Build models for: Product similarity User-based & item-based collaborative filtering Popularity-based + contextual hybrid models Improve existing recommendation logic (e.g., Frequently Bought Together, Complete the Look) Implement real-time or near real-time prediction logic Ensure AI output integrates smoothly into PHP backend APIs Document logic and performance of models for internal review Requirements: 2–5 years of experience in machine learning, AI, or data science Strong Python skills (scikit-learn, TensorFlow, PyTorch, Pandas, NumPy) Experience building recommendation systems or working with eCommerce data Experience integrating AI models with PHP/Laravel applications Familiarity with Shopify ecosystem and personalization is a bonus Ability to explain ML logic to non-technical teams Bonus: Experience with AWS, S3, SageMaker, or model hosting APIs What You’ll Get: Opportunity to shape AI in one of the fastest-growing Shopify apps Work on a product used by 4,500+ stores globally Direct collaboration with founders & product team Competitive pay + growth opportunities
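The posting above asks for item-based collaborative filtering and product-similarity models. A minimal stdlib sketch of the idea — the products and purchase matrix are invented toy data — comparing items by cosine similarity of their purchase vectors:

```python
import math

# Toy user x product purchase matrix (1 = bought); products are invented examples.
purchases = {
    "alice": {"shoes": 1, "socks": 1, "hat": 0},
    "bob":   {"shoes": 1, "socks": 1, "hat": 1},
    "carol": {"shoes": 0, "socks": 0, "hat": 1},
}
products = ["shoes", "socks", "hat"]

def column(product):
    """The product's purchase vector: one entry per user."""
    return [purchases[u][product] for u in purchases]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(product):
    """Item-based CF: the other product whose purchase vector is closest."""
    scores = {p: cosine(column(product), column(p)) for p in products if p != product}
    return max(scores, key=scores.get)
```

At the scale mentioned in the posting (thousands of stores), the same similarity computation would run over sparse matrices with NumPy/SciPy or an approximate-nearest-neighbor index rather than Python loops.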

Posted 1 month ago

Apply

15.0 - 25.0 years

8 - 10 Lacs

Thiruvananthapuram

On-site

15 - 25 Years 1 Opening Trivandrum Role description Job Title: AI Architect Location: Kochi/Trivandrum Experience: 8-15 Years About the Role: We are seeking a highly experienced and visionary AI Architect to lead the design and implementation of cutting-edge AI solutions that will drive our company's AI transformation. This role is critical in bridging the gap between business needs, technical feasibility, and the successful deployment of AI initiatives across our product and delivery organizations. Key Responsibilities: AI Strategy & Roadmap: Define and drive the AI architectural strategy and roadmap, ensuring alignment with overall business objectives and the company's AI transformation goals. Solution Design & Architecture: Lead the end-to-end architectural design of complex AI/ML solutions, including data pipelines, model training, deployment, and monitoring. Technology Evaluation & Selection: Evaluate and recommend appropriate AI technologies, platforms, and tools (e.g., machine learning frameworks, cloud AI services, MLOps platforms) to support scalable and robust AI solutions. Collaboration & Leadership: Partner closely with product teams, delivery organizations, and data scientists to translate business requirements into technical specifications and architectural designs. Provide technical leadership and guidance to development teams. Best Practices & Governance: Establish and enforce AI development best practices, coding standards, and governance policies to ensure high-quality, secure, and compliant AI solutions. Scalability & Performance: Design AI solutions with scalability, performance, and reliability in mind, anticipating future growth and evolving business needs. Innovation & Research: Stay abreast of the latest advancements in AI, machine learning, and related technologies, identifying opportunities for innovation and competitive advantage. 
Mentorship & Upskilling: Mentor and upskill internal teams on AI architectural patterns, emerging technologies, and best practices. Key Requirements: 8-15 years of experience in architecting and implementing complex AI/ML solutions, with a strong focus on enterprise-grade systems. Deep understanding of machine learning algorithms, deep learning architectures, and natural language processing (NLP) techniques. Proven experience with major cloud AI platforms (e.g., AWS SageMaker, Azure ML, Google AI Platform) and MLOps principles. Strong proficiency in programming languages commonly used in AI development (e.g., Python, Java). Experience with big data technologies (e.g., Spark, Hadoop) and data warehousing solutions. Demonstrated ability to lead cross-functional technical teams and drive successful project outcomes. Excellent communication, presentation, and stakeholder management skills, with the ability to articulate complex technical concepts to diverse audiences. Bachelor’s or Master’s degree in Computer Science, AI, Machine Learning, or a related quantitative field. Good to Have: Prior experience in an AI leadership or principal architect role within an IT services or product development company. Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes) for deploying AI models. Experience with responsible AI principles, ethics, and bias mitigation strategies. Contributions to open-source AI projects or relevant publications. Key Skills: AI Architecture, Machine Learning, Deep Learning, NLP, Cloud AI Platforms, MLOps, Data Engineering, Python, Solution Design, Technical Leadership, Scalability, Performance Optimization. Skills Data Science, Artificial Intelligence, Data Engineering About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. 
Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.

Posted 1 month ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Experience in AWS SageMaker development, pipelines, and real-time and batch transform jobs. Expertise in AWS and Terraform/CloudFormation for IaC. Experience with AWS networking concepts. Coding skills in Python with TensorFlow, PyTorch, or scikit-learn. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Software Engineer – Backend SOL00054 Job Type: Full Time Location: Hyderabad, Telangana Experience Required: 5-7 Years CTC: 13-17 LPA Job Description: Our client, headquartered in the USA with offices globally, is looking for a Backend Software Engineer to join our team responsible for building the core backend infrastructure for our MLOps platform on AWS. The systems you help build will enable feature engineering, model deployment, and model inference at scale – in both batch and online modes. You will collaborate with a distributed cross-functional team to design and build scalable, reliable systems for machine learning workflows. Key Responsibilities: Design, develop, and maintain backend components of the MLOps platform hosted on AWS. Build and enhance RESTful APIs and microservices using Python frameworks like Flask, Django, or FastAPI. Work with WSGI/ASGI web servers such as Gunicorn and Uvicorn. Implement scalable and performant solutions using concurrent programming (AsyncIO). Develop automated unit and functional tests to ensure code reliability. Collaborate with DevOps engineers to integrate CI/CD pipelines and ensure smooth deployments. Participate in on-call rotation to support production issues and ensure high system availability. Mandatory Skills: Strong backend development experience using Python with Flask, Django, or FastAPI. Experience working with WSGI/ASGI web servers (e.g., Gunicorn, Uvicorn). Hands-on experience with AsyncIO or other asynchronous programming models in Python. Proficiency with unit and functional testing frameworks. Experience working with AWS (or at least one public cloud platform). Familiarity with CI/CD practices and tooling. Nice to have Skills: Experience developing Kafka client applications in Python. Familiarity with MLOps platforms like AWS SageMaker, Kubeflow, or MLflow. Exposure to Apache Spark or similar big data processing frameworks. 
Experience with Docker and container platforms such as AWS ECS or EKS. Familiarity with Terraform, Jenkins, or other DevOps/IaC tools. Knowledge of Python packaging (Wheel, PEX, Conda). Experience with metaprogramming in Python. Education: Bachelor’s degree in Computer Science, Engineering, or a related field.
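The backend posting above calls for concurrent programming with AsyncIO behind FastAPI-style services. A minimal stdlib sketch of the pattern — the "feature lookup" calls and their names are invented stand-ins for real I/O-bound dependencies:

```python
import asyncio

async def fetch_feature(name: str, delay: float) -> dict:
    """Stand-in for an I/O-bound call (e.g., a feature-store or model lookup)."""
    await asyncio.sleep(delay)
    return {"feature": name, "value": 42}

async def score_request():
    # Fan the lookups out concurrently instead of awaiting them one by one:
    # total latency is roughly max(delays), not sum(delays).
    results = await asyncio.gather(
        fetch_feature("tenure", 0.05),
        fetch_feature("usage", 0.05),
        fetch_feature("region", 0.05),
    )
    return {r["feature"]: r["value"] for r in results}

features = asyncio.run(score_request())
```

Inside a FastAPI handler the same body would simply be `await`-ed rather than wrapped in `asyncio.run`, since the framework already runs an event loop.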

Posted 1 month ago

Apply

8.0 - 15.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description Job Title: AI Architect Location: Kochi/Trivandrum Experience 8-15 Years About the Role: We are seeking a highly experienced and visionary AI Architect to lead the design and implementation of cutting-edge AI solutions that will drive our company's AI transformation. This role is critical in bridging the gap between business needs, technical feasibility, and the successful deployment of AI initiatives across our product and delivery organizations. Key Responsibilities AI Strategy & Roadmap: Define and drive the AI architectural strategy and roadmap, ensuring alignment with overall business objectives and the company's AI transformation goals. Solution Design & Architecture Lead the end-to-end architectural design of complex AI/ML solutions, including data pipelines, model training, deployment, and monitoring. Technology Evaluation & Selection Evaluate and recommend appropriate AI technologies, platforms, and tools (e.g., machine learning frameworks, cloud AI services, MLOps platforms) to support scalable and robust AI solutions. Collaboration & Leadership Partner closely with product teams, delivery organizations, and data scientists to translate business requirements into technical specifications and architectural designs. Provide technical leadership and guidance to development teams. Best Practices & Governance Establish and enforce AI development best practices, coding standards, and governance policies to ensure high-quality, secure, and compliant AI solutions. Scalability & Performance Design AI solutions with scalability, performance, and reliability in mind, anticipating future growth and evolving business needs. Innovation & Research Stay abreast of the latest advancements in AI, machine learning, and related technologies, identifying opportunities for innovation and competitive advantage. Mentorship & Upskilling Mentor and upskill internal teams on AI architectural patterns, emerging technologies, and best practices. 
Key Requirements 8-15 years of experience in architecting and implementing complex AI/ML solutions, with a strong focus on enterprise-grade systems. Deep understanding of machine learning algorithms, deep learning architectures, and natural language processing (NLP) techniques. Proven experience with major cloud AI platforms (e.g., AWS SageMaker, Azure ML, Google AI Platform) and MLOps principles. Strong proficiency in programming languages commonly used in AI development (e.g., Python, Java). Experience with big data technologies (e.g., Spark, Hadoop) and data warehousing solutions. Demonstrated ability to lead cross-functional technical teams and drive successful project outcomes. Excellent communication, presentation, and stakeholder management skills, with the ability to articulate complex technical concepts to diverse audiences. Bachelor’s or Master’s degree in Computer Science, AI, Machine Learning, or a related quantitative field. Good To Have Prior experience in an AI leadership or principal architect role within an IT services or product development company. Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes) for deploying AI models. Experience with responsible AI principles, ethics, and bias mitigation strategies. Contributions to open-source AI projects or relevant publications. Key Skills: AI Architecture, Machine Learning, Deep Learning, NLP, Cloud AI Platforms, MLOps, Data Engineering, Python, Solution Design, Technical Leadership, Scalability, Performance Optimization. Skills Data Science, Artificial Intelligence, Data Engineering

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Description Job Title: Cloud AI/ML Engineer – Generative AI (AWS) About The Role We are seeking a skilled and forward-thinking Cloud AI/ML Engineer to lead the design, development, and support of scalable, secure, and high-performance generative AI applications on AWS . You’ll operate at the crossroads of cloud engineering and artificial intelligence, enabling rapid and reliable delivery of cutting-edge AI solutions using services like Amazon Bedrock and SageMaker . This is an opportunity to join a collaborative team driving innovation in AI infrastructure, with a strong focus on automation, security, observability, and performance optimization. Roles And Responsibilities AI/ML Integration Utilize Amazon Bedrock for leveraging foundation models and Amazon SageMaker for training and deploying custom models. Design and maintain scalable generative AI applications using AWS-native AI/ML tools and services. Deployment & Operations Build and manage CI/CD pipelines to automate infrastructure provisioning and model lifecycle workflows. Monitor infrastructure and model performance using Amazon CloudWatch and other observability tools. Ensure production-grade availability, fault tolerance, and performance of deployed AI systems. Security & Compliance Enforce security best practices using IAM, data encryption, and access control policies. Maintain compliance with relevant organizational, legal, and industry-specific data protection standards. Collaboration & Support Partner with data scientists, ML engineers, and product teams to translate requirements into resilient cloud-native solutions. Diagnose and resolve issues related to model behavior, infrastructure health, and AWS service usage. Optimization & Documentation Continuously assess and optimize model performance, infrastructure cost, and resource utilization. Document deployment workflows, architectural decisions, and operational runbooks for team-wide reference. 
Mentorship & Guidance Mentor peers and junior engineers by sharing knowledge of AWS services and generative AI best practices. Must-Have Skills & Experience Expertise in AWS services, particularly SageMaker, Bedrock, EC2, IAM, and related cloud-native tools. Strong coding skills in Python, with experience in developing AI applications. Hands-on experience with Docker for containerization and familiarity with Kubernetes for orchestration. Proven experience building and maintaining CI/CD pipelines for AI/ML workloads. Knowledge of data security, access control, and monitoring within cloud environments. Experience managing cloud-based data flows and infrastructure for ML workflows. Good-to-Have (Preferred) Skills AWS certifications, such as: AWS Certified Machine Learning – Specialty AWS Certified DevOps Engineer Understanding of responsible AI practices, particularly in generative model deployment. Experience in cost optimization, auto-scaling, and resource management for production AI workloads. Familiarity with tools like Terraform, CloudFormation, or Pulumi for infrastructure as code (IaC). Exposure to multi-cloud or hybrid cloud strategies involving AI/ML services. Skills AWS, Python, Docker, Kubernetes

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Design, implement, and manage cloud infrastructure on AWS using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Maintain and enhance CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, Jenkins, or ArgoCD. Ensure platform reliability, scalability, and high availability across development, staging, and production environments. Automate operational tasks, environment provisioning, and deployments using scripting languages such as Python, Bash, or PowerShell. Enable and maintain Amazon SageMaker environments for scalable ML model training, hosting, and pipelines. Integrate AWS Bedrock to provide foundation model access for generative AI applications, ensuring security and cost control. Manage and publish curated infrastructure templates through AWS Service Catalog to enable consistent and compliant provisioning. Collaborate with security and compliance teams to implement best practices around IAM, encryption, logging, monitoring, and cost optimization. Implement and manage observability tools like Amazon CloudWatch, Prometheus/Grafana, or ELK for monitoring and alerting. Support container orchestration environments using EKS (Kubernetes), ECS, or Fargate. Contribute to incident response, post-mortems, and continuous improvement of the platform’s operational excellence. Required Skills & Qualifications Bachelor’s degree in Computer Science, Engineering, or related field (or equivalent experience). 5+ years of hands-on experience with AWS cloud services. Strong experience with Terraform, AWS CDK, or CloudFormation. Proficiency in Linux system administration and networking fundamentals. Solid understanding of IAM policies, VPC design, security groups, and encryption. Experience with Docker and container orchestration using Kubernetes (EKS preferred). Hands-on experience with CI/CD tools and version control (Git). Experience with monitoring, logging, and alerting systems. 
Strong troubleshooting skills and ability to work independently or in a team. Preferred Qualifications (Nice To Have) AWS Certification (e.g., AWS Certified DevOps Engineer, Solutions Architect Associate/Professional). Experience with serverless technologies like AWS Lambda, Step Functions, and EventBridge. Experience supporting machine learning or big data workloads on AWS.

Posted 1 month ago

Apply

15.0 years

0 Lacs

Nagpur, Maharashtra, India

On-site

Job Description

Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time

Job Summary:
We are seeking a highly experienced Python Developer with a strong background in traditional machine learning and growing proficiency in generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You'll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.

Key Responsibilities:
Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP.
Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini.
Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs.
Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment.
Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts.
Mentor junior team members and review code, models, and system designs for robustness and maintainability.
Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities.
Optimize model and API performance, ensuring high availability, security, and scalability in production environments.

Core Skills & Experience:
Strong Python programming skills with 5+ years of applied ML/AI experience.
Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries.
Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks.
Proficient in REST API design using FastAPI and securing APIs in production environments.
Deep understanding of MySQL (query performance, schema design, transactions).
Hands-on experience with vector databases and embeddings for search, retrieval, and recommendation systems.
Strong foundation in software engineering practices: version control (Git), testing, CI/CD.

Preferred/Bonus Experience:
Deployment of AI solutions on cloud platforms (AWS, GCP, Azure).
Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI).
Experience with Docker, Kubernetes, and container orchestration.
Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks.
Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.

What We Offer:
Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI.
High autonomy and influence in architecting real-world AI solutions.
A dynamic and collaborative environment focused on continuous learning and innovation.
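The vector-database integration this role describes boils down to nearest-neighbour search over embeddings. A minimal NumPy sketch of the idea, using toy 3-dimensional vectors as stand-ins for real model embeddings (the documents and numbers here are hypothetical):

```python
import numpy as np

def top_k_semantic_search(query_vec, doc_vecs, k=2):
    """Return indices of the k most similar documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q  # cosine similarity of each document against the query
    return np.argsort(scores)[::-1][:k]

# Toy 3-d "embeddings"; a real system would store model-generated vectors
# in ChromaDB, FAISS, or Weaviate and query them the same way.
docs = np.array([
    [0.9, 0.1, 0.0],   # about pricing
    [0.0, 1.0, 0.1],   # about support
    [0.8, 0.2, 0.1],   # also about pricing
])
query = np.array([1.0, 0.0, 0.0])
print(top_k_semantic_search(query, docs))  # → [0 2]
```

A dedicated vector database adds persistence and approximate-nearest-neighbour indexing on top of exactly this arithmetic.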

Posted 1 month ago

Apply

7.0 - 9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Candidate Profile
Previous experience in building data science / algorithm-based products is a big advantage. Experience in handling healthcare data is desired.

Educational Qualification
Bachelors / Masters in Computer Science, Data Science, or related subjects from a reputable institution.

Typical Experience
7-9 years of industry experience in developing data science models and solutions. Able to quickly pick up new programming languages, technologies, and frameworks. Strong understanding of data structures and algorithms. Proven track record of implementing end-to-end data science modelling projects, providing guidance and thought leadership to the team. Strong experience in a consulting environment with a do-it-yourself attitude.

Primary Responsibility
As a Data Science Lead you will lead a team of analysts, data scientists, and engineers and deliver end-to-end solutions for pharmaceutical clients. You are expected to participate in client proposal discussions with senior stakeholders and provide thought leadership for the technical solution. You should be an expert in all phases of model development (EDA, hypothesis, feature creation, dimension reduction, data set clean-up, training models, model selection, validation, and deployment). You should have a deep understanding of statistical and machine learning methods: classification (logistic regression, SVM, decision tree, random forest, neural network), regression (linear regression, decision tree, random forest, neural network), and classical optimisation (gradient descent etc.). You must have thorough mathematical knowledge of correlation/causation, classification, recommenders, probability, stochastic processes, and NLP, and know how to apply them to a business problem. You should be able to help implement ML models in an optimized, sustainable framework, and are expected to gain business understanding of the healthcare domain in order to come up with relevant analytics use cases (e.g. HEOR / RWE / claims data analysis). You are also expected to keep the team up to date on the latest and greatest in ML and AI.

Technical Skill and Expertise
Expert-level proficiency in Python and SQL. Working knowledge of relational SQL and NoSQL databases, including Postgres and Redshift. Extensive knowledge of predictive and machine learning models in order to lead the team in implementing such techniques in real-world scenarios. Working knowledge of NLP techniques and BERT transformer models for solving complicated text-heavy data structures. Working knowledge of deep learning and unsupervised learning. Well versed with data structures, pre-processing, feature engineering, and sampling techniques. Good statistical knowledge to be able to analyse data. Exposure to open-source tools and to cloud platforms like AWS and Azure, including their services (e.g. Athena, SageMaker) and machine learning libraries, is a must. Exposure to LLMs such as Llama, ChatGPT, and Bard, and to prompt engineering, is an added advantage. Exposure to visualization tools like Tableau and Power BI is an added advantage.

Don't meet every job requirement? That's okay! Our company is dedicated to building a diverse, inclusive, and authentic workplace. If you're excited about this role, but your experience doesn't perfectly fit every qualification, we encourage you to apply anyway. You may be just the right person for this role or others.
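The logistic regression and classical gradient descent named in the posting can be illustrated together in a few lines of NumPy. This is a toy sketch on synthetic, linearly separable data with a fixed seed, not any client model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, linearly separable data: label is 1 when x1 + x2 > 1
X = rng.uniform(0, 1, size=(200, 2))
y = (X.sum(axis=1) > 1.0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Batch gradient descent on the logistic (cross-entropy) loss
w, b, lr = np.zeros(2), 0.0, 0.5
for _ in range(2000):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)   # dL/dw
    grad_b = (p - y).mean()           # dL/db
    w -= lr * grad_w
    b -= lr * grad_b

acc = ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean()
print(f"training accuracy: {acc:.2f}")
```

Libraries like scikit-learn wrap this same loss and optimisation behind `LogisticRegression`; the explicit loop just makes the gradient-descent step visible.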

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

AWS Data Engineer
Location: Remote (India)
Experience: 3+ Years
Employment Type: Full-Time

About the Role:
We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
Optimize pipeline performance and ensure reliability and fault tolerance
Collaborate with cross-functional teams including data scientists and analysts
Perform data transformations using Python, Pandas, and SQL
Maintain data integrity, quality, and security across the platform
Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions

Required Skills and Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
3+ years of experience in data engineering using AWS
Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
Solid understanding of data modeling, warehousing, and pipeline orchestration
Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:
Experience working with energy sector data or IoT/sensor-based data
Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, scikit-learn)
Familiarity with big data technologies like Apache Spark and Kafka
Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS DataBrew
AWS certifications (Data Analytics, Solutions Architect) are a plus
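Much of the sensor/time-series transformation work this role describes reduces to windowed aggregation. A minimal pandas sketch of the transform step, with hypothetical readings standing in for data a pipeline might pull from S3 (not a real Glue job):

```python
import pandas as pd

# Hypothetical raw sensor readings, as a pipeline might land them from S3
raw = pd.DataFrame({
    "sensor_id": ["a", "a", "a", "b", "b"],
    "ts": pd.to_datetime([
        "2025-01-01 00:05", "2025-01-01 00:20", "2025-01-01 01:10",
        "2025-01-01 00:30", "2025-01-01 00:45",
    ]),
    "temp_c": [20.0, 22.0, 21.0, 30.0, 32.0],
})

# Transform step: hourly mean temperature per sensor, analytics-ready
hourly = (
    raw.set_index("ts")
       .groupby("sensor_id")["temp_c"]
       .resample("1h")
       .mean()
       .reset_index()
)
print(hourly)
```

In a Glue job the same groupby/resample logic would typically run in PySpark over partitioned Parquet, but the shape of the transformation is identical.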

Posted 1 month ago

Apply

10.0 years

0 Lacs

India

Remote

Full Stack AI Developer – LLM & Workflow Automation
📍 Remote | 🕒 8–10 Years | 🧠 Python, React, LLMs, n8n

🧩 About the Role:
We're hiring a Full Stack AI Developer to build next-gen applications that combine intelligent chatbots, LLM workflows, and seamless UI/UX interfaces. You'll own features end-to-end, from backend APIs and AI integrations to frontend web experiences.

🎯 Key Responsibilities:
Design, develop, and deploy full-stack AI-powered applications
Build responsive web interfaces using React (or Angular/Vue)
Integrate LLMs (OpenAI, Claude, etc.) for smart assistants, summarization, etc.
Create and orchestrate automation flows with n8n
Build and maintain APIs using Python (FastAPI/Django)
Deploy solutions in cloud-native environments (AWS, Azure)
Work with cross-functional teams on feature delivery, testing, and scaling

✅ Must-Have Skills:
8–10 years of full-stack development experience
Strong in Python for backend and AI integrations (FastAPI, Django)
Proficient in React.js (or Angular/Vue) for building modern UIs
Hands-on experience with n8n automation workflows
Experience integrating LLMs (OpenAI, LangChain, GPT, Claude)
REST APIs, webhooks, and third-party integrations
Cloud platforms (AWS, Azure, or GCP)
CI/CD pipelines, Docker, and SQL/NoSQL databases

🌟 Nice to Have:
Experience with MLOps (MLflow, SageMaker, Kubeflow)
Familiarity with RAG pipelines and vector DBs (FAISS, Pinecone)
Semantic Kernel or multi-agent LLM frameworks
Azure certifications

This is a remote offshore position, with exciting long-term projects and the chance to work with a dynamic, global tech team.

To apply: Send your resume to info@ribbitzllc.com
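The RAG pipelines listed as nice-to-have follow a retrieve-then-prompt pattern. A minimal pure-Python sketch, using word overlap as a stand-in for real vector search (the documents and scoring here are hypothetical; FAISS or Pinecone would replace the `retrieve` step):

```python
def retrieve(query, docs, k=2):
    """Rank docs by word overlap with the query (stand-in for vector search)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Assemble retrieved context and the question into an LLM prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "Invoices are processed within 5 business days.",
    "Support tickets are answered within 24 hours.",
    "Password resets require email verification.",
]
prompt = build_prompt("How fast are support tickets answered?", docs)
print(prompt)
```

The resulting string is what gets sent to an LLM API (OpenAI, Claude, etc.); an n8n flow would wire the same retrieve → prompt → model-call steps as nodes.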

Posted 1 month ago

Apply

4.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Sr. Data Engineer
Location: Office-Based (Ahmedabad, India)

About Hitech
Hitech is a leading provider of Data, Engineering Services, and Business Process Solutions. With robust delivery centers in India and global sales offices in the USA, UK, and the Netherlands, we enable digital transformation for clients across industries including Manufacturing, Real Estate, and e-Commerce. Our Data Solutions practice integrates automation, digitalization, and outsourcing to deliver measurable business outcomes. We are expanding our engineering team and looking for an experienced Sr. Data Engineer to design scalable data pipelines, support ML model deployment, and enable insight-driven decisions.

Position Summary
We are seeking a Data Engineer / Lead Data Engineer with deep experience in data architecture, ETL pipelines, and advanced analytics support. This role is crucial for designing robust pipelines to process structured and unstructured data, integrate ML models, and ensure data reliability. The ideal candidate will be proficient in Python, R, SQL, and cloud-based tools, and possess hands-on experience in creating end-to-end data engineering solutions that support data science and analytics teams.

Key Responsibilities
Design and optimize data pipelines to ingest, transform, and load data from diverse sources.
Build programmatic ETL pipelines using SQL and related platforms.
Understand complex data structures and perform data transformation effectively.
Develop and support ML models such as Random Forest, SVM, clustering, and regression.
Create and manage scalable, secure data warehouses and data lakes.
Collaborate with data scientists to structure data for analysis and modeling.
Define solution architecture for layered data stacks ensuring high data quality.
Develop design artifacts including data flow diagrams, models, and functional documents.
Work with technologies such as Python, R, SQL, MS Office, and SageMaker.
Conduct data profiling, sampling, and testing to ensure reliability.
Collaborate with business stakeholders to identify and address data use cases.

Qualifications & Experience
4 to 6 years of experience in data engineering, ETL development, or database administration.
Bachelor's degree in Mathematics, Computer Science, or Engineering (B.Tech/B.E.).
Postgraduate qualification in Data Science or a related discipline preferred.
Strong proficiency in Python, SQL, advanced MS Office tools, and R.
Familiarity with ML concepts and integrating models into pipelines.
Experience with NoSQL systems like MongoDB, Cassandra, or HBase.
Knowledge of Snowflake, Databricks, and other cloud-based data tools.
ETL tool experience and understanding of data integration best practices.
Data modeling skills for relational and NoSQL databases.
Knowledge of Hadoop, Spark, and scalable data processing frameworks.
Experience with scikit-learn, TensorFlow, PyTorch, GPT, PySpark, etc.
Ability to build web scrapers and collect data from APIs.
Experience with Airflow or similar tools for pipeline automation.
Strong SQL performance tuning skills in large-scale environments.

What We Offer
Competitive compensation package based on skills and experience.
Opportunity to work with international clients and contribute to high-impact data projects.
Continuous learning and professional growth within a tech-forward organization.
Collaborative and inclusive work environment.

If you're passionate about building data-driven infrastructure to fuel analytics and AI applications, we look forward to connecting with you.

Anand Soni
Hitech Digital Solutions
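The data profiling responsibility mentioned above can be sketched in a few lines of pure Python. A toy example over a hypothetical column (real pipelines would run equivalent checks per column across a warehouse table):

```python
def profile_column(values):
    """Basic profile: row count, null rate, and distinct non-null values."""
    nulls = sum(1 for v in values if v is None)
    non_null = [v for v in values if v is not None]
    return {
        "rows": len(values),
        "null_rate": nulls / len(values) if values else 0.0,
        "distinct": len(set(non_null)),
    }

# Hypothetical sampled column values pulled from a source table
ages = [34, None, 28, 34, None, 41]
print(profile_column(ages))  # rows=6, null_rate≈0.33, distinct=3
```

Tools like AWS Glue DataBrew or Great Expectations automate profiles of exactly this shape, adding thresholds and alerting on top.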

Posted 1 month ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- 3+ years of non-internship professional software development experience
- 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
- Experience programming with at least one software programming language

AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon's Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS's services and features apart in the industry. As a member of the UC organization, you'll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services.

At AWS AI, we want to make it easy for our customers to train their deep learning workloads in the cloud. With Amazon SageMaker Training, we are building customer-facing services to empower data scientists and software engineers in their deep learning endeavors. As our customers rapidly adopt LLMs and Generative AI for their business, we're building the next-generation AI platform to accelerate their development. We're seeking a dedicated software engineer to drive building our next-generation AI compute platform that's optimized for LLMs and distributed training.
Key job responsibilities
As an SDE, you will be responsible for designing, developing, testing, and deploying distributed machine learning systems and large-scale solutions for our world-wide customer base. In this, you will collaborate closely with a team of ML scientists and customers to influence our overall strategy and define the team's roadmap. You'll assist in gathering and analyzing business and functional requirements, and translate requirements into technical specifications for robust, scalable, supportable solutions that work well within the overall system architecture. You will also drive the system architecture, spearhead best practices that enable a quality product, and help coach and develop junior engineers. A successful candidate will have an established background in engineering large-scale software systems, strong technical ability, great communication skills, and a motivation to achieve results in a fast-paced environment.

About You:
You are passionate about building platforms and products for large-scale deep learning model training (100+ billion parameter GPT, 1000s of GPU devices). You have a proven track record of bringing innovative research to customers. You are able to thrive and succeed in an entrepreneurial environment and not be hindered by ambiguity or competing priorities. Ownership, delivering results, thinking big, and analytical leadership are essential to success in this role. You have solid experience in multi-threaded asynchronous C++ or Go development. You have prior experience in one of: resource orchestrators like Slurm/Kubernetes, high performance computing, building scalable systems, or large language model training. This is a great team to join, with the chance to have a huge impact on AWS and the customers we serve worldwide!
A day in the life Every day will bring new and exciting challenges on the job while you: * Build and improve next-generation AI platform * Collaborate with internal engineering teams, leading technology companies around the world and open source community - PyTorch, NVIDIA/GPU * Create innovative products to run at scale on the AI platform, and see them launched in high volume production About the team Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS ? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. 
When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

- 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
- Bachelor's degree in computer science or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
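Distributed data-parallel training of the kind this role targets averages per-worker gradients each step before applying a shared update. A toy NumPy sketch of that all-reduce arithmetic (illustrative only, not SageMaker's actual implementation; the gradient values are made up):

```python
import numpy as np

def allreduce_mean(worker_grads):
    """Average gradients across workers, as an all-reduce would."""
    return np.mean(worker_grads, axis=0)

def sgd_step(params, worker_grads, lr=0.1):
    """One data-parallel SGD step: every worker applies the same averaged gradient."""
    return params - lr * allreduce_mean(worker_grads)

params = np.array([1.0, -2.0])
# Each of 4 workers computed a gradient on its own shard of the global batch
grads = [np.array([0.2, 0.0]), np.array([0.6, 0.4]),
         np.array([0.4, 0.4]), np.array([0.8, 0.0])]
# Averaged gradient is [0.5, 0.2], so the shared update yields [0.95, -2.02]
print(sgd_step(params, grads))
```

Real systems (PyTorch DDP over NCCL on GPU clusters) perform the same averaging with ring or tree all-reduce so that no single node has to gather every gradient.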

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Data Scientist organization within the Data and Analytics division is responsible for designing and implementing a unified data strategy that enables the efficient, secure, and governed use of data across the organization. We aim to create a trusted and customer-centric data ecosystem, built on a foundation of data quality, security, and openness, and guided by the Thomson Reuters Trust Principles. Our team is dedicated to developing innovative data solutions that drive business value while upholding the highest standards of data management and ethics.

About the role:
Work with low to minimum supervision to solve business problems using data and analytics.
Work in multiple business domain areas including Customer Experience and Service, Operations, Finance, Sales and Marketing.
Work with various business stakeholders to understand and document requirements.
Design an analytical framework to provide insights into a business problem.
Explore and visualize multiple data sets to understand the data available for problem solving.
Build end-to-end data pipelines to handle and process data at scale.
Build machine learning models and/or statistical solutions.
Build predictive models.
Use Natural Language Processing to extract insight from text.
Design database models (if a data mart or operational data store is required to aggregate data for modeling).
Design visualizations and build dashboards in Tableau and/or Power BI.
Extract business insights from the data and models.
Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards.
Work collaboratively with other team members.

About you:
Overall 3+ years' experience in technology roles.
Must have a minimum of 1 year of experience working in the data science domain.
Has used frameworks/libraries such as scikit-learn, PyTorch, Keras, NLTK.
Highly proficient in Python.
Highly proficient in SQL.
Experience with Tableau and/or Power BI.
Has worked with Amazon Web Services and SageMaker.
Ability to build data pipelines for data movement using tools such as Alteryx, Glue, Informatica.
Proficient in machine learning, statistical modelling, and data science techniques.
Experience with one or more of the following types of business analytics applications: predictive analytics for customer retention, cross-sales and new customer acquisition; pricing optimization models; segmentation; recommendation engines.
Experience in one or more of the following business domains: Customer Experience and Service, Finance, Operations.
Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations.
Excellent organizational, analytical and problem-solving skills.
Ability to communicate complex results in a simple and concise manner at all levels within the organization.
Ability to excel in a fast-paced, startup-like environment.

What's in it For You?

Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.

Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments.

At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies