1499 Vertex Jobs - Page 13

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Title: GCP Data Engineer
Location: Remote
Experience Required: 8 years
Position Type: Freelance / Contract

As a Senior Data Engineer focused on migrating pipelines from SAS to Google Cloud Platform (GCP) technologies, you will tackle intricate problems and create value for our business by designing and deploying reliable, scalable solutions tailored to the company's data landscape. You will lead the development of custom-built data pipelines on the GCP stack, ensuring seamless migration of existing SAS pipelines. Additionally, you will mentor junior engineers, define standards and best practices, and contribute to strategic planning for data initiatives.

Responsibilities:
● Lead the design, development, and implementation of data pipelines on the GCP stack, with a focus on migrating existing pipelines from SAS to GCP technologies.
● Develop modular and reusable code to support complex ingestion frameworks, simplifying the process of loading data into data lakes or data warehouses from multiple sources.
● Mentor and guide junior engineers, providing technical oversight and fostering their professional growth.
● Work closely with analysts, architects, and business process owners to translate business requirements into robust technical solutions.
● Use your coding expertise in scripting languages (Python, SQL, PySpark) to extract, manipulate, and process data effectively.
● Leverage your expertise in GCP technologies, including BigQuery, GCP Workflows, Dataflow, Cloud Scheduler, Secret Manager, Batch, Cloud Logging, Cloud SDK, Google Cloud Storage, IAM, and Vertex AI, to enhance data warehousing solutions.
● Lead efforts to maintain high standards of development practice, including technical design, solution development, systems configuration, testing, documentation, issue identification and resolution, and clean, modular, sustainable code.
● Understand and implement CI/CD processes using tools like Pulumi, GitHub, Cloud Build, Cloud SDK, and Docker.
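For context on the kind of ingestion work this role describes, here is a minimal sketch of loading a file from Google Cloud Storage into BigQuery with the Python client; the project, bucket, and table names are hypothetical placeholders:

```python
from google.cloud import bigquery

# Hypothetical identifiers, for illustration only.
TABLE_ID = "my-project.analytics.sales_raw"
SOURCE_URI = "gs://my-bucket/exports/sales_2024.csv"

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,      # skip the header row
    autodetect=True,          # infer schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Kick off the load job and block until it finishes.
load_job = client.load_table_from_uri(SOURCE_URI, TABLE_ID, job_config=job_config)
load_job.result()

table = client.get_table(TABLE_ID)
print(f"Loaded {table.num_rows} rows into {TABLE_ID}")
```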

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

🚀 We’re Hiring: Generative AI Architect | Noida (Hybrid)
Experience: 10+ years (with 2–3 years in GenAI/LLMs)
Type: Full-Time | Hybrid (Noida)

Are you passionate about shaping the future of AI? We’re on the lookout for a Generative AI Architect to lead the design, development, and deployment of next-generation GenAI solutions that power enterprise-scale applications. This is your opportunity to work at the intersection of AI innovation and real-world impact, building intelligent systems using cutting-edge models like GPT, Claude, LLaMA, and Mistral.

🔍 What You’ll Do:
- Architect and implement secure, scalable GenAI solutions using LLMs.
- Design state-of-the-art RAG pipelines using LangChain, LlamaIndex, FAISS, etc.
- Lead prompt engineering and build reusable modules (e.g., chatbots, summarizers).
- Deploy on cloud-native platforms: AWS Bedrock, Azure OpenAI, GCP Vertex AI.
- Integrate GenAI into enterprise products in collaboration with cross-functional teams.
- Drive MLOps best practices for CI/CD, monitoring, and observability.
- Explore the frontier of GenAI: multi-agent systems, fine-tuning, and autonomous agents.
- Ensure compliance, security, and data governance in all AI systems.

✅ What We’re Looking For:
- 8+ years in AI/ML, with 2–3 years in LLMs or GenAI.
- Proficiency in Python, Transformers, LangChain, OpenAI SDKs.
- Experience with vector databases (Pinecone, Weaviate, FAISS).
- Hands-on with cloud platforms: AWS, Azure, GCP.
- Knowledge of LLM orchestration (LangGraph, AutoGen, CrewAI).
- Familiarity with tools like MLflow, Docker, Kubernetes, FastAPI.
- Strong understanding of GenAI evaluation metrics (BERTScore, BLEU, GPTScore).
- Excellent communication and architectural leadership skills.

🌟 Nice to Have:
- Experience fine-tuning open-source LLMs (LoRA, QLoRA).
- Exposure to multi-modal AI systems (text-image, speech).
- Domain knowledge in BFSI, Healthcare, Legal, or EdTech.
- Published research or open-source contributions in GenAI.

📍 Location: Noida (Hybrid)
🌐 Apply now and be part of the GenAI transformation. Let’s build the future, one intelligent system at a time. 💡

#GenerativeAI #LLM #AIArchitect #MachineLearning #LangChain #VertexAI #GenAIJobs #PromptEngineering #Hiring #TechJobs #AIInnovation
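As a rough illustration of the retrieval step at the heart of the RAG pipelines this role designs, here is a minimal FAISS sketch; the embedding dimension, corpus, and query vectors are stand-ins, and a real pipeline would compute embeddings with an actual model rather than random data:

```python
import faiss
import numpy as np

DIM = 384  # hypothetical embedding dimension

# Stand-ins for real document embeddings (a production pipeline
# would compute these with an embedding model, not random numbers).
rng = np.random.default_rng(0)
doc_vectors = rng.standard_normal((1000, DIM)).astype("float32")
faiss.normalize_L2(doc_vectors)  # normalize so inner product = cosine similarity

index = faiss.IndexFlatIP(DIM)  # exact inner-product search
index.add(doc_vectors)

# Embed the query the same way, then fetch the top-k nearest documents.
query = rng.standard_normal((1, DIM)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)
print(ids[0], scores[0])  # document indices to pass to the LLM as context
```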

Posted 2 weeks ago

Apply

0.0 years

0 - 0 Lacs

Chhawni, Indore, Madhya Pradesh

On-site

Job Title: Peon
Location: Db vertex technologies, Chhawni, Indore
Job Types: Full-time, Permanent, Fresher

Job Responsibilities:
- Handling office cleaning and maintenance
- Assisting in document filing and movement
- Serving tea, coffee, and refreshments
- Running office errands as needed
- Helping staff with basic tasks
- Ensuring a clean and organized workspace

Requirements:
- Hardworking and punctual
- Honest and responsible
- Ability to follow instructions

Benefits:
- Fixed salary
- Friendly work environment
- Other perks as per company policy

If you meet the above requirements and are ready to start your career in a dynamic role, we encourage you to apply!

Pay: ₹7,000.00 - ₹10,000.00 per month
Schedule: Day shift
Ability to commute/relocate: Chhawni, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Job Type: Full-Time
Experience Required: 8+ years in Software QA/Testing, 3+ years in Test Automation using Playwright, 2+ years in AI/ML project environments

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and consistent intelligent behavior.

Key Responsibilities:

Test Automation Framework Design & Implementation
· Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
· Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
· Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
· Partner with data scientists and ML engineers to understand model behaviors, inference workflows, and output formats.
· Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
· Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
· Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
· Lead the implementation of end-to-end automation for:
  o Web interfaces (React, Angular, or other SPA frameworks)
  o Backend services (REST, GraphQL, WebSockets)
  o ML model integration endpoints (real-time inference APIs, batch pipelines)
· Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
· Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
· Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
· Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
· Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
· Lead and mentor a team of automation and QA engineers across multiple projects.
· Act as the quality champion across the AI platform, influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
· Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
· Collaborate across disciplines (Frontend, Backend, DevOps, MLOps, and Product Management) to ensure complete testability.
· Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills:
· Strong hands-on expertise with Playwright (TypeScript/JavaScript).
· Experience building custom automation frameworks and utilities from scratch.
· Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
· Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
· Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
· Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
· Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
· Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
· Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
· Proven experience leading QA/Automation teams (4+ engineers).
· Strong documentation, code review, and stakeholder communication skills.
· Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
· Experience with AI explainability frameworks like LIME, SHAP, or What-If Tool.
· Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
· Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
· Experience with GraphQL, Kafka, or event-driven architecture testing.
· QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
· Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related technical discipline.
· Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
· Work on cutting-edge AI platforms shaping the future of [industry/domain].
· Collaborate with world-class AI researchers and engineers.
· Drive the quality of products used by [millions of users / high-impact clients].
· Opportunity to define test automation practices for AI, one of the most exciting frontiers in tech.
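To illustrate the golden-dataset approach to non-deterministic outputs mentioned above, here is a minimal Playwright (Python) sketch that checks a chat UI's reply against a reference answer with a similarity tolerance. The URL, selectors, golden answer, and threshold are all hypothetical, and the cheap lexical similarity is a stand-in for the embedding-based comparison a real suite might use:

```python
from difflib import SequenceMatcher
from playwright.sync_api import sync_playwright

# Hypothetical golden dataset: prompt -> reference answer.
GOLDEN = {"What are your store hours?": "We are open 9am to 6pm, Monday through Saturday."}
THRESHOLD = 0.6  # tolerance for paraphrased but equivalent answers (tune per feature)

def similarity(a: str, b: str) -> float:
    """Cheap lexical similarity; real suites might use embedding distance instead."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://app.example.com/chat")  # hypothetical URL
    for prompt, golden in GOLDEN.items():
        page.fill("#chat-input", prompt)       # hypothetical selectors
        page.click("#send-button")
        reply = page.locator(".bot-message").last.inner_text()
        score = similarity(reply, golden)
        assert score >= THRESHOLD, f"Reply drifted from golden answer (score={score:.2f})"
    browser.close()
```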

Posted 2 weeks ago

Apply

18.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Brief
House of Shipping provides business consultancy and advisory services for shipping and logistics companies. House of Shipping's commitment to its customers begins with developing an understanding of their business fundamentals. We are hiring on behalf of one of our key US-based clients: a globally recognized provider of flexible and scalable outsourced warehousing solutions, designed to adapt to the evolving demands of today's supply chains. House of Shipping is currently looking to identify a high-caliber Data Science Lead. This is an on-site position in Hyderabad.

Background and experience:
- 15–18 years in data science, with 5+ years in leadership roles
- Proven track record in building and scaling data science teams in logistics, e-commerce, or manufacturing
- Strong understanding of statistical learning, ML architecture, productionizing models, and impact tracking

Job purpose:
To lead enterprise-scale data science initiatives in supply chain optimization, forecasting, network analytics, and predictive maintenance. This role blends technical leadership with strategic alignment across business units and manages advanced analytics teams to deliver measurable business impact.

Main tasks and responsibilities:
- Define and drive the data science roadmap across forecasting (demand, returns), route optimization, warehouse simulation, inventory management, and fraud detection
- Architect end-to-end pipelines with engineering teams, from data ingestion and model development to API deployment
- Lead the design and deployment of ML models using Python (Scikit-Learn, XGBoost, PyTorch, LightGBM) and MLOps tools like MLflow, Vertex AI, or AWS SageMaker
- Collaborate with operations, product, and technology to prioritize AI use cases and define business metrics
- Manage experimentation frameworks (A/B testing, simulation models) and statistical hypothesis testing
- Mentor team members in model explainability, interpretability, and ethical AI practices
- Ensure robust model validation, drift monitoring, retraining schedules, and version control
- Contribute to organizational data maturity: feature stores, reusable components, metadata tracking
- Own team hiring, capability development, project estimation, and stakeholder presentations
- Collaborate with external vendors, universities, and open-source projects where applicable

Education requirements:
- Bachelor's, Master's, or PhD in Computer Science, Mathematics, Statistics, or Operations Research
- Preferred: certifications in cloud ML stacks (AWS/GCP/Azure), MLOps, or applied AI

Competencies and skills:
- Strategic vision in AI applications across supply chain
- Team mentorship and delivery ownership
- Expertise in statistical and ML frameworks
- MLOps pipeline management and deployment best practices
- Strong business alignment and executive communication
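As a toy illustration of the demand-forecasting work this role leads, here is a minimal XGBoost regression sketch on synthetic data; the features, hyperparameters, and demand signal are placeholders, not a claim about the client's models:

```python
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for demand history: [day_of_week, promo_flag, lag_7_sales].
rng = np.random.default_rng(42)
X = rng.random((2000, 3))
y = 50 + 30 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 5, 2000)  # fake demand signal

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

preds = model.predict(X_test)
print(f"MAE: {mean_absolute_error(y_test, preds):.2f}")
```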

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mohali district, India

On-site

Software Developer - Full Stack (React/Python)
nCare, Inc., California, US | www.ncaremd.com

Position Summary
We are seeking an experienced Software Developer with 3+ years of hands-on development experience to join our dynamic engineering team. The ideal candidate will be proficient in modern frontend technologies (React, NextJS), backend development (Python), databases (PostgreSQL), and cloud platforms (Google Cloud), while demonstrating expertise in AI-powered development tools and workflows. Open to working on a project/contract basis too.

Key Responsibilities

Development & Engineering
- Design, develop, and maintain scalable web applications using React and NextJS
- Build robust backend services and APIs using Python (FastAPI framework)
- Design and optimize PostgreSQL databases, write efficient queries, and manage database migrations
- Implement responsive, user-friendly interfaces with modern JavaScript, HTML5, and CSS3
- Develop and optimize database interactions and data pipelines
- Ensure code quality through comprehensive testing, code reviews, and debugging

Cloud & Infrastructure
- Deploy and manage applications on Google Cloud Platform (GCP)
- Utilize Google Cloud services including Cloud Run, Cloud Storage, Cloud SQL, Vertex AI, and others
- Implement CI/CD pipelines and DevOps best practices
- Monitor application performance and optimize for scalability

AI-Enhanced Development
- Leverage AI development tools (GitHub Copilot, Gemini Code Assist, or similar) to accelerate development cycles
- Integrate AI/ML capabilities into applications using Google Cloud AI services
- Stay current with emerging AI tools and incorporate them into development workflows
- Contribute to improving team productivity through AI-assisted coding practices

Collaboration & Communication
- Work closely with cross-functional teams including designers, product managers, and other developers
- Participate in code reviews and provide constructive feedback to team members
- Document technical solutions and maintain clear project documentation
- Communicate technical concepts effectively to both technical and non-technical stakeholders

Required Qualifications

Technical Skills
- 3+ years of professional software development experience

Frontend Development:
- Proficiency in React.js and NextJS
- Strong knowledge of JavaScript (ES6+), HTML5, CSS3
- Experience with state management (Redux, Context API)
- Familiarity with modern build tools (Webpack, Vite) and package managers (npm, yarn)

Backend Development:
- Strong Python programming skills
- Experience with web frameworks (Django, Flask, or FastAPI)
- Knowledge of RESTful API design and implementation
- Proficiency with PostgreSQL database design, optimization, and management
- Experience with SQL queries, database migrations, and ORM frameworks
- Additional experience with NoSQL databases is a plus

Google Cloud Platform:
- Hands-on experience with GCP services and deployment
- Understanding of cloud architecture patterns
- Experience with containerization (Docker) and orchestration

AI Development Tools:
- Demonstrated experience using AI-powered development tools (Copilot, ChatGPT, Claude, Gemini, etc.)
- Ability to effectively prompt and collaborate with AI assistants
- Experience optimizing development workflows with AI tools

Core Competencies
- Problem-Solving: strong analytical and critical thinking skills with the ability to debug complex issues
- Quick Learner: demonstrated ability to rapidly adapt to new technologies and frameworks
- Communication: excellent verbal and written communication skills
- Team Collaboration: experience working in agile development environments

Additional Requirements
- Experience with version control systems (Git) and collaborative development workflows
- Knowledge of building AI agents
- Understanding of software testing principles and frameworks (Jest, pytest)
- Knowledge of web performance optimization and security best practices
- Familiarity with responsive design and cross-browser compatibility

Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience
- Experience with TypeScript
- Knowledge of GraphQL and modern API technologies
- Familiarity with machine learning concepts and implementation
- Previous experience in agile/scrum development methodologies
- Contributions to open-source projects or an active GitHub profile
- Experience with monitoring and logging tools
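For flavor, a minimal FastAPI sketch of the kind of backend service this role builds; the resource model, endpoints, and in-memory store are hypothetical stand-ins for a real PostgreSQL-backed service:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Demo API")  # hypothetical service name

class Patient(BaseModel):
    name: str
    age: int

# In-memory stand-in for a PostgreSQL table, for illustration only.
_db: dict[int, Patient] = {}

@app.post("/patients/{patient_id}")
def create_patient(patient_id: int, patient: Patient) -> Patient:
    _db[patient_id] = patient
    return patient

@app.get("/patients/{patient_id}")
def read_patient(patient_id: int) -> Patient:
    if patient_id not in _db:
        raise HTTPException(status_code=404, detail="Patient not found")
    return _db[patient_id]

# Run locally with: uvicorn main:app --reload
```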

Posted 2 weeks ago

Apply

1.0 - 3.0 years

13 - 18 Lacs

Bengaluru

Work from Office

Job Area: Engineering Group > Software Engineering

General Summary:
As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field.
- 1-3 years of work experience in embedded software and/or drivers.
- Detail-oriented and highly organized, with strong analytic and problem-solving skills.
- Extremely strong knowledge of C/C++ programming and ARM assembly language.
- Solid understanding of overall embedded system architecture.
- Experience in 2D and 3D graphics technology and standards such as OpenGL, OpenGL ES/EGL, and Vulkan.
- Experience in multimedia on embedded systems and the use of graphics in a highly integrated system.
- Experience and/or knowledge of the use of the GPU as a compute engine; GPGPU and OpenCL is an asset.
- Experience with virtualization technologies across CPU and MM hardware accelerators.
- Experience with GPU optimization, advanced rendering, and latency optimizations, and the ability to identify and isolate performance issues in graphics applications.
- Experience with design and implementation of modern 3D graphics applications using the OpenGL ES API is a plus.
- Experience writing vertex and fragment shaders using shading languages such as GLSL is a plus.
- Knowledge of one or more of the following operating systems is preferred: Android, QNX, embedded Linux, Genivi, Integrity.
- Knowledge of graphics frameworks (Kanzi, QT) is a plus.
- Fluent in industry-standard software tools: SW/HW debuggers, code revision control systems (Git, Perforce), IDEs, and build tools.
- Strong communication skills (written and verbal), working with teams across multiple time zones.
- A passion for excellence in programming and exceeding goals.

Education:
- Required: Bachelor's in Computer Engineering, Computer Science, and/or Electrical Engineering.
- Preferred: Master's in Computer Engineering, Computer Science, and/or Electrical Engineering.

Applicants:
Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Please do not forward resumes to our jobs alias, Qualcomm employees, or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.

If you would like more information about this role, please contact Qualcomm Careers.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Spyne
At Spyne, we are transforming how cars are marketed and sold with cutting-edge Generative AI. What started as a bold idea (using AI-powered visuals to help auto dealers sell faster online) has evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A funding from Accel, Vertex Ventures, and other top investors, we’re scaling at breakneck speed:
- Launched industry-first AI-powered Image, Video & 360° solutions for automotive dealers
- Launching a GenAI-powered Automotive Retail Suite to power Inventory, Marketing, and CRM for dealers
- Onboarded 1500+ dealers across the US, EU, and other key markets in the two years since launch
- Gearing up to onboard 10K+ dealers in a global market of 200K+ dealers
- 150+ member team with a near-equal split between R&D and GTM

Learn more about our products: Spyne AI Products - StudioAI, RetailAI
Series A announcement: CNBC-TV18, Yourstory

What are we looking for?
We are looking for Account Executives who can sell, hustle, and thrive in chaos, working on B2B SaaS sales and driving revenue growth. This role is responsible for building relationships with car dealers while helping shape the overall sales strategy in the US region.

📍 Location: Gurugram (Work from Office, 5 days a week)
🌎 Shift Timings: US Shift (6 PM – 3 AM IST)

🚀 Why this role?
At Spyne, we believe that a high-performing sales team is the key to growth. As an Account Executive - US, you will be the driving force behind our expansion in the US market. This role offers the opportunity to work with a rapidly growing AI tech company, build strong client relationships, and make a significant impact on our business. If you are passionate about sales, thrive in a fast-paced environment, and want to be part of an exciting growth journey, this role is for you!

📌 What will you do?
- Generate leads, nurture prospects, and close high-value deals.
- Own and exceed annual sales targets.
- Deliver compelling product demos and refine sales strategies.
- Build long-term client relationships and explore upsell opportunities.
- Develop and execute account growth plans.
- Communicate value propositions effectively.
- Use a Challenger-based selling approach to close complex deals.

🏆 What will make you successful in this role?
- Deep understanding of the US market: experience selling to SMBs and enterprise accounts in the US.
- Sales expertise and strong communication skills: a proven track record of achieving and exceeding sales targets, and the ability to engage clients effectively via phone, email, and presentations.
- Relationship-building skills: ability to foster long-term, trusting relationships with clients.
- Strategic thinking: ability to identify and execute upselling and cross-selling opportunities.
- Process-oriented mindset: proficiency with CRM software like HubSpot to manage and track sales efforts.

📊 What will a typical quarter at Spyne look like?
- Identify and close high-value deals with SMBs and enterprise clients in the US market.
- Conduct engaging product demos to convert prospects into long-term customers.
- Build and manage a strong sales pipeline while exceeding set targets.
- Collaborate with internal teams to ensure a seamless customer experience.
- Track sales metrics and refine strategies to improve conversion rates and revenue.

🔹 How will we set you up for success?
- Comprehensive onboarding and training to help you understand our product and market.
- Continuous learning and development opportunities to enhance your sales skills.
- A culture that values innovation, customer obsession, and long-term success.
- Access to cutting-edge AI technology that makes selling more effective and impactful.

🎯 What you must have
- Bachelor’s or Master’s degree with 3-5 years of relevant sales experience (US sales experience preferred).
- Experience working with SMBs/enterprise accounts in the US market.
- Proficiency with HubSpot or other CRM software.
- Prior experience as a sales professional with a track record of achieving sales quotas.

🚀 Why Spyne?
- Strong culture: a supportive and collaborative work environment.
- Transparency & trust: high levels of autonomy and decision-making.
- Competitive salary & equity: stock options for top performers.
- Health insurance: coverage for employees and dependents, including GMC, GPA, and GTLI coverage.
- Dynamic growth environment: join a high-growth startup and accelerate your career.

📢 If you’re a go-getter, passionate about B2B SaaS sales, and excited to drive revenue growth in a Series A startup, apply now! 🚀

Posted 2 weeks ago

Apply

10.0 - 15.0 years

10 - 14 Lacs

Ahmedabad

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: SAP FI S/4HANA Accounting
Good-to-have skills: NA
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for managing the team and ensuring successful project delivery. Your typical day will involve collaborating with multiple teams, making key decisions, and providing solutions to problems that apply across multiple teams.

Roles & Responsibilities:
- Expertise in tax and its application in SAP Finance modules and aligned business processes.
- Own end-to-end responsibility for the Indirect Tax solution being deployed in S/4HANA.
- Extensive experience with SAP FI tax configuration.
- Knowledge of country-specific indirect tax scenarios with applicable business scenarios.
- Cross-functional knowledge of OTC, PTP, and Intercompany.
- Prepare functional specification documents for enhancements related to IDT.
- Design, build, and support integrations to external tax authorities where needed.
- Ability to understand complex business/tax requirements within the business processes and communicate effectively with a non-technical audience.
- Support testing (integration, UAT), cutover, go-live, and hypercare.
- Experience with Vertex interface administration (added advantage).

Professional & Technical Skills:
- Must-have skills: proficiency in SAP FI CO Finance.
- Extensive knowledge of S/4HANA implementation of Tax & FICO.
- Minimum 6-10+ years of experience in implementation / application enhancement / production support projects in Finance.
- Minimum experience of 4+ implementation projects.
- Knowledge and hands-on experience across all SAP Finance modules and DevOps methodology.
- Good to have: hands-on knowledge and experience with support tools (ServiceNow, JIRA, ChaRM).
- Effective business communication and stakeholder management skills.

Additional Information:
- The candidate should have a minimum of 12 years of experience in SAP FI CO Finance.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Position – Sales Coordinator
Company – Attentive OS Pvt Ltd
Location – Remote - India
Department – Growth

About Attentive.ai
Attentive.ai is a fast-growing vertical SaaS startup backed by Peak XV (Surge), InfoEdge, and Vertex Ventures. We build innovative software solutions for the landscape, paving, and construction industries in the United States. Our mission is to help these businesses improve operations and win more work through AI-powered takeoffs and a streamlined software platform. We’re looking for a resourceful and highly motivated professional to join our Growth team. This role will support sales execution, deal flow operations, partner outreach, and executive-level initiatives, making it ideal for someone who thrives in a fast-paced, high-ownership support role.

Job Description
The ideal candidate is a self-starter who brings structure, initiative, and attention to detail. This role will support Account Executives, assist the President of Field Services, and act as a key communication bridge between our internal teams and external stakeholders. You’ll work across CRM, partner communications, customer preparation, and executive projects to ensure smooth sales execution and strategic growth initiatives.

Responsibilities of the Role
- Manage deal flow and communication on behalf of Account Executives, including outreach, follow-ups, and recap emails.
- Assist in CRM management (HubSpot): ensuring pipeline hygiene, updating deal data, logging call notes, and maintaining accuracy across records.
- Support credential creation and routing of free trials for the sales team.
- Collaborate directly with the President of Field Services on customer follow-ups, proposal development, partner outreach, and strategic initiatives.
- Draft emails, memos, and proposals; create both internal and customer-facing decks and supporting materials from scratch.
- Pull together data from various internal sources and synthesize it into structured documents with initial insights.
- Participate in select customer and partner meetings to support note-taking, documentation, and follow-up.
- Assist in preparing agendas, customer correspondence, and partner updates for ongoing executive-level accounts and initiatives.

Requirements for the Role
- 1+ years' experience in a similar sales support, business operations, or executive assistant role within a B2B/SaaS environment (preferred).
- Experience working with North American teams and availability during EST business hours (7am - 4pm EST).
- Proficiency in Google Suite and Slack; familiarity with Notion and HubSpot is a plus.
- Excellent written and verbal communication skills across both business-formal and conversational styles.
- High attention to detail, organizational strength, and the ability to manage multiple priorities independently.
- Professional discretion and sound judgment when working with sensitive business information.
- Traits that will help you thrive: resourcefulness, initiative, and a strong sense of urgency.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Role Overview:
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

Key Responsibilities:
- Design and manage CI/CD pipelines for both software applications and machine learning workflows.
- Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
- Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
- Build robust monitoring, logging, and alerting systems for AI applications.
- Manage containerized services with Docker and orchestration platforms like Kubernetes.
- Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
- Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
- Ensure system reliability, scalability, and security across all environments.

Requirements:
- 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
- Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
- Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
- Experience with ML pipelines, model versioning, and ML monitoring tools.
- Scripting skills in Python, Bash, or similar for automation tasks.
- Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
- Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications:
- Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
- Exposure to data versioning, feature stores, and model registries.
- Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
- Background in software engineering, data engineering, or ML research is a bonus.

What We Offer:
- Work on cutting-edge AI platforms and infrastructure
- Cross-functional collaboration with top ML, research, and product teams
- Competitive compensation package, no constraints for the right candidate

To apply, send your resume to thasleema@qcentro.com.

Job Type: Permanent
Ability to commute/relocate: Thiruvananthapuram District, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps and MLOps: 5 years (Required)
Work Location: In person
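To give a concrete feel for the experiment-tracking side of the MLOps work described above, here is a minimal MLflow sketch; the experiment name, parameters, and metric values are placeholders:

```python
import mlflow

mlflow.set_experiment("demo-churn-model")  # hypothetical experiment name

with mlflow.start_run():
    # Log hyperparameters and evaluation metrics for this training run.
    mlflow.log_param("learning_rate", 0.1)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("val_auc", 0.87)

    # Artifacts (plots, model files, configs) can be attached to the run too.
    with open("notes.txt", "w") as f:
        f.write("baseline run")
    mlflow.log_artifact("notes.txt")
```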

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

Location: Remote
Experience: 8+ Years
Job Type: Contract

Job Overview:
We are seeking a highly skilled Machine Learning Architect to design and implement cutting-edge AI/ML solutions that drive business innovation and operational efficiency. The ideal candidate will have deep expertise in Google Cloud Platform, Gurobi, and Google OR-Tools, with a proven ability to build scalable, optimized machine learning models for complex decision-making processes.

Key Responsibilities:
- Design and develop robust machine learning architectures aligned with business objectives.
- Implement optimization models using Gurobi and Google OR-Tools to address complex operational problems.
- Leverage Google Cloud AI/ML services (Vertex AI, TensorFlow, AutoML) for scalable model training and deployment.
- Build automated pipelines for data preprocessing, model training, evaluation, and deployment.
- Ensure high-performance computing and efficient resource usage in cloud environments.
- Collaborate with data scientists, ML engineers, and business stakeholders to integrate ML solutions into production.
- Monitor, retrain, and enhance model performance to maintain accuracy and efficiency.
- Stay current with emerging AI/ML trends, tools, and best practices.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field.
- 5+ years of experience in machine learning solution architecture and deployment.
- Strong hands-on experience with Google Cloud AI/ML services (Vertex AI, AutoML, BigQuery, etc.).
- Deep expertise in optimization modeling using Gurobi and Google OR-Tools.
- Proficiency in Python, TensorFlow, PyTorch, and ML libraries/frameworks.
- Solid understanding of big data processing frameworks (e.g., Apache Spark, BigQuery).
- Excellent problem-solving skills with the ability to work across cross-functional teams.

Preferred Qualifications:
- PhD in Machine Learning, Artificial Intelligence, or a related field.
- Experience with reinforcement learning and complex optimization algorithms.
- Working knowledge of MLOps, CI/CD pipelines, and Kubernetes for model lifecycle management.
- Familiarity with Google Cloud security best practices and identity/access management.
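As a small taste of the optimization modeling this role calls for, here is a minimal Google OR-Tools linear program; the coefficients and variable names are arbitrary placeholders for a toy production-planning problem:

```python
from ortools.linear_solver import pywraplp

# A toy production-planning LP: maximize profit subject to capacity limits.
solver = pywraplp.Solver.CreateSolver("GLOP")  # GLOP is Google's LP solver

x = solver.NumVar(0, solver.infinity(), "units_product_a")
y = solver.NumVar(0, solver.infinity(), "units_product_b")

# Capacity constraints (coefficients are made up, for illustration).
solver.Add(2 * x + 1 * y <= 100)   # machine hours
solver.Add(1 * x + 3 * y <= 90)    # labor hours

solver.Maximize(40 * x + 30 * y)   # profit per unit

status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
    print(f"a={x.solution_value():.1f}, b={y.solution_value():.1f}, "
          f"profit={solver.Objective().Value():.1f}")
```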

Posted 2 weeks ago

Apply

16.0 - 25.0 years

0 Lacs

Gurgaon

On-site

Skill required: Financial Planning & Analysis - Financial Planning and Analysis (FP&A)
Designation: Delivery Lead Senior Manager
Qualifications: Any Graduation
Years of Experience: 16 to 25 years

About Accenture
Accenture is a global professional services company with leading capabilities in digital, cloud, and security. Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners, and communities. Visit us at www.accenture.com.

What would you do?
You will be aligned with our Finance Operations vertical and will help us determine financial outcomes by collecting operational data/reports, conducting analysis, and reconciling transactions. Financial planning and analysis (FP&A) refers to the processes designed to help organizations accurately plan, forecast, and budget to support the company's major business decisions and future financial health. These processes include planning, budgeting, forecasting, scenario modeling, and performance reporting. You will build, mentor, and inspire a high-performing tax team, setting aspirations, defining career paths, and promoting technical training; foster a culture of excellence, continuous improvement, and ethical conduct; serve as the primary liaison with external entities (regulatory authorities, auditors, tax advisors) and internal stakeholders across Finance, Legal, Treasury, and Operations; and communicate tax strategy, issues, and risks to C-level and global leadership with clarity and confidence.

What are we looking for?
- Direct Tax Processing
- Indirect Tax Processing
1. Establish and maintain robust governance and control frameworks for indirect tax, U.S. GAAP, and federal tax reporting, identifying and mitigating key risks.
2. Lead evaluation of high-risk processes, audits, and open compliance items; take decisive action and implement corrective plans.
3. Serve as a strategic advisor to senior leadership on all tax matters: policy, compliance, operational risks, and automation initiatives.
4. Ensure full compliance across:
   - Indirect taxes: customs, federal excise, FIP, export reporting, state excise, sales/use, and PACT Act filings.
   - Direct taxes: FBAR (FinCEN 114), Form 5472, federal/state income taxes, extensions, estimated payments, and Country-by-Country Reporting.
5. Oversee timely filings and payments of tax returns, with final approval and validation.
6. Lead digital transformation initiatives, including SAP, Power BI, RPA, and tax engines (Vertex, Avalara, OneSource, etc.), to drive efficiency and accuracy.
7. Manage the tax technology roadmap and liaise with Finance, IT, and external vendors to implement integrated systems.
8. Lead cross-functional, global teams to execute tax initiatives, ensuring timelines, governance, and quality standards.

Roles and Responsibilities:
- In this role you are required to identify and assess complex problems for your area(s) of responsibility.
- Create solutions in situations where analysis requires in-depth knowledge of organizational objectives.
- Be involved in setting strategic direction to establish near-term goals for your area(s) of responsibility.
- Interact with senior management at a client and/or within Accenture, negotiating or influencing on significant matters.
- Exercise latitude in decision-making and in determining objectives and approaches to critical assignments.
- Decisions have a lasting impact on the area of responsibility, with the potential to impact areas outside it.
- Manage large teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
- Please note that this role may require you to work in rotational shifts.

Qualification: Any Graduation

Posted 2 weeks ago

Apply

5.0 - 6.0 years

9 - 17 Lacs

Bengaluru

Work from Office

About You – Experience, Education, Skills, and Accomplishments

Education:
- Bachelor's degree in Information Systems, Accounting, Taxation, Finance, or a related field.
- 5+ years of hands-on experience with tax technology platforms, with a focus on Vertex integrated into Oracle EBS.
- Experience supporting tax processes in ERP environments, including tax engine configurations, testing, and issue resolution.
- Strong understanding of tax reporting needs and compliance requirements across U.S. and global jurisdictions.

Knowledge & Skills:
- Familiarity with indirect tax types such as sales tax, VAT, GST, and withholding tax.
- Working knowledge of end-to-end processes: Order-to-Cash (O2C) and Procure-to-Pay (P2P).
- Ability to gather and document requirements, support UAT testing, and drive production support in coordination with tax and IT teams.
- Experience with data mapping, error reconciliation, and troubleshooting in ERP and tax engine interfaces.
- Strong analytical skills with attention to detail and the ability to translate business needs into technical configurations.
- Comfortable working with cross-functional teams across tax, finance, and engineering.

It Would Be Great If You Also Had:
- Experience with OneSource, Avalara, or similar tax software.
- Exposure to NetSuite or Oracle Cloud ERP is a plus.
- Involvement in tax system upgrades, ERP migrations, or M&A integrations.
- Understanding of SOX controls and audit documentation in a tax systems context.

Product You Will Be Supporting:
You'll be responsible for enhancing and maintaining Vertex integrated with Oracle EBS, ensuring tax automation, compliance, and accuracy. Your work will focus on designing and implementing solutions that support tax operations at scale.

Posted 2 weeks ago

Apply

5.0 years

10 Lacs

Calcutta

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you're just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact across industries, countries, and careers. From engineering and product to digital services and customer experience, you'll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting machine learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure, and it is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. The role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence, and often involves leading and mentoring junior team members and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
- Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
- Ensure that the data pipeline is optimized for the high-volume, high-velocity data required by ML models.
- Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.

AI/ML Model Integration:
- Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
- Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
- Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
- Design and maintain scalable, efficient, and secure data pipelines and architectures.
- Develop data models (both OLTP and OLAP).
- Create and maintain ETL/ELT processes.

Data Pipeline Development:
- Build automated pipelines to collect, transform, and load data from various sources (internal and external).
- Optimize data flow and collection for cross-functional teams.

MLOps Support:
- Develop CI/CD pipelines to deploy models into production environments.
- Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
- Ensure high data quality, integrity, and availability.
- Implement data validation, monitoring, and alerting mechanisms.
- Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
- Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc.
- Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
- Collaborate with data scientists, analysts, product managers, and other engineers.
- Provide technical leadership and mentor junior data engineers.

Core Competencies:
- Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
- ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
- GenAI: LangChain, OpenAI API, vector DBs (FAISS, Pinecone, Weaviate)
- Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
- Languages: Python, SQL, Scala, Bash
- DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data engineering or a related field.
- Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
- Experience with big data tools and cloud platforms.

Soft Skills:
- Strong problem-solving and critical-thinking skills.
- Excellent communication and collaboration abilities.
- Leadership experience and the ability to guide technical decisions.

How to Apply
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
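To sketch the pipeline-orchestration work described above, here is a minimal Airflow DAG with a hypothetical extract-transform-load chain; the task bodies are stubs, the DAG name is made up, and the `schedule` parameter assumes Airflow 2.4+:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Stub task callables; a real pipeline would pull from Kafka/S3, clean the
# records, and load them into a warehouse such as BigQuery.
def extract():
    print("pull raw records from source")

def transform():
    print("clean and enrich records")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="demo_etl",                 # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3  # linear dependency chain
```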

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Inside Sales Executive – Study Abroad (Full-Time)
📍 Location: Noida
📅 Experience: 1-3 years in EdTech / Study Abroad
💼 Department: Sales & Student Counseling
🎓 Industry: Education & Overseas Consulting
🕒 Working Hours: 10 AM – 7 PM (6 days/week)

About Vertex Edu
Vertex Edu is a trusted name in global education consulting, empowering students to achieve admission to top universities worldwide, including Ivy League and other elite institutions. With a personalized mentorship model and an 80% success rate, we are reimagining how students and families plan for international education.

Job Overview
We are seeking a driven and empathetic Inside Sales Executive to join our dynamic team. You will be the first point of contact for students and parents exploring study abroad options. Your role is not just to sell, but to guide, build trust, and convert inquiries into committed journeys.

Key Responsibilities (KRAs)

✅ Lead Conversion & Sales Closure
- Handle inbound and outbound calls with parents and students who have expressed interest.
- Conduct needs assessments and explain Vertex Edu's offerings.
- Convert qualified leads into enrollments through consultative sales.
- Consistently meet or exceed monthly sales targets.

✅ Student/Parent Consultation
- Provide clarity on study abroad options, exams, countries, costs, scholarships, and timelines.
- Book Zoom/phone sessions with seniors or academic mentors as needed.

✅ CRM & Follow-ups
- Maintain daily records of calls, leads, and student data in the CRM.
- Follow up regularly through calls, emails, and WhatsApp with interested leads.
- Update status and detailed remarks for each lead.

✅ Collaboration
- Coordinate with counselors, admission teams, and marketing for smooth handover post-enrollment.
- Share student/parent feedback with the marketing team for better campaign targeting.

✅ Reporting
- Submit daily, weekly, and monthly reports on leads, conversions, and pipeline movement.

Requirements
- 1-3 years of experience in inside sales / tele-sales / counseling in EdTech, overseas consulting, or a similar industry.
- Strong communication skills in English and Hindi.
- Ability to handle objections and emotionally connect with parents and students.
- Experience with CRM tools and Google Workspace preferred.
- Passion for education, empathy, and a results-oriented approach.

What We Offer
- A fast-growing startup environment with direct mentorship
- Opportunity to work on impactful global student journeys
- Incentive structure for high performers
- Career growth path into Senior Counselor / Sales Manager roles
- Salary: ₹20,000 - ₹40,000 per month (depending on profile)

How to Apply
📩 Send your CV with the subject "Inside Sales – Vertex Edu" to hr@vertexedu.com
🌐 Visit us at: www.vertexedu.com

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

India

On-site

Job Location: Hyderabad / Bangalore / Pune
Immediate joiners / less than 30 days

About the Role
We are looking for a seasoned AI/ML Solutions Architect with deep expertise in designing and deploying scalable AI/ML and GenAI solutions on cloud platforms. The ideal candidate will have a strong track record in BFSI, leading end-to-end projects from use case discovery to productionization, while ensuring governance, compliance, and performance at scale.

Key Responsibilities
- Lead the design and deployment of enterprise-scale AI/ML and GenAI architectures.
- Drive end-to-end AI/ML project delivery: discovery, prototyping, productionization.
- Architect solutions using leading cloud-native AI services (AWS, Azure, GCP).
- Implement MLOps/LLMOps pipelines for model lifecycle and automation.
- Guide teams in selecting and integrating GenAI/LLM frameworks (OpenAI, Cohere, Hugging Face, LangChain, etc.).
- Ensure robust AI governance, model risk management, and compliance practices.
- Collaborate with senior business stakeholders and cross-functional engineering teams.

Required Skills & Experience
- 15+ years in AI/ML, cloud architecture, and data engineering.
- At least 10 end-to-end AI/ML project implementations.
- Hands-on expertise in one or more of the following:
  - ML frameworks: scikit-learn, XGBoost, TensorFlow, PyTorch
  - GenAI/LLM tools: OpenAI, Cohere, LangChain, Hugging Face, FAISS, Pinecone
  - Cloud platforms: AWS, Azure, GCP (AI/ML services)
  - MLOps: MLflow, SageMaker Pipelines, Kubeflow, Vertex AI
- Strong understanding of data privacy, model governance, and compliance frameworks in BFSI.
- Proven leadership of cross-functional technical teams and stakeholder engagement.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Your potential has a place here with TTEC’s award-winning employment experience. As a Principal Data Scientist working onsite in Hyderabad, India, you’ll be a part of bringing humanity to business. #experienceTTEC Our employees have spoken. Our purpose, team, and company culture are amazing, and our Great Place to Work® certification in India says it all!

What You’ll Do
In this role, you'll work on everything from data ingestion and model training to deployment and dashboarding using BigQuery, Vertex AI, PySpark, and advanced ML frameworks. You'll report to the Director, Data Engineering.

During a Typical Day, You’ll
Prepare and manage training data for machine learning models using BigQuery and GCS (a small workflow sketch follows this posting).
Design and optimize complex SQL queries for efficient data processing and preparation.
Build, deploy, and support end-to-end machine learning systems, from data ingestion to dashboarding.
Develop and maintain machine learning models using frameworks like TensorFlow, PyTorch, and scikit-learn.
Implement MLOps best practices to streamline model training, deployment, and monitoring.
Utilize Google Cloud tools such as Vertex AI, BigQuery, and Vertex AI Pipelines for ML workflows.
Lead ML projects from conception to deployment, ensuring documentation and collaboration via JIRA and Confluence.

What You Bring To The Role
Strong knowledge of DevOps/MLOps best practices.
Experience with machine learning training, prediction, and deployment workflows.
Proficiency in Python and R, with experience using multiple ML libraries.
Good understanding of internal business data domains (Empower, Oracle, Kronos, Employee, NICE).
Familiarity with Contact Center Switch data.
Ability to write clear, concise code and maintain strong documentation throughout the ML lifecycle.
Excellent communication, collaboration, and problem-solving skills; able to work across cross-functional teams and build relationships.

What You Can Expect
Support for your career and professional development.
An inclusive culture and community-minded organization where giving back is encouraged.
A global team of curious lifelong learners guided by our company values.
Ask us about our paid time off (PTO) and wellness and healthcare benefits.
And yes... a great compensation package and performance bonus opportunities, benefits you'd expect, and maybe a few that would pleasantly surprise you (like tuition reimbursement).

About TTEC
Our business is about making customers happy. That's all we do. Since 1982, we've helped companies build engaged, pleased, profitable customer experiences powered by our combination of humanity and technology. On behalf of many of the world's leading iconic and hypergrowth brands, we talk, message, text, and video chat with millions of customers every day. These exceptional customer experiences start with you.

TTEC is proud to be an equal opportunity employer where all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. TTEC embraces and is committed to building a diverse and inclusive workforce that respects and empowers the cultures and perspectives within our global teams. We aim to reflect the communities we serve by not only delivering amazing service and technology, but also humanity. We make it a point to ensure all our employees feel valued, have a sense of belonging, and are comfortable being their authentic selves at work. As a global company, we know diversity is our strength because it enables us to view things from different vantage points and for you to bring value to the table in your own unique way.

Primary Location: India-Telangana-Hyderabad
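As a rough illustration of the BigQuery-to-model workflow this role describes, a minimal sketch might look like the following. The project, dataset, table, and column names are hypothetical placeholders; it assumes google-cloud-bigquery (with pandas/db-dtypes extras) and scikit-learn are installed and GCP credentials are configured.

```python
# Hypothetical sketch of the BigQuery-to-model workflow described above.
# Project/table/column names are placeholders; assumes google-cloud-bigquery
# (with db-dtypes for to_dataframe) and scikit-learn, plus GCP credentials.
from google.cloud import bigquery
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

client = bigquery.Client(project="my-project")  # placeholder project ID

# Pull model-ready features with a single SQL query.
query = """
    SELECT feature_a, feature_b, feature_c, label
    FROM `my-project.analytics.training_features`
"""
df = client.query(query).to_dataframe()

X = df.drop(columns=["label"])
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))
```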

Posted 2 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Thiruvananthapuram District, Kerala

On-site

Role Overview:
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

Key Responsibilities:
Design and manage CI/CD pipelines for both software applications and machine learning workflows.
Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.); a Pulumi sketch follows this posting.
Build robust monitoring, logging, and alerting systems for AI applications.
Manage containerized services with Docker and orchestration platforms like Kubernetes.
Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
Ensure system reliability, scalability, and security across all environments.

Requirements:
5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
Experience with ML pipelines, model versioning, and ML monitoring tools.
Scripting skills in Python, Bash, or similar for automation tasks.
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications:
Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
Exposure to data versioning, feature stores, and model registries.
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
Background in software engineering, data engineering, or ML research is a bonus.

What We Offer:
Work on cutting-edge AI platforms and infrastructure.
Cross-functional collaboration with top ML, research, and product teams.
Competitive compensation package, with no constraints for the right candidate.

To apply, send your application to: thasleema@qcentro.com

Job Type: Permanent
Ability to commute/relocate: Thiruvananthapuram District, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps and MLOps: 5 years (Required)
Work Location: In person
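Since the posting names Pulumi among its IaC tools, here is a minimal Pulumi sketch in Python that provisions a versioned GCS bucket for ML artifacts. The resource names are hypothetical; it assumes the pulumi and pulumi-gcp packages are installed and a GCP project is configured in the Pulumi stack.

```python
# Minimal Pulumi sketch (assumes `pulumi` and `pulumi_gcp` are installed and
# a GCP project is configured via `pulumi config set gcp:project ...`).
# Resource names are hypothetical placeholders.
import pulumi
from pulumi_gcp import storage

# Versioned bucket for model artifacts and training-data snapshots.
artifacts = storage.Bucket(
    "ml-artifacts",
    location="US",
    versioning=storage.BucketVersioningArgs(enabled=True),
)

# Export the bucket URL so CI/CD jobs can reference it.
pulumi.export("artifacts_bucket_url", artifacts.url)
```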

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Tamil Nadu, India

On-site

Role: Sr. AI/ML Engineer
Years of experience: 5+ years (with a minimum of 4 years of relevant experience)
Work mode: WFO, Chennai (mandatory)
Type: FTE
Notice Period: Immediate to 15 days ONLY
Key skills: Python, TensorFlow, Generative AI, Machine Learning, AWS, Agentic AI, OpenAI, Claude, FastAPI

JD:
Experience in Gen AI and CI/CD pipelines, with scripting languages and a deep understanding of version control systems (e.g., Git), containerization (e.g., Docker), continuous integration/deployment tools (e.g., Jenkins), cloud computing platforms (e.g., AWS, GCP, Azure), Kubernetes, and Kafka; third-party integration experience is a plus.
Experience building production-grade ML pipelines.
Proficient in Python and frameworks like TensorFlow, Keras, or PyTorch.
Experience with cloud build, deployment, and orchestration tools.
Experience with MLOps tools such as MLflow, Kubeflow, Weights & Biases, AWS SageMaker, Vertex AI, DVC, Airflow, Prefect, etc.
Experience in statistical modeling, machine learning, data mining, and unstructured data analytics.
Understanding of the ML lifecycle and MLOps, with hands-on experience productionizing ML models (a FastAPI serving sketch follows this posting).
Detail-oriented, with the ability to work both independently and collaboratively.
Ability to work successfully with multi-functional teams, principals, and architects across organizational boundaries and geographies.
Equal comfort driving low-level technical implementation and high-level architecture evolution.
Experience working with data engineering pipelines.
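Because the posting lists FastAPI alongside productionizing ML models, a minimal serving sketch might look like the following. The model path and feature names are hypothetical; it assumes fastapi, uvicorn, pydantic, and a pickled scikit-learn model produced by an earlier training step.

```python
# Minimal FastAPI serving sketch; MODEL_PATH and the feature fields are
# hypothetical placeholders. Assumes fastapi, uvicorn, pydantic, and a
# pickled scikit-learn model saved by a training pipeline.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

MODEL_PATH = "model.pkl"  # hypothetical artifact from a training pipeline

with open(MODEL_PATH, "rb") as f:
    model = pickle.load(f)

app = FastAPI(title="ML inference service")

class Features(BaseModel):
    feature_a: float
    feature_b: float

@app.post("/predict")
def predict(features: Features) -> dict:
    # Feature order must match the order used at training time.
    prediction = model.predict([[features.feature_a, features.feature_b]])
    return {"prediction": prediction.tolist()}

# Run locally with: uvicorn main:app --reload
```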

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Data Engineer with an AI/ML focus combines traditional data engineering responsibilities with the technical requirements of supporting machine learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence, and often involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
Ensure that the data pipeline is optimized for the high-volume and high-velocity data required by ML models.
Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.

AI/ML Model Integration:
Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended).
Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
Design and maintain scalable, efficient, and secure data pipelines and architectures.
Develop data models (both OLTP and OLAP).
Create and maintain ETL/ELT processes.

Data Pipeline Development:
Build automated pipelines to collect, transform, and load data from various sources, internal and external (a scheduling sketch follows this posting).
Optimize data flow and collection for cross-functional teams.

MLOps Support:
Develop CI/CD pipelines to deploy models into production environments.
Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
Ensure high data quality, integrity, and availability.
Implement data validation, monitoring, and alerting mechanisms.
Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc.
Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
Collaborate with data scientists, analysts, product managers, and other engineers.
Provide technical leadership and mentor junior data engineers.

Core Competencies:
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering or a related field.
Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
Experience with big data tools and cloud platforms.

Soft Skills:
Strong problem-solving and critical-thinking skills.
Excellent communication and collaboration abilities.
Leadership experience and the ability to guide technical decisions.

How to Apply?
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
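As a small illustration of the automated-pipeline responsibility above, here is a minimal Airflow DAG sketch. The dag_id, schedule, and task logic are hypothetical placeholders; it assumes Apache Airflow 2.4+ (for the `schedule` argument).

```python
# Minimal Airflow 2.4+ DAG sketch; dag_id, schedule, and task bodies are
# hypothetical placeholders for the ETL/ELT pipelines described above.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull raw records from an internal or external source.
    return [{"id": 1, "value": 10}]

def transform_and_load(ti):
    # The extract task's return value is available via XCom.
    rows = ti.xcom_pull(task_ids="extract")
    # Placeholder: clean/enrich rows, then load them into the warehouse.
    print(f"Loading {len(rows)} transformed rows")

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(
        task_id="transform_and_load", python_callable=transform_and_load
    )
    extract_task >> load_task
```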

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities:
A Senior Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. This role often involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams.

Key Responsibilities:

Data Infrastructure for AI/ML:
Design and implement robust data pipelines that support data preprocessing, model training, and deployment.
Ensure that the data pipeline is optimized for high-volume and high-velocity data required by ML models.
Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.

AI/ML Model Integration:
Collaborate with ML engineers and data scientists to integrate machine learning models into production environments.
Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended); an MLflow sketch follows this posting.
Support automated retraining and model monitoring pipelines to ensure models remain performant over time.

Data Architecture & Design:
Design and maintain scalable, efficient, and secure data pipelines and architectures.
Develop data models (both OLTP and OLAP).
Create and maintain ETL/ELT processes.

Data Pipeline Development:
Build automated pipelines to collect, transform, and load data from various sources (internal and external).
Optimize data flow and collection for cross-functional teams.

MLOps Support:
Develop CI/CD pipelines to deploy models into production environments.
Implement model monitoring, alerting, and logging for real-time model predictions.

Data Quality & Governance:
Ensure high data quality, integrity, and availability.
Implement data validation, monitoring, and alerting mechanisms.
Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).

Tooling & Infrastructure:
Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc.
Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.

Team Collaboration & Mentorship:
Collaborate with data scientists, analysts, product managers, and other engineers.
Provide technical leadership and mentor junior data engineers.

Core Competencies:
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in data engineering or a related field.
Strong understanding of data modeling, ETL/ELT concepts, and distributed systems.
Experience with big data tools and cloud platforms.

Soft Skills:
Strong problem-solving and critical-thinking skills.
Excellent communication and collaboration abilities.
Leadership experience and the ability to guide technical decisions.

How to Apply?
Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice
Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.
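To make the model-versioning bullet concrete, here is a minimal MLflow tracking sketch. The experiment name, parameters, and data are hypothetical; it assumes mlflow and scikit-learn are installed, logging to a local ./mlruns store by default.

```python
# Minimal MLflow tracking sketch; experiment name and params are hypothetical.
# Assumes mlflow and scikit-learn; logs to ./mlruns unless a tracking server
# is configured via MLFLOW_TRACKING_URI.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, random_state=0)

mlflow.set_experiment("churn-model-sketch")
with mlflow.start_run():
    model = LogisticRegression(max_iter=500)
    model.fit(X, y)

    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))

    # Registering to a model registry additionally requires a
    # database-backed tracking server in real deployments.
    mlflow.sklearn.log_model(model, artifact_path="model")
```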

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Introducing Thinkproject Platform

Pioneering a new era and offering a cohesive alternative to the fragmented landscape of construction software, Thinkproject seamlessly integrates the most extensive portfolio of mature solutions with an innovative platform, providing unparalleled features, integrations, user experiences, and synergies. By combining information management expertise and in-depth knowledge of the building, infrastructure, and energy industries, Thinkproject empowers customers to efficiently deliver, operate, regenerate, and dispose of their built assets across their entire lifecycle through a Connected Data Ecosystem.

We are seeking a hands-on Applied Machine Learning Engineer to join our team and lead the development of ML-driven insights from historical data in our contract management, asset management, and common data platform. This individual will work closely with our data engineering and product teams to design, develop, and deploy scalable machine learning models that can parse, learn from, and generate value from both structured and unstructured contract data. You will use BigQuery and its ML capabilities (including SQL and Python integrations) to prototype and productionize models across a variety of NLP and predictive analytics use cases. Your work will be critical in enhancing our platform’s intelligence layer, including search, classification, recommendations, and risk detection.

What your day will look like

Key Responsibilities
Model Development: Design and implement machine learning models using structured and unstructured historical contract data to support intelligent document search, clause classification, metadata extraction, and contract risk scoring.
BigQuery ML Integration: Build, train, and deploy ML models directly within BigQuery using SQL and/or Python, leveraging native GCP tools (e.g., Vertex AI, Dataflow, Pub/Sub); a BigQuery ML sketch follows this posting.
Data Preprocessing & Feature Engineering: Clean, enrich, and transform raw data (e.g., legal clauses, metadata, audit trails) into model-ready features using scalable and efficient pipelines.
Model Evaluation & Experimentation: Conduct experiments, model validation, and A/B testing, and iterate based on precision, recall, F1-score, RMSE, etc.
Deployment & Monitoring: Operationalize models in production environments with monitoring, retraining pipelines, and CI/CD best practices for ML (MLOps).
Collaboration: Work cross-functionally with data engineers, product managers, legal domain experts, and frontend teams to align ML solutions with product needs.

What you need to fulfill the role

Skills And Experience
Education: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related field.
ML Expertise: Strong applied knowledge of supervised and unsupervised learning, classification, regression, clustering, feature engineering, and model evaluation.
NLP Experience: Hands-on experience working with textual data, especially in NLP use cases like entity extraction, classification, and summarization.
GCP & BigQuery: Proficiency with Google Cloud Platform, especially BigQuery and BigQuery ML; comfort querying large-scale datasets and integrating with external ML tooling.
Programming: Proficient in Python and SQL; familiarity with libraries such as scikit-learn, TensorFlow, PyTorch, and Keras.
MLOps Knowledge: Experience with model deployment, monitoring, versioning, and ML CI/CD best practices.
Data Engineering Alignment: Comfortable working with data pipelines and tools like Apache Beam, Dataflow, Cloud Composer, and Pub/Sub systems.
Version Control: Strong Git skills and experience collaborating in Agile teams.

Preferred Qualifications
Experience working with contractual or legal text datasets.
Familiarity with document management systems, annotation tools, or enterprise collaboration platforms.
Exposure to Vertex AI, LangChain, RAG-based retrieval, or embedding models for Gen AI use cases.
Comfortable working in a fast-paced, iterative environment with changing priorities.

What we offer
Lunch 'n' Learn Sessions | Women's Network | LGBTQIA+ Network | Coffee Chat Roulette | Free English Lessons | Thinkproject Academy | Social Events | Volunteering Activities | Open Forum with Leadership Team (Tp Café) | Hybrid working | Unlimited learning

We are a passionate bunch here. To join Thinkproject is to shape what our company becomes. We take feedback from our staff very seriously and give them the tools they need to help us create our fantastic culture of mutual respect. We believe that investing in our staff is crucial to the success of our business.

Your contact: Mehal Mehta
Please submit your application, including salary expectations and potential date of entry, by submitting the form on the next page.

Working at thinkproject.com - think career. think ahead.
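The BigQuery ML integration described above might look roughly like this sketch, which trains a logistic regression clause classifier with SQL issued through the Python client. The dataset, table, and column names are hypothetical placeholders; it assumes google-cloud-bigquery is installed and GCP credentials are configured.

```python
# Rough BigQuery ML sketch; dataset/table/column names are placeholders.
# Assumes google-cloud-bigquery is installed and GCP credentials are set.
from google.cloud import bigquery

client = bigquery.Client()

# Train a clause classifier directly inside BigQuery.
train_sql = """
CREATE OR REPLACE MODEL `my_dataset.clause_classifier`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['label']) AS
SELECT clause_length, has_penalty_terms, label
FROM `my_dataset.contract_clause_features`
"""
client.query(train_sql).result()  # wait for training to finish

# Evaluate the trained model with ML.EVALUATE.
eval_sql = "SELECT * FROM ML.EVALUATE(MODEL `my_dataset.clause_classifier`)"
for row in client.query(eval_sql).result():
    print(dict(row))
```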

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad and Chennai (immediate joiners)
Experience: 3 to 5 years
Mandatory skills: MLOps, model lifecycle, Python, PySpark, GCP (BigQuery, Dataproc & Airflow), and CI/CD

Required Skills and Experience:
Strong programming skills: proficiency in languages like Python, with experience in libraries like TensorFlow, PyTorch, or scikit-learn.
Cloud Computing: deep understanding of GCP services relevant to ML, such as Vertex AI, BigQuery, Cloud Storage, Dataflow, Dataproc, and others.
Data Science Fundamentals: solid foundation in machine learning concepts, statistical analysis, and data modeling.
Software Engineering Principles: experience with software development best practices, version control (e.g., Git), and testing methodologies.
MLOps: familiarity with MLOps principles and practices.
Data Engineering: experience in building and managing data pipelines (a PySpark sketch follows this posting).
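Given the mandated PySpark + BigQuery + Dataproc stack, a minimal sketch might read a BigQuery table from a Spark job. The table name is a placeholder, and the `bigquery` source assumes the spark-bigquery connector, which Dataproc clusters can include.

```python
# Minimal PySpark sketch for Dataproc; the table name is a placeholder and
# the `bigquery` source assumes the spark-bigquery connector is available
# on the cluster.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-prep-sketch").getOrCreate()

# Read a BigQuery table into a Spark DataFrame.
events = (
    spark.read.format("bigquery")
    .option("table", "my-project.analytics.events")
    .load()
)

# Simple aggregation as a stand-in for real feature engineering.
daily_counts = (
    events.groupBy(F.to_date("event_ts").alias("day"))
    .agg(F.count("*").alias("event_count"))
)
daily_counts.show(10)
```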

Posted 2 weeks ago

Apply

0.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information
Date Opened: 07/17/2025
Industry: Technology
Job Type: Full-time
Work Experience: 4-8 years
City: Bengaluru
State/Province: Karnataka
Country: India
Zip/Postal Code: 560037

Job Description
Decision Minds is a leader in Data Cloud, Big Data, Cloud Analytics, AI/ML, and Multi-Cloud deployments. We are a team of passionate thought leaders and industry experts who strive for excellence and clarity in creating innovative solutions and products. Our goal is to revolutionize Data Analytics, Artificial Intelligence, Cloud Computing, and Robotic Process Automation, and make a positive impact on people's lives. Know more about us at www.decisionminds.com | LinkedIn

Job Role: Data Science Engineer
Primary Skills: Strong SQL, Python
Experience: 4 to 8 Years
Mode of working: WFO

Key responsibilities:
Hands-on experience going from raw data to AI/ML to insights that drive business outcomes and change.
Collaborate closely with business stakeholders to gather requirements, understand use cases, and translate them into data science solutions.
Build, validate, and deploy predictive models and analytical solutions using GCP services; present insights and findings to business users in a clear and impactful manner.
Conduct exploratory data analysis to identify trends, patterns, and opportunities for optimization (a small EDA sketch follows this posting).

Required Tech Stack and other related details:
4-7 years of experience in Data Science roles, preferably with exposure to Data Engineering tasks (ideal split 70% DS / 30% DE).
Strong hands-on experience with machine learning, statistical modeling, and analytical techniques.
Experience working with Google Cloud Platform (GCP), especially BigQuery, Vertex AI, Cloud Storage, and related services.

Good to have:
Excellent communication and stakeholder management skills; the ability to interact directly with business users and explain technical concepts clearly.
Proficiency in Python, SQL, and relevant data science libraries.
Understanding of data pipeline concepts; direct implementation experience is a plus.
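For the exploratory-analysis bullet, a minimal sketch could pull a BigQuery query result into pandas for quick profiling. The project and table names are hypothetical; it assumes google-cloud-bigquery (with its pandas/db-dtypes extras) and configured GCP credentials.

```python
# Minimal EDA sketch; the table name is a placeholder. Assumes
# google-cloud-bigquery (with db-dtypes/pandas extras) and GCP credentials.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT order_date, region, revenue
    FROM `my-project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 90 DAY)
"""
df = client.query(sql).to_dataframe()

# Quick profiling: shape, missing values, and a per-region revenue summary.
print(df.shape)
print(df.isna().sum())
print(df.groupby("region")["revenue"].describe())
```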

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies