
1489 Vertex Jobs - Page 6

JobPe aggregates job listings for easy access, but you apply directly on the employer's own job portal.

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before you apply to a job, select your language preference from the options available at the top right of this page.

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description

Role: Site Reliability Engineer (SRE) for Google Cloud Platform (GCP) and Red Hat OpenShift administration.

Responsibilities
- System Reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience in site reliability engineering or a similar role.

Skills
- Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.).
- Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.).
- Experience with automation tools (Terraform, Ansible, Puppet).
- Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.).
- Strong scripting skills (Python, Bash, etc.).
- Knowledge of networking concepts and protocols.
- Experience with monitoring tools (Prometheus, Grafana, etc.).

Preferred Certifications
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
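The automation and incident-response bullets above often reduce to small operational scripts. A minimal sketch of one such building block, a health probe retried with exponential backoff (pure-stdlib Python; the function names, backoff policy, and probe are illustrative assumptions, not anything UPS specifies):

```python
import time

def check_with_retries(probe, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call `probe` (a zero-arg health check returning True/False),
    retrying with exponential backoff; return True on first success."""
    for attempt in range(attempts):
        if probe():
            return True
        if attempt < attempts - 1:
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return False

# Example: a probe that fails twice, then succeeds.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3

delays = []
print(check_with_retries(flaky_probe, attempts=4, sleep=delays.append))  # True
print(delays)  # [1.0, 2.0]
```

Injecting `sleep` as a parameter keeps the backoff logic testable without real waiting, which is the same property a production runbook script would want.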

Posted 6 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

Responsibilities
- System Reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience in site reliability engineering or a similar role.

Skills
- Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.).
- Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.).
- Experience with automation tools (Terraform, Ansible, Puppet).
- Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.).
- Strong scripting skills (Python, Bash, etc.).
- Knowledge of networking concepts and protocols.
- Experience with monitoring tools (Prometheus, Grafana, etc.).

Preferred Certifications
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 6 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description

Job Summary: We are seeking a highly skilled MLOps Engineer to design, deploy, and manage machine learning pipelines in Google Cloud Platform (GCP). In this role, you will be responsible for automating ML workflows, optimizing model deployment, ensuring model reliability, and implementing CI/CD pipelines for ML systems. You will work with Vertex AI, Kubernetes (GKE), BigQuery, and Terraform to build scalable and cost-efficient ML infrastructure. The ideal candidate must have a good understanding of ML algorithms and experience with model monitoring, performance optimization, Looker dashboards, and infrastructure as code (IaC), ensuring ML models are production-ready, reliable, and continuously improving. You will interact with multiple technical teams, including architects and business stakeholders, to develop state-of-the-art machine learning systems that create value for the business.

Responsibilities
- Manage the deployment and maintenance of machine learning models in production environments, ensuring seamless integration with existing systems.
- Monitor model performance using metrics such as accuracy, precision, recall, and F1 score, and address issues like performance degradation, drift, or bias.
- Troubleshoot and resolve problems, maintain documentation, and manage model versions for audit and rollback.
- Analyze monitoring data to preemptively identify potential issues, and provide regular performance reports to stakeholders.
- Optimize queries and pipelines.
- Modernize applications whenever required.

Qualifications
- Expertise in programming languages such as Python and SQL.
- Solid understanding of MLOps best practices and concepts for deploying enterprise-level ML systems.
- Understanding of machine learning concepts, models, and algorithms, including traditional regression, clustering models, and neural networks (including deep learning, transformers, etc.).
- Understanding of model evaluation metrics, model monitoring tools, and practices.
- Experience with GCP tools such as BigQuery ML, MLOps tooling, Vertex AI Pipelines (Kubeflow Pipelines on GCP), Model Versioning & Registry, Cloud Monitoring, Kubernetes, etc.
- Solid oral and written communication skills and the ability to prepare detailed technical documentation for new and existing applications.
- Strong ownership and collaborative qualities; takes initiative to identify and drive opportunities for improvement and process streamlining.
- Bachelor's degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Bonus Qualifications
- Experience in Azure MLOps; familiarity with cloud billing.
- Experience setting up or supporting NLP, Gen AI, or LLM applications with MLOps features.
- Experience working in an Agile environment; understanding of Lean-Agile principles.

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
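The monitoring responsibilities above center on metrics such as precision, recall, and F1. As a minimal illustration of what those metrics compute for a binary classifier (plain Python on toy data; not part of the posting, and a production system would use a metrics library):

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and F1 for a binary classifier (positive class = 1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Toy labels: 3 true positives, 1 false positive, 1 false negative.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(classification_metrics(y_true, y_pred))
# {'precision': 0.75, 'recall': 0.75, 'f1': 0.75}
```

Tracking these numbers over time against a baseline is one common way the "performance degradation" and "drift" issues mentioned above get detected.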

Posted 6 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About SAIGroup

SAIGroup is a private investment firm that has committed $1 billion to incubate and scale revolutionary AI-powered enterprise software application companies. Our portfolio, a testament to our success, comprises rapidly growing AI companies that collectively cater to over 2,000 major global customers, approaching $800 million in annual revenue, and employing a global workforce of over 4,000 individuals. SAIGroup invests in new ventures based on breakthrough AI-based products that have the potential to disrupt existing enterprise software markets.

SAIGroup's latest investment, JazzX AI, is a pioneering technology company on a mission to shape the future of work through an AGI platform purpose-built for the enterprise. JazzX AI is not just building another AI tool—it's reimagining business processes from the ground up, enabling seamless collaboration between humans and intelligent systems. The result is a dramatic leap in productivity, efficiency, and decision velocity, empowering enterprises to become pacesetters who lead their industries and set new benchmarks for innovation and excellence.

Job Title: AGI Solutions Engineer (Junior) – GTM Solution Delivery
(Full-time, remote-first, with periodic travel to client sites and JazzX hubs)

Role Overview

As an Artificial General Intelligence Engineer, you are the hands-on technical force that turns JazzX's AGI platform into working, measurable solutions for customers. You will:
- Build and integrate LLM-driven features, vector search pipelines, and tool-calling agents into client environments.
- Collaborate with solution architects, product, and customer-success teams from discovery through production rollout.
- Contribute field learnings back to the core platform, accelerating time-to-value across all deployments.

You are as comfortable writing production-quality Python as you are debugging Helm charts, and you enjoy explaining your design decisions to both peers and client engineers.
Key Responsibilities

Solution Implementation
- Develop and extend JazzX AGI services (LLM orchestration, retrieval-augmented generation, agents) within customer stacks.
- Integrate data sources, APIs, and auth controls; ensure solutions meet security and compliance requirements.
- Pair with Solution Architects on design reviews; own component-level decisions.

Delivery Lifecycle
- Drive proofs-of-concept, pilots, and production rollouts with an agile, test-driven mindset.
- Create reusable deployment scripts (Terraform, Helm, CI/CD) and operational runbooks.
- Instrument services for observability (tracing, logging, metrics) and participate in on-call rotations.

Collaboration & Support
- Work closely with product and research teams to validate new LLM techniques in real-world workloads.
- Troubleshoot customer issues, triage bugs, and deliver patches or performance optimisations.
- Share best practices through code reviews, internal demos, and technical workshops.

Innovation & Continuous Learning
- Evaluate emerging frameworks (e.g., LlamaIndex, AutoGen, WASM inferencing) and pilot promising tools.
- Contribute to internal knowledge bases and GitHub templates that speed future projects.

Qualifications

Must-Have
- 2+ years of professional software engineering experience; 1+ years working with ML or data-intensive systems.
- Proficiency in Python (or Java/Go) with strong software-engineering fundamentals (testing, code reviews, CI/CD).
- Hands-on experience deploying containerised services on AWS, GCP, or Azure using Kubernetes and Helm.
- Practical knowledge of LLM / Gen-AI frameworks (LangChain, LlamaIndex, PyTorch, or TensorFlow) and vector databases.
- Familiarity integrating REST/GraphQL APIs, streaming platforms (Kafka), and SQL/NoSQL stores.
- Clear written and verbal communication skills; ability to collaborate with distributed teams.
- Willingness to travel 10–20% for key customer engagements.
Nice-to-Have
- Experience delivering RAG or agent-based AI solutions in regulated domains (finance, healthcare, telecom).
- Cloud or Kubernetes certifications (AWS SA-Assoc/Pro, CKA, CKAD).
- Exposure to MLOps stacks (Kubeflow, MLflow, Vertex AI) or data-engineering tooling (Airflow, dbt).

Attributes
- Empathy & Ownership: You listen carefully to user needs and take full ownership of delivering great experiences.
- Startup Mentality: You move fast, learn quickly, and are comfortable wearing many hats.
- Detail-Oriented Builder: You care about the little things.
- Mission-Driven: You want to solve important, high-impact problems that matter to real people.
- Team-Oriented: Low ego, collaborative, and excited to build alongside highly capable engineers, designers, and domain experts.

Travel
This position requires the ability to travel to client sites as needed for on-site deployments and collaboration. Travel is estimated at approximately 20–30% of the time (varying by project), and flexibility is expected to accommodate key client engagement activities.

Why Join Us
At JazzX AI, you have the opportunity to join the foundational team that is pushing the boundaries of what's possible to create an autonomous-intelligence-driven future. We encourage our team to pursue bold ideas, foster continuous learning, and embrace the challenges and rewards that come with building something truly innovative. Your work will directly contribute to pioneering solutions that have the potential to transform industries and redefine how we interact with technology. As an early member of our team, your voice will be pivotal in steering the direction of our projects and culture, offering an unparalleled chance to leave your mark on the future of AI. We offer a competitive salary, equity options, and an attractive benefits package, including health, dental, and vision insurance, flexible working arrangements, and more.
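The "vector search pipelines" this role builds reduce, at their core, to ranking document embeddings by similarity to a query embedding. A toy sketch (hand-made 3-dimensional vectors stand in for real model embeddings, and the document IDs are invented; a production deployment would use an embedding model and a vector database):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    """Rank document embeddings by cosine similarity to the query embedding."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy corpus: each document is represented by a tiny embedding vector.
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-times": [0.1, 0.8, 0.1],
    "api-auth": [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.2, 0.0], docs, k=2))  # ['refund-policy', 'shipping-times']
```

In a retrieval-augmented generation flow, the retrieved top-k documents would then be passed to the LLM as context, which is the integration work the posting describes.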

Posted 6 days ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Digital Engineer – AI/ML & Automation
Location: Hyderabad, Telangana
Experience: 3 to 6 years of hands-on experience in digital engineering, business intelligence, AI/ML solutions, or automation technologies.
Employment Type: Full-Time | 6 Days Working
Notice Period: Immediate Joiners Preferred

Key Responsibilities
- AI/ML & Generative AI Solutions: Design, develop, and deploy AI/ML models, intelligent agents, and tools using LLMs, Vertex AI, OpenAI APIs, and other cloud-based AI platforms.
- Data Engineering & Integration: Build and optimize data pipelines leveraging data lakes, data warehouses, and both structured and unstructured data sources to ensure high-quality, accessible data for analytics and automation.
- Business Intelligence (BI): Develop and maintain interactive dashboards using Power BI and SAP Analytics Cloud (SAC), integrated with SAP S/4HANA and other enterprise systems.
- Application Development: Build responsive, user-centric web and mobile applications using low-code/no-code platforms such as Microsoft Power Apps, Google AppSheet, or equivalent tools.
- Automation & RPA: Identify and implement intelligent workflow automation using tools like UiPath, Zapier, Python scripts, and conversational AI agents (e.g., ChatGPT).
- Digital Process Optimization: Design and implement digital SOPs, automate business workflows, and support enterprise-wide digital transformation initiatives.

Candidate Profile

Technical Skills
- Proficient in Power BI or SAP Analytics Cloud dashboard development
- Hands-on with LLMs, generative AI tools, and APIs (e.g., ChatGPT, OpenAI)
- Skilled in Python, REST APIs, and RPA tools (e.g., UiPath)
- Strong grasp of SQL/NoSQL, data modeling, and API integrations

Platform Expertise
- Experience with low-code platforms (Power Apps, AppSheet)
- Familiar with WordPress/Drupal CMS, DNS, and web hosting
- Working knowledge of Google Workspace administration

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a workforce of over 125,000 individuals across 30+ countries, we are driven by innate curiosity, entrepreneurial agility, and a commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, guides us as we serve and transform leading enterprises worldwide, including the Fortune Global 500. We leverage our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI to drive growth and success.

We are currently seeking applications for the position of Management Trainee/Assistant Manager in US Sales and Use Tax compliance. In this role, you will be responsible for ensuring the timeliness and accuracy of all deliverables. You will provide guidance to your team on the correct accounting treatment and act as an escalation point when necessary.

Your primary responsibilities will include:
- Coordinating with the Business License and Sales Tax teams to execute business license renewals
- Tracking and adhering to US Property Tax Returns filing deadlines
- Managing exemptions and exceptions for applicable states
- Monitoring new store openings and closures
- Reconciling assessment notices to the PTMS Property Manager
- Handling PPTX accrual accounts and booking monthly accrual and adjustment entries
- Managing vendor queries and maintaining related documents for US Sales Tax
- Investigating and resolving open items by collaborating with different teams
- Ensuring adherence to internal and external US GAAP/SOX audits

Qualifications we are looking for:

Minimum qualifications:
- Accounting graduates with relevant experience; CA/CMA preferred
- Strong written and verbal communication skills
- Working experience with ERPs, particularly Oracle, is preferred
- Familiarity with tools like Alteryx, AS400, PTMS, Vertex, Sovos, Bloomberg sites, and middleware for tax rates
- Previous experience with retail clients of similar size is preferred
- Strong interpersonal skills, the ability to manage complex tasks, and effective communication

Preferred qualifications:
- Strong accounting and analytical skills
- Good understanding of GAAP accounting principles, preferably US GAAP
- Ability to prioritize tasks, multitask, and drive projects to completion
- Proficiency in Microsoft Excel and other applications
- Experience collaborating with systems, customers, and key stakeholders
- Previous experience working remotely in a US time zone is a plus

This is a full-time position based in Noida, India. The ideal candidate will hold a Bachelor's degree or equivalent and should be able to demonstrate mastery in Operations. If you meet the qualifications and are ready to make an impact, we encourage you to apply.

Posted 6 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Specialty Development Consultant (34321)
Job Type: Contract
Location: Chennai
Budget: ₹23 LPA
Notice Period: Immediate Joiners Only

Role Overview
We are looking for a Specialty Development Consultant with hands-on experience in full stack development using React or Angular, cloud deployment, and DevOps practices. The ideal candidate will play a critical role in designing and delivering end-to-end scalable applications, collaborating with cross-functional teams, and integrating modern ML platforms with cloud services.

Key Responsibilities
- Develop full stack applications using React/Angular and integrate with cloud-native components.
- Collaborate with Tech Anchors, Product Managers, and cross-functional teams to implement business features end-to-end.
- Understand and incorporate technical, functional, non-functional, and security requirements into the software design and delivery.
- Apply Test-Driven Development (TDD) methodologies to ensure code quality and maintainability.
- Work extensively with Google Cloud Platform (GCP) products and services for deployment and scalability.
- Integrate with open-source tools and frameworks to support ML platform integration.
- Participate in CI/CD workflows using Tekton, SonarQube, and Terraform.

Mandatory Skills
- React and/or Angular (3+ years)
- Full stack development
- Google Cloud Platform (GCP)
- Tekton, SonarQube, Terraform, GCS
- Experience with Kubernetes or OpenShift (preferred)
- Strong grasp of CI/CD, DevOps, and cloud-native development practices

Experience Required
- 2 to 5 years of overall software development experience
- Minimum 3 years of hands-on experience in React/Angular full stack development
- Experience with GCP products and deployments, including Vertex AI and GCS
- Proven track record in API development, cloud deployments, and integration with modern DevOps pipelines

Education
- Bachelor's degree in Computer Science, Engineering, or a related technical field

Skills: React, Google Cloud Platform (GCP), DevOps, full stack development, cloud, cloud-native development, Kubernetes, Terraform, CI/CD, Tekton, OpenShift, Angular, SonarQube

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Atos

Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. €10 billion. European number one in cybersecurity, cloud, and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is an SE (Societas Europaea) and listed on Euronext Paris.

The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education, and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large, to live, work, and develop sustainably in a safe and secure information space.

Responsibilities

SAP S/4HANA FI Tax with Vertex knowledge.

Technical skill set:
- Should have worked on at least one implementation and two support projects on SAP S/4HANA with tax in O2C and P2P.
- Should have good experience with withholding tax (TDS) and VAT.
- Must have experience in VAT configuration such as tax procedures, tax keys, tax conditions, and input/output tax codes.
- Must have experience in withholding tax configuration such as WHT codes, types, keys, and master data.
- Perform systems review and analysis for the conversion of in-house developed business applications and master data, and re-engineer business practices to facilitate standardization on a single SAP platform.
- Responsible for SAP configuration for external tax calculation in O2C and P2P.
- Configuration of SAP pricing with tax procedures for business organizations.
- Develop and update business process documentation utilizing the confidential technical WRICEF project management methodology.
- Complete process flow documentation for the support organization and end-user guides, including illustrated BPPs and FAQ sheets.
- Develop closed-loop regression testing procedures for inbound/outbound processing with legacy systems utilizing iDocs and XML documents.
- Design custom reports for the balancing and reconciliation of SAP financial account data of tax against the Vertex Reporting and Returns databases.

Requirements
- Must be expert in writing functional specifications independently and creating custom objects from scratch through deployment.
- Should have good experience with interfaces to third-party systems.

Vertex
- Must have knowledge of Vertex (tax engine) and its mapping concept.
- Must have knowledge of tax calculations in Vertex and comparison to the SAP S/4HANA tax module.
- Provide technical guidance for development and coding for industry-specific excise tax processing, compliance, and reporting.

General knowledge and tools
- Excellent communication and strong collaboration skills
- Flexible to adapt to a fast-changing environment and self-motivated
- Creating technical design specifications to ensure compliance with the functional teams and IT management
- Analytical thinking, a high level of comprehension, and an independent working style
- Seeking candidates who are flexible and willing to work shifts as required

What We Offer
- Competitive salary package.
- Leave policies: 10 days of public holiday (includes 2 optional days), 22 days of earned leave (EL), and 11 days for sick or caregiving leave.
- Office requirement: 3 days WFO.

Here at Atos, diversity and inclusion are embedded in our DNA. Read more about our commitment to a fair work environment for all. Atos is a recognized leader in its industry across Environment, Social and Governance (ESG) criteria. Find out more on our CSR commitment. Choose your future. Choose Atos.

Posted 1 week ago

Apply

4.0 years

4 - 20 Lacs

Chennai

On-site

Job Summary: We are seeking an experienced and results-driven GCP Data Engineer with over 4 years of hands-on experience building and optimizing data pipelines and architectures on Google Cloud Platform (GCP). The ideal candidate will have strong expertise in data integration, transformation, and modeling, with a focus on delivering scalable, efficient, and secure data solutions. This role requires a deep understanding of GCP services, big data processing frameworks, and modern data engineering practices.

Key Responsibilities:
- Design, develop, and deploy scalable and reliable data pipelines on Google Cloud Platform.
- Build data ingestion processes from various structured and unstructured sources using Cloud Dataflow, Pub/Sub, BigQuery, and other GCP tools.
- Optimize data workflows for performance, reliability, and cost-effectiveness.
- Implement data transformations, cleansing, and validation using Apache Beam, Spark, or Dataflow.
- Work closely with data analysts, data scientists, and business stakeholders to understand data needs and translate them into technical solutions.
- Ensure data security and compliance with company and regulatory standards.
- Monitor, troubleshoot, and enhance data systems to ensure high availability and accuracy.
- Participate in code reviews, design discussions, and continuous integration/deployment processes.
- Document data processes, workflows, and technical specifications.

Required Skills:
- Minimum 4 years of experience in data engineering, with at least 2 years working on GCP.
- Strong proficiency in GCP services such as BigQuery, Cloud Storage, Dataflow, Pub/Sub, Cloud Composer, Cloud Functions, and Vertex AI (preferred).
- Hands-on experience with SQL, Python, and Java/Scala for data processing and transformation.
- Experience with ETL/ELT development, data modeling, and data warehousing concepts.
- Familiarity with CI/CD pipelines, version control (Git), and DevOps practices.
- Solid understanding of data security, IAM, encryption, and compliance within cloud environments.
- Experience with performance tuning, workload management, and cost optimization in GCP.

Preferred Qualifications:
- GCP Professional Data Engineer Certification.
- Experience with real-time data processing using Kafka, Dataflow, or Pub/Sub.
- Familiarity with Terraform, Cloud Build, or other infrastructure-as-code tools.
- Exposure to data quality frameworks and observability tools.
- Previous experience in an agile development environment.

Job Types: Full-time, Permanent
Pay: ₹473,247.51 - ₹2,000,000.00 per year
Schedule: Monday to Friday
Application Question(s): Mention your last working date
Experience: Google Cloud Platform: 4 years (Preferred); Python: 4 years (Preferred); ETL: 4 years (Preferred)
Work Location: In person
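The "transformations, cleansing, and validation" responsibility above is, at its heart, a per-record transform of the kind a Beam or Dataflow step applies. A framework-free sketch (the field names, types, and validation rules are illustrative assumptions, not part of the posting):

```python
def cleanse(records):
    """Validate and normalize raw records before loading; return
    (clean_rows, rejected_rows). Mirrors the cleansing/validation
    step of an ingestion pipeline."""
    clean, rejected = [], []
    for rec in records:
        try:
            row = {
                "id": int(rec["id"]),
                "email": rec["email"].strip().lower(),
                "amount": round(float(rec["amount"]), 2),
            }
        except (KeyError, TypeError, ValueError):
            rejected.append(rec)  # missing field or bad type
            continue
        if "@" not in row["email"]:
            rejected.append(rec)  # fails a domain rule
            continue
        clean.append(row)
    return clean, rejected

raw = [
    {"id": "1", "email": " Ada@Example.com ", "amount": "19.999"},
    {"id": "2", "email": "not-an-email", "amount": "5"},
    {"id": "x", "email": "bob@example.com", "amount": "3"},
]
good, bad = cleanse(raw)
print(good)  # [{'id': 1, 'email': 'ada@example.com', 'amount': 20.0}]
print(len(bad))  # 2
```

Routing rejects to a side output instead of dropping them silently is the usual pipeline design choice, since it keeps bad source data auditable.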

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

General Summary: The Senior AI Engineer (2–5 years' experience) is responsible for designing and implementing intelligent, scalable AI solutions with a focus on Retrieval-Augmented Generation (RAG), Agentic AI, and Modular Cognitive Processes (MCP). This role is ideal for individuals who are passionate about the latest AI advancements and eager to apply them in real-world applications. The engineer will collaborate with cross-functional teams to deliver high-quality, production-ready AI systems aligned with business goals and technical standards.

Essential Duties & Responsibilities:
- Design, develop, and deploy AI-driven applications using RAG and Agentic AI frameworks.
- Build and maintain scalable data pipelines and services to support AI workflows.
- Implement RESTful APIs using Python frameworks (e.g., FastAPI, Flask) for AI model integration.
- Collaborate with product and engineering teams to translate business needs into AI solutions.
- Debug and optimize AI systems across the stack to ensure performance and reliability.
- Stay current with emerging AI tools, libraries, and research, and integrate them into projects.
- Contribute to the development of internal AI standards, reusable components, and best practices.
- Apply MCP principles to design modular, intelligent agents capable of autonomous decision-making.
- Work with vector databases, embeddings, and LLMs (e.g., GPT-4, Claude, Mistral) for intelligent retrieval and reasoning.
- Participate in code reviews, testing, and validation of AI components using frameworks like pytest or unittest.
- Document technical designs, workflows, and research findings for internal knowledge sharing.
- Adapt quickly to evolving technologies and business requirements in a fast-paced environment.

Knowledge, Skills, and/or Abilities Required:
- 2–5 years of experience in AI/ML engineering, with at least 2 years in RAG and Agentic AI.
- Strong Python programming skills with a solid foundation in OOP and software engineering principles.
- Hands-on experience with AI frameworks such as LangChain, LlamaIndex, Haystack, or Hugging Face.
- Familiarity with MCP (Modular Cognitive Processes) and their application in agent-based systems.
- Experience with REST API development and deployment.
- Proficiency in CI/CD tools and workflows (e.g., Git, Docker, Jenkins, Airflow).
- Exposure to cloud platforms (AWS, Azure, or GCP) and services like S3, SageMaker, or Vertex AI.
- Understanding of vector databases (e.g., OpenSearch, Pinecone, Weaviate) and embedding techniques.
- Strong problem-solving skills and the ability to work independently or in a team.
- Interest in exploring and implementing cutting-edge AI tools and technologies.
- Experience with SQL/NoSQL databases and data manipulation.
- Ability to communicate technical concepts clearly to both technical and non-technical audiences.

Educational/Vocational/Previous Experience Recommendations:
- Bachelor's or Master's degree in a related field.
- 2+ years of relevant experience.

Working Conditions: Hybrid – Pune location
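The "modular, intelligent agents" responsibility above describes, at minimum, a tool-calling loop: a planner chooses a tool, the runtime dispatches it. A deliberately tiny sketch (a rule-based planner stands in for the LLM, and every name here is illustrative; a real system would have the model emit the tool-call structure):

```python
# Tool registry: each tool is a plain function the agent can invoke.
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def plan(task):
    """Stand-in for an LLM planner: map a task string to a tool call.
    A real agent would ask a model to emit this structure."""
    if task.startswith("sum "):
        _, a, b = task.split()
        return {"tool": "add", "args": (int(a), int(b))}
    return {"tool": "upper", "args": (task,)}

def run_agent(task):
    """Dispatch the planned tool call against the registry."""
    step = plan(task)
    tool = TOOLS[step["tool"]]
    return tool(*step["args"])

print(run_agent("sum 2 3"))  # 5
print(run_agent("hello"))    # HELLO
```

Keeping the registry, planner, and dispatcher as separate modules is one way to read the posting's "modular cognitive processes" framing: each piece can be swapped or tested independently.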

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The key responsibilities for this position include collaborating with finance, IT, and operations teams to ensure end-to-end U.S. indirect tax compliance. You will be responsible for developing and maintaining SOPs, internal controls, and documentation for indirect tax processes. Additionally, you will manage tax determination logic, tax engine configurations (Avalara, Vertex), and ERP mapping (SAP/Oracle).

In this role, you will oversee compliance for U.S. sales & use tax, property tax, business licenses, and exemption certificates. You will also be expected to support tax audits, notices, nexus reviews, and state authority responses. Furthermore, you will serve as a subject matter expert on U.S. transaction taxes for internal teams and auditors.

If you are interested in this position, please contact us at 9205999380.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

hyderabad, telangana

On-site

As an Associate Consultant in the SALT Tax Tech team based in Bangalore, you will be an Individual Contributor reporting to the Manager. You will primarily support the US region and work from 11:30 AM to 8:30 PM IST. Your main responsibilities will include applying your functional and technical knowledge of SAP and third-party tax engines like Onesource/Vertex. Your role will involve establishing and maintaining relationships with business leaders, driving engagement on major tax integration projects, gathering business requirements, leading analysis, and driving high-level E2E design. You will also be responsible for driving configuration and development activities to meet business requirements, managing external software vendors and System Integrators, and ensuring adherence to established SLAs. In addition, you will take a domain lead role on IT projects to ensure all business stakeholders are included and receive sufficient and timely communications. Providing leadership to teams, integrating technical expertise and business understanding to create superior solutions, consulting with team members and other organizations, customers, and vendors on complex issues, and mentoring others in the team on process/technical issues are also part of your responsibilities. You are expected to have a BE/B.Tech/MCA qualification with 4 to 7 years of relevant work experience. Mandatory skills for this role include SAP, Onesource, Vertex, and indirect tax integration. Preferred skills include knowledge of indirect tax concepts, SAP native tax, and hands-on experience in Tax Engine Integration with ERP systems. Key behavioural attributes required for this role include hands-on experience in Tax Engine Integration with ERP systems, understanding of indirect tax concepts, good knowledge of O2C and P2P processes, experience in tax engine configurations, troubleshooting tax-related issues, and solution design. Knowledge of Avalara and VAT compliance is considered a plus. 
The interview process for this role includes two technical rounds followed by an HR round. Travel may be involved in this role; busy-season work does not apply.

Posted 1 week ago

Apply

3.0 years

0 Lacs

India

On-site

In this Job, you will: As a Gen AI Engineer at Telerapp, you will play a crucial role in designing, developing, and implementing cutting-edge generative artificial intelligence models and systems. Your expertise will contribute to creating AI-powered solutions that generate high-quality, clinically relevant text while pushing the boundaries of creativity and innovation. You will collaborate closely with cross-functional teams to deliver AI-driven products that reshape the healthcare industry and user experiences. This role requires a deep understanding of machine learning, neural networks, and generative modeling techniques.

To join, you should:
Have 3+ years of full-time industry experience.
Have hands-on experience in contemporary AI, such as training generative AI models (LLMs and image-to-text models), improving upon pre-trained models, evaluating these models, and building feedback loops.
Have specialized expertise in model fine-tuning, RLHF, RAG, LLM tool use, etc.
Have experience with LLM prompt engineering and familiarity with LLM-based workflows/architectures.
Be proficient in Python, PySpark, TensorFlow, PyTorch, Keras, Transformers, and cloud platforms such as Google Cloud Platform (GCP), Vertex AI, or similar.
Collaborate with software engineers to integrate generative models into production systems, ensuring scalability, reliability, and efficiency of the deployed models.
Have experience with effective data-visualization approaches and a keen eye for detail in the visual communication of findings.

Your Responsibilities:
Develop new LLMs for medical imaging.
Develop and implement methods that improve training efficiency and extend or improve LLM capabilities, reliability, and safety in the realm of image-to-text generation using medical data.
Perform data preprocessing, indexing, and feature engineering specific to healthcare image and text data.
Keep up to date with the research literature and think beyond the state of the art to address the needs of our users.

Preferred Qualifications: Master's degree in a quantitative field such as Computer Science or Data Science.
Salary: Market competitive.
For immediate consideration, send your CV to hr@telerapps.com.
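The model-evaluation and feedback-loop work called out above often starts with lightweight text-overlap metrics before human review. A minimal sketch of token-level F1 follows; the radiology-style strings are made up for illustration, not drawn from any real dataset.

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1: harmonic mean of precision and recall over tokens.
    A common first-pass metric for generated text in evaluation loops."""
    pred = prediction.lower().split()
    ref = reference.lower().split()
    overlap = sum((Counter(pred) & Counter(ref)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical generated finding vs. reference finding.
print(token_f1("no acute fracture seen", "no acute fracture"))  # 6/7 ≈ 0.857
```

Scores like this feed the feedback loop: low-F1 generations get flagged for clinician review, and the reviewed pairs become fine-tuning or RLHF data.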

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
· 3+ years of experience in implementing analytical solutions using Palantir Foundry, preferably in PySpark and hyperscaler platforms (cloud services like AWS, GCP and Azure), with a focus on building data transformation pipelines at scale.
· Team management: must have experience in mentoring and managing large teams (20 to 30 people) for complex engineering programs, and in hiring and nurturing talent in Palantir Foundry.
· Training: should have experience in creating training programs in Foundry and delivering them in a hands-on format, either offline or virtually.
· At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry.
· At least 3 years of experience with Foundry services:
· Data engineering with Contour and Fusion
· Dashboarding and report development using Quiver (or Reports)
· Application development using Workshop
· Exposure to Map and Vertex is a plus
· Palantir AIP experience will be a plus
· Hands-on experience in data engineering and building data pipelines (code/no code) for ELT/ETL data migration, data refinement, and data quality checks on Palantir Foundry.
· Hands-on experience of managing the data life cycle on at least one hyperscaler platform (AWS, GCP, Azure) using managed services or containerized deployments for data pipelines is necessary.
· Hands-on experience in working with and building on Ontology (esp. demonstrable experience in building semantic relationships).
· Proficiency in SQL, Python, and PySpark, with demonstrable ability to write and optimize SQL and Spark jobs. Some experience with Apache Kafka and Airflow is a prerequisite as well.
· Hands-on DevOps experience on hyperscaler platforms and Palantir Foundry is necessary.
· Experience in MLOps is a plus.
· Experience in developing and managing scalable architecture and working experience in managing large data sets.
· Open-source contributions (or own repositories highlighting work) on GitHub or Kaggle are a plus.
· Experience with graph data and graph analysis libraries (like Spark GraphX, Python NetworkX, etc.) is a plus.
· A Palantir Foundry Certification (Solution Architect, Data Engineer) is a plus; the certificate should be valid at the time of interview.
· Experience in developing GenAI applications is a plus.

Mandatory skill sets:
· At least 3 years of hands-on experience of building and managing Ontologies on Palantir Foundry.
· At least 3 years of experience with Foundry services.

Preferred skill sets: Palantir Foundry

Years of experience required: 4 to 7 years (3+ years relevant)

Education qualification: Bachelor's degree in computer science, data science or any other Engineering discipline. Master's degree is a plus.
Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Science Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills Palantir (Software) Optional Skills Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
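The data-quality checks mentioned in the Foundry responsibilities above (null and duplicate detection on pipeline outputs) reduce to plain SQL. This sketch uses Python's built-in sqlite3 with an invented `orders` table purely for illustration; Foundry itself exposes the same ideas through its own pipeline tooling.

```python
import sqlite3

# In-memory database with a small, made-up dataset.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, region TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, 120.0, "US"), (2, None, "EU"), (3, 75.5, None), (4, 75.5, "US")],
)

def quality_report(conn):
    """Row counts for common data-quality checks: nulls and duplicate values."""
    checks = {
        "null_amount": "SELECT COUNT(*) FROM orders WHERE amount IS NULL",
        "null_region": "SELECT COUNT(*) FROM orders WHERE region IS NULL",
        "dup_amounts": (
            "SELECT COUNT(*) FROM (SELECT amount FROM orders "
            "WHERE amount IS NOT NULL GROUP BY amount HAVING COUNT(*) > 1)"
        ),
    }
    return {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}

print(quality_report(conn))  # {'null_amount': 1, 'null_region': 1, 'dup_amounts': 1}
```

In a real pipeline these counts would gate promotion of a dataset: non-zero null or duplicate counts fail the check and block the downstream transform.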

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications:
Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, or a related field, or equivalent practical experience.
5 years of experience in High Bandwidth Memory/Double Data Rate (HBM/DDR).
Experience in silicon bringup, functional validation, characterization, and qualification.
Experience with board schematics, layout, and debug methodologies using lab equipment.

Preferred qualifications:
Experience in hardware emulation with hardware/software integration.
Experience in coding (e.g., Python) for automation development.
Experience in Register-Transfer Level (RTL) design, verification or emulation.
Knowledge of SoC architecture including boot flows.
Knowledge of HBM/DDR standards.

About The Job
In this role, you'll work to shape the future of AI/ML hardware acceleration. You will have an opportunity to drive cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You'll be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's TPU. You'll contribute to the innovation behind products loved by millions worldwide, and leverage your design and verification expertise to verify complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems. In this role, you will be responsible for post-silicon validation of the Cloud Tensor Processing Unit (TPU) projects. You will create test plans and test content for exercising the various subsystems in the Artificial Intelligence/Machine Learning (AI/ML) System on a Chip (SoC), verify the content on pre-silicon platforms, execute the tests on post-silicon platforms, and triage and debug issues. You will work with engineers from architecture, design, design verification, and software/firmware teams.
You will be validating the functional, power, performance, and electrical characteristics of the Cloud Tensor Processing Unit (TPU) silicon to help deliver high-quality designs for next generation data center accelerators. The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.

Responsibilities
Develop and execute tests for memory controller High Bandwidth Memory (HBM) post-silicon validation and on hardware emulators, and assist in bring-up processes from prototyping through post-silicon validation.
Drive debugging and investigation efforts to root-cause cross-functional issues. This includes pre-silicon prototyping platforms as well as post-silicon bringup and production.
Ensure validation provides the necessary functional coverage of the design.
Help operate and maintain our hardware emulation platform for pre-silicon integration and validation.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
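Post-silicon memory validation of the kind described above commonly starts with data-pattern tests. The following is a toy software model of a classic "walking ones" check, with a simulated stuck-at fault standing in for real HBM access; the class and function names are invented for the sketch.

```python
def walking_ones(width: int) -> list[int]:
    """Patterns with a single 1 bit 'walking' across a bus of the given width."""
    return [1 << i for i in range(width)]

def check_memory(mem, width: int = 8) -> list[int]:
    """Write each pattern to an address, read it back, report failing addresses."""
    failures = []
    for addr, pattern in enumerate(walking_ones(width)):
        mem[addr] = pattern          # write
        if mem[addr] != pattern:     # read back and compare
            failures.append(addr)
    return failures

class StuckAtZero(dict):
    """Simulated faulty memory: data bit 3 is stuck at zero on every write."""
    def __setitem__(self, addr, value):
        super().__setitem__(addr, value & ~(1 << 3))

print(check_memory({}))            # healthy memory: no failures
print(check_memory(StuckAtZero()))  # pattern 0b1000 fails at address 3: [3]
```

Real bring-up suites layer many such patterns (walking zeros, checkerboards, address-in-address) over the full address and frequency range; the write/read/compare loop is the common core.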

Posted 1 week ago

Apply

10.0 - 15.0 years

20 - 30 Lacs

Hyderabad, Bengaluru

Hybrid

About Client: Hiring for one of the most prestigious multinational corporations!

Job Title: AES SAP Vertex (Manager)
Required skills and qualifications: 3 to 6 years of hands-on experience in project management. Good experience required with Vertex setups (Tax Product Category (TPC), Tax Product Driver (TPD), Tax Assist Rules). Strong knowledge of SAP FICO business processes like P2P, OTC, AP, AR, GL, and Controlling; strong knowledge of month/year-end processes.
Qualification: Any Graduate or above
Relevant Experience: 10 to 15 years
Location: Bangalore/Hyderabad
CTC Range: 20 to 30 LPA
Notice period: 0 to 90 days
Mode of Interview: Virtual

Chaithra B
Staffing Analyst - IT Recruiter
Black and White Business Solutions Pvt Ltd
Bangalore, Karnataka, INDIA
chaithra.b@blackwhite.in / www.blackwhite.in
+91 8067432409

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Minimum qualifications:
Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience.
8 years of experience in Application-Specific Integrated Circuit (ASIC) development, with power optimization.
Experience with low power schemes, power roll up, and power estimations.
Experience in ASIC design verification, synthesis, timing analysis.

Preferred qualifications:
Experience with coding languages (e.g., Python or Perl).
Experience in System on a Chip (SoC) designs and integration flows.
Experience with power optimization and power modeling tools.
Knowledge of high performance and low power design techniques.

About The Job
In this role, you'll work to shape the future of AI/ML hardware acceleration. You will have an opportunity to drive cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You'll be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's TPU. You'll contribute to the innovation behind products loved by millions worldwide, and leverage your design and verification expertise to verify complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems. In this role, you will be part of a team developing Application-Specific Integrated Circuits (ASICs) used to accelerate machine learning computation in data centers. You will collaborate with members of architecture, verification, power and performance, physical design, etc. to specify and deliver quality designs for next generation data center accelerators. You will also solve problems with micro-architecture and practical reasoning solutions, and evaluate design options with performance, power and area in mind.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.

Responsibilities
Participate in defining power management schemes and low power modes.
Create power specifications and Unified Power Format (UPF) definition for System on a Chip (SoC) and subsystems.
Estimate and track power through all phases of the project.
Run power optimization tools, suggest ways to improve power and drive convergence.
Work with cross-functional teams for handoff of power intent and power projections.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About This Role
Wells Fargo is seeking a Senior Software Engineer within Enterprise Core Infrastructure Services (CIS) under the CTO Organization, focused on enabling next-generation Gen-AI solutions on public cloud platforms.

In This Role, You Will
Write, develop, and deploy Terraform code and modules for infrastructure as code to provision and manage Gen-AI services on GCP, Azure, and OpenAI platforms, ensuring production-ready deployments. CIS CTO is a key contributor in delivering and automating the provisioning of cloud infrastructure using Infrastructure as Code for cutting-edge Gen-AI services such as agentic solutions, Agent Development Kit (ADK), Agent-to-Agent (A2A) services, the MCP (Model Context Protocol) framework, and the associated LLM models in the Azure and GCP clouds.
Enable and optimize infrastructure for agentic AI frameworks, including A2A protocols (e.g., Google's proposed frameworks) and the Model Context Protocol (MCP).
Leverage expertise in large language models (LLMs) such as GPT-4, Google models (e.g., Gemini), and Anthropic models to support financial services applications.
Collaborate with cross-functional teams to integrate Gen-AI solutions into financial platforms, ensuring alignment with business needs.
Ensure scalability, security, and performance of cloud-based Gen-AI infrastructure.
Mentor junior engineers and contribute to technical strategy for Gen-AI initiatives.
Apply an understanding of industry best practices and new technologies, influencing and contributing as part of the technology team to meet deliverables and work on new initiatives.
Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals.
Build and enable cloud infrastructure, automating the orchestration of the GCP/Azure cloud platforms for the Wells Fargo enterprise.
Work in a globally distributed team to provide innovative and robust cloud-centric solutions, working closely with the product team and vendors to develop and deploy cloud services that meet customer expectations.

Required Qualifications:
4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
3+ years working with GCP and a proven track record of building complex infrastructure programmatically with IaC tools.
2+ years of experience in Azure Cloud delivering enterprise production-grade services and solutions is a huge plus.
2+ years of hands-on experience with the Infrastructure as Code tool Terraform and GitHub.
Professional cloud certification on GCP and/or Azure.
Infrastructure and automation technologies: orchestration, Harness, Terraform, API development, test-driven development.
Sound knowledge of the following areas, with expertise in one of them: a good understanding of networking, firewalls, and load balancing concepts (IP, DNS, guardrails, VNets), and exposure to cloud security, AD, authentication methods, and RBAC.
Thorough understanding of cloud service offerings on Data, Analytics, and AI/ML, with exposure to analytics and AI/ML services like BigQuery, Vertex AI, Azure AI, OpenAI, Azure Machine Learning, etc.
Proficient with GCP services like the Vertex AI suite, Agent Builder, Vector Search, Dialogflow, Workbench, etc.
Proficient with GCP predictive AI services: ML pipelines, model serving.
Proficient with GCP generative AI services: LLMs, RAG, reasoning engine, evaluation, etc.
Thorough understanding of, and hands-on experience with, GCP Agentspace and NotebookLM.
Thorough understanding of cloud service offerings on security, data protection, and security policy implementation.
Thorough understanding of landing zones and networking, security best practices, monitoring and logging, and risk and controls.
Experience working in Agile environment and product backlog grooming against ongoing engineering work Enterprise Change Management and change control, experience working within procedural and process driven environment Desired Qualifications: Deep expertise in GenAI, agentic frameworks, A2A protocols, and MCP, with hands-on experience in LLMs (e.g., GPT-4, Google Gemini, Anthropic Claude). Advanced proficiency in writing Terraform code and modules for infrastructure as code on GCP, Azure with a focus on production deployment. Should have exposure to Cloud governance and logging/monitoring tools. Experience with Agile, CI/CD, DevOps concepts and SRE principles. Experience in scripting (Shell, Python, Go) Excellent verbal, written, and interpersonal communication skills. Ability to articulate technical solutions to both technical and business audiences Ability to deliver & engage with partners effectively in a multi-cultural environment by demonstrating co-ownership & accountability in a matrix structure. Delivery focus and willingness to work in a fast-paced, enterprise environment. Posting End Date: 31 Jul 2025 Job posting may come down early due to volume of applicants. We Value Equal Opportunity Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. 
They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements. Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. Applicants With Disabilities To request a medical accommodation during the application or interview process, visit Disability Inclusion at Wells Fargo . Drug and Alcohol Policy Wells Fargo maintains a drug free workplace. Please see our Drug and Alcohol Policy to learn more. Wells Fargo Recruitment And Hiring Requirements Third-Party recordings are prohibited unless authorized by Wells Fargo. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process. Reference Number R-472121

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description

Job Title: Data Science
Candidate Specification: 6+ years, Notice: Immediate to 15 days, Hybrid model.

Job Description: 5+ years of hands-on experience as an AI Engineer, Machine Learning Engineer, or a similar role focused on building and deploying AI/ML solutions. Strong proficiency in Python and its relevant ML/data science libraries (e.g., NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch). Extensive experience with at least one major deep learning framework such as TensorFlow, PyTorch, or Keras. Solid understanding of machine learning principles, algorithms (e.g., regression, classification, clustering, ensemble methods), and statistical modeling. Experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services (e.g., SageMaker, Azure ML, Vertex AI).

Skills Required
Role: Data Science (AI/ML)
Industry Type: IT Services & Consulting
Functional Area: IT-Software
Required Education: Bachelor Degree
Employment Type: Full Time, Permanent

Key Skills: Data Science, AI Engineer, Machine Learning, AI/ML, Python, AWS

Other Information
Job Code: GO/JC/686/2025
Recruiter Name: Sheena Rakesh
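As a concrete instance of the regression modeling listed above, ordinary least squares for a single feature fits in a few lines of plain Python. The data here is a toy example chosen so the fitted line is exact; a library like Scikit-learn would be used in practice.

```python
def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Toy data lying exactly on y = 2x + 1.
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0
```

The same closed-form solution underlies `LinearRegression` in Scikit-learn for the single-feature case; multi-feature models generalize it with matrix algebra.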

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company – Attentive OS Pvt Ltd
Location – Noida
Department – Growth

Attentive.ai is a fast-growing vertical SaaS start-up, funded by Peak XV (Surge), InfoEdge, Vertex Ventures, and Tenacity Ventures, that provides innovative software solutions for the landscape, paving & construction industries in the US & Canada. Our mission is to help businesses in this space improve their operations and grow their revenue through our simple & easy-to-use software platforms.

Position Description
Attentive.ai is looking for a Senior Analyst – Product Marketing to help shape how we communicate our product value across the funnel. This is a content-forward role focused on bringing product messaging to life through high-converting downloadable assets, from landing pages and case studies to email sequences and sales enablement materials. You'll work closely with the Sales, Product, and Customer Success teams to ensure every piece of content we put out helps customers understand, evaluate, and choose Attentive.ai with confidence.

Roles & Responsibilities
Create Long & Short-Form Content: Craft compelling blog posts, landing pages, email campaigns, and product one-pagers. Collaborate cross-functionally to align content with product positioning and launch priorities.
Case Studies & Customer Validation: Interview customers and translate their wins into narrative-driven case studies. Create customer-facing collateral that brings our product impact to life.
Build Sales Enablement Collateral: Develop pitch decks, sales one-pagers, competitive battlecards, and objection-handling sheets. Keep collateral up to date as messaging evolves and product features expand.
Distribute Content Across Channels: Amplify product and customer content through relevant social media platforms, collaborating with the team to drive awareness and engagement among target personas.
Drive Content for Conversion: Support the creation of campaign assets that improve demo bookings and lead nurture while iterating on messaging and formats across the buyer journey.

Required Skills & Experience
3–5 years of experience in content marketing, product marketing, or B2B SaaS storytelling
A portfolio of content that shows you can turn complex product ideas into clear, engaging, and effective assets
Strong collaboration skills - you can work with PMs, AEs, CSMs, and designers to bring ideas to life
Excellent writing and editing skills with attention to clarity, tone, and structure
Experience with tools like HubSpot, Google Docs/Sheets, or content operations workflows
Exposure to field services, construction tech, or vertical SaaS is a plus, but not required
Exceptional communication skills

Why work with us
Be part of a fast-scaling SaaS company building in a high-impact, underserved industry
Shape product GTM through high-leverage content that actually gets used
A culture that values creativity, clarity, and customer-first thinking
Competitive compensation and the opportunity to grow with a global, driven team

Posted 1 week ago

Apply

5.0 years

9 - 16 Lacs

India

On-site

Gen-AI Tech Lead - Enterprise AI Applications

About Us
We're a cutting-edge technology company building enterprise-grade AI solutions that transform how businesses operate. Our platform leverages the latest in Generative AI to create intelligent applications for document processing, automated decision-making, and knowledge management across industries.

Role Overview
We're seeking an exceptional Gen-AI Tech Lead to architect, build, and scale our next-generation AI-powered enterprise applications. You'll lead the technical strategy for implementing Large Language Models, fine-tuning custom models, and deploying production-ready AI systems that serve millions of users.

Key Responsibilities

AI/ML Leadership (90% Hands-on)
Design and implement enterprise-scale Generative AI applications using custom or foundation LLMs (GPT, Claude, Llama, Gemini)
Lead fine-tuning initiatives for domain-specific models and custom use cases
Build and optimize model training pipelines for large-scale data processing
Develop RAG (Retrieval-Augmented Generation) systems with vector databases and semantic search
Implement prompt engineering strategies and automated prompt optimization
Create AI evaluation frameworks and model performance monitoring systems

Enterprise Application Development
Build scalable Python applications integrating multiple AI models and APIs
Develop microservices architectures for AI model serving and orchestration
Implement real-time AI inference systems with sub-second response times
Design fault-tolerant systems with fallback mechanisms and error handling
Create APIs and SDKs for enterprise AI integration
Build AI model version control and A/B testing frameworks

MLOps & Infrastructure
Containerize AI applications using Docker and orchestrate with Kubernetes
Design and implement CI/CD pipelines for ML model deployment
Set up model monitoring, drift detection, and automated retraining systems
Optimize inference performance and cost efficiency in cloud environments
Implement security and compliance measures for enterprise AI applications

Technical Leadership
Lead a team of 3-5 AI engineers and data scientists
Establish best practices for AI development, testing, and deployment
Mentor team members on cutting-edge AI technologies and techniques
Collaborate with product and business teams to translate requirements into AI solutions
Drive technical decision-making for AI architecture and technology stack

Required Skills & Experience

Core AI/ML Expertise
Python: 5+ years of production Python development with AI/ML libraries
LLMs: Hands-on experience with GPT-4, Claude, Llama 2/3, Gemini, or similar models
Fine-tuning: Proven experience fine-tuning models using LoRA, QLoRA, or full-parameter tuning
Model Training: Experience training models from scratch or with continued pre-training
Frameworks: Expert-level knowledge of PyTorch, TensorFlow, Hugging Face Transformers
Vector Databases: Experience with Pinecone, Weaviate, ChromaDB, or Qdrant

Technical Stack

AI/ML Stack
Models: OpenAI GPT, Anthropic Claude, Meta Llama, Google Gemini
Frameworks: PyTorch, Hugging Face Transformers, LangChain, LlamaIndex
Training: Distributed training with DeepSpeed, Accelerate, or Fairscale
Serving: vLLM, TensorRT-LLM, or Triton Inference Server
Vector Search: Pinecone, Weaviate, FAISS, Elasticsearch

Infrastructure & DevOps
Containerization: Docker, Kubernetes, Helm charts
Cloud: AWS (ECS, EKS, Lambda, SageMaker), GCP Vertex AI
Databases: PostgreSQL, MongoDB, Redis, Neo4j
Monitoring: Prometheus, Grafana, DataDog, MLflow
CI/CD: GitHub Actions, Jenkins, ArgoCD

Professional Growth
Work directly with founders and C-level executives
Opportunity to publish research and speak at AI conferences
Access to the latest AI models and cutting-edge research
Mentorship from industry experts and AI researchers
Budget for attending top AI conferences (NeurIPS, ICML, ICLR)

Ideal Candidate Profile
Passionate about pushing the boundaries of AI technology
Strong engineering mindset with a focus on production systems
Experience shipping AI products used by thousands of users
Stays current with the latest AI research and implements cutting-edge techniques
Excellent problem-solving skills and ability to work under ambiguity
Leadership experience in fast-paced, high-growth environments

Apply now and help us democratize AI for enterprise customers worldwide.

Job Type: Full-time
Pay: ₹900,000.00 - ₹1,600,000.00 per year
Schedule: Monday to Friday
Supplemental Pay: Performance bonus
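The RAG responsibilities the posting describes follow a standard pattern: embed documents, retrieve the top-k most similar to a query, and assemble them into a prompt for the LLM. A minimal plain-Python sketch of that pattern (toy bag-of-words "embeddings" stand in for a real embedding model and vector database; the function names and sample documents are illustrative, not from the posting):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words counts. A real RAG system would call an
    # embedding model and store vectors in a vector database instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank all documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Assemble retrieved context plus the question into one LLM prompt.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Invoices are processed nightly by the billing service.",
    "The knowledge base is refreshed every Monday.",
    "Billing disputes are escalated to the finance team.",
]
print(build_prompt("How are invoices processed?", docs))
```

The same retrieve-then-prompt shape underlies production RAG stacks; swapping in a real embedding model and a vector store changes the implementations of `embed` and `retrieve` but not the flow.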

Posted 1 week ago

Apply

8.0 - 12.0 years

5 - 10 Lacs

Noida

On-site

Senior Assistant Vice President
EXL/SAVP/1418398 | Digital Solutions | Noida
Posted On 24 Jul 2025 | End Date 07 Sep 2025 | Required Experience 8 - 12 Years

Basic Section
Number Of Positions: 1
Band: D2
Band Name: Senior Assistant Vice President
Cost Code: D014959
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 4500000.0000 - 6000000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational Group: EXL Digital
Sub Group: Digital Solutions
Organization: Digital Solutions
LOB: CX Transformation Practice
SBU: CX Capability Development
Country: India
City: Noida
Center: Noida - Centre 59

Skills
TECHNICAL CONSULTING
DATA SCIENCE - AI
PRE-SALES CONSULTING
SOLUTIONING

Minimum Qualification: GRADUATION
Certification: No data available

Job Description
Designation and SRF Name: Technical CX Consultant role
Role: Permanent/Full time
Panel and Hiring Manager: Sanjay Pathak
Experience: 8-12 years relevant experience
Location: Noida/Gurgaon/Pune/Bangalore
Shift: 12 PM to 10 PM (10-hour shift; also depends on project/work dependencies)
Working Days: 5 days
Work Mode: Hybrid

We are looking for a highly skilled CX Consultant with deep expertise in CCaaS, integrations, IVR, Natural Language Processing (NLP), language models, and scalable cloud-based solution deployment.

Skills:
Technical Expertise: Deep understanding of Conversational AI, Smart Agent Assist, and CCaaS platforms and their technical capabilities. Stay current with industry trends, emerging technologies, and competitor offerings.
Customer Engagement: Engage with prospective clients to understand their technical requirements and business challenges. Conduct needs assessments and provide tailored technical solutions.
Solution Demonstrations: Deliver compelling product demonstrations that showcase the features and benefits of our solutions. Customize demonstrations to align with the specific needs and use cases of potential customers.
Strong NLP and language-model fundamentals (e.g., transformer architectures, embeddings, tokenization, fine-tuning).
Expert in Python, with clean, modular, and scalable coding practices.
Experience developing and deploying solutions on Azure, AWS, or Google Cloud Platform.
Familiarity with Vertex AI, including Model Registry, Pipelines, and RAG integrations (preferred).
Experience with PyTorch, including model training, evaluation, and serving.
Knowledge of GPU-based inferencing (e.g., ONNX, TorchScript, Triton Inference Server).
Understanding of ML lifecycle management, including MLOps best practices.
Experience with containerization (Docker) and orchestration tools (e.g., Kubernetes).
Exposure to REST APIs, gRPC, and real-time data pipelines is a plus.
Degree in Computer Science, Mathematics, Computational Linguistics, AI, ML, or a similar field. PhD is a plus.

Responsibilities:
Consult on and design end-to-end AI solutions for CX.
Lead consulting engagements for scalable AI services on cloud infrastructure (Azure/AWS/GCP).
Collaborate with engineering, product, and data teams to define AI-driven features and solutions.
Optimize model performance, scalability, and cost across CPU and GPU environments.
Ensure reliable model serving with a focus on low-latency, high-throughput inferencing.
Keep abreast of the latest advancements in NLP, LLMs, and AI infrastructure.

Workflow Type: Digital Solution Center
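Among the NLP fundamentals this role lists is tokenization. Byte-pair encoding (BPE), the scheme behind most transformer tokenizers, can be illustrated in a few lines of plain Python: repeatedly find the most frequent adjacent symbol pair in the corpus and merge it into a new symbol. This is a toy sketch with an invented three-word corpus, not a production tokenizer:

```python
from collections import Counter

def most_frequent_pair(words: dict) -> tuple:
    # Count adjacent symbol pairs across the corpus, weighted by word frequency.
    pairs = Counter()
    for symbols, freq in words.items():
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return max(pairs, key=pairs.get)

def merge_pair(words: dict, pair: tuple) -> dict:
    # Replace every occurrence of the pair with a single merged symbol.
    merged = {}
    for symbols, freq in words.items():
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

# Corpus as {word-as-symbol-tuple: frequency}; start from single characters.
words = {tuple("lower"): 5, tuple("lowest"): 2, tuple("newer"): 6}
merges = []
for _ in range(3):
    pair = most_frequent_pair(words)
    merges.append(pair)
    words = merge_pair(words, pair)
print(merges)  # learned merge rules, most frequent first
```

Real tokenizers learn tens of thousands of such merges from large corpora and apply them to split unseen words into known subword units.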

Posted 1 week ago

Apply

0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Primary Purpose
Be the representative/front face of the organization to existing and potential parents, enabling the organization to retain its existing parent base and expand its new parent base to meet its envisioned growth.

Parent Relationship Management:
Ensure all parents are aptly welcomed and comfortably seated.
Effectively address/resolve parents' enquiries across mediums, i.e. in person, over the phone, by email, via the company website, etc.
Escalate all unresolved grievances of parents to the Principal and the Marketing Team at Vertex for prompt resolution, ending with parent delight.
Adroitly track all parent queries via organizational query-tracking mechanisms such as CRM.
Generate parent delight by ensuring high responsiveness, closing the loop with parents on all issues, and keeping them updated/engaged during the resolution process.
Efficiently guide parents on school systems and processes and ensure that a repository of updated information is always available.
Ensure an ambient and parent-friendly environment in the front-office area with assistance from the admin department.
Facilitate information on all elements pertaining to a child's life cycle in the school as well as post-school activities, summer camps, etc.

Sales and Marketing:
Pre-sales: Efficiently manage the pre-sales process, e.g. keeping track of all leads, whether from the web, telephone, or walk-ins, and participate in planning activities such as society camps, mall activities, pre-school tie-ups, corporate tie-ups, RWA events, and parent engagement activities.
Handle the entire sales process effectively for potential parents, from first interface to closure, thus positively augmenting conversions from walk-ins to admissions.
Contact potential parents, discuss their requirements, and present the VIBGYOR brand in a way that matches parent needs.
Be an active team member in achieving the annual admission targets and objectives in line with the Organization Admission Target Plan.
Initiate and participate in marketing initiatives to create brand awareness and promote USPs such as Summer Camps, Day Care, and PSA activities.
Efficiently maintain the upkeep of all elements of the discovery room and proactively inform the reporting manager of any support needed.
Conduct campus tours for parents for an enhanced experience.
Ensure first-time-right implementation of all processes and policies.

Administrative Responsibilities
Record all admission registrations on the Lead Management System/MIS tracker as per the process guidelines.
Plan effectively and organize the day to ensure all opportunities are maximized.
Ensure that all audits are attended to, activities and tasks are done proactively, and audit reports are quickly acted on for closure, if required.

Desired Qualification
Graduate/Postgraduate in any discipline, preferably in Business Administration or Marketing.

Experience
1-5 years with prior sales work experience, preferably in the education space.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience in building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks and can translate complex business goals into innovative, AI-driven technical solutions.

Primary Stack & Tools:
Languages: Python, SQL, Bash
ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI
Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen
Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch
Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock)
MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI
Data Tools: Snowflake, BigQuery, Spark, Airflow

Key Responsibilities:
Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search).
Design and oversee implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases.
Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use.
Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps.
Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance.
Manage the technical roadmap and sprint cadence for a team of 3-5 AI engineers; coach them on best practices.
Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards.
Evaluate emerging GenAI and agentic tools; run proofs of concept and guide build-vs-buy decisions.

Qualifications:
10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design.
Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems.
Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs.
Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure.
Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming).
Familiarity with data governance, AI ethics, and security considerations in production-grade systems.
Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
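The agentic workflows this role calls for (autonomous task execution, planning, memory, tool use) reduce to a dispatch loop: a planner chooses a tool, the tool runs, and the result is appended to memory. A deliberately minimal, framework-free sketch of that loop; in LangGraph or CrewAI the planner would be an LLM call, and the tool names and `tool: argument` task format here are invented for illustration:

```python
def search_kb(query: str) -> str:
    # Stand-in for a RAG retrieval call against a knowledge base.
    kb = {"refund policy": "Refunds are issued within 14 days."}
    return kb.get(query.lower(), "No entry found.")

def calculator(expr: str) -> str:
    # Stand-in for a safe math tool (eval with builtins stripped, sketch only).
    return str(eval(expr, {"__builtins__": {}}, {}))

TOOLS = {"calc": calculator, "search": search_kb}

def run_agent(task: str, memory: list[str]) -> str:
    # Toy "planner": the task itself names the tool, e.g. "calc: 2 + 2".
    tool, arg = task.split(":", 1)
    result = TOOLS[tool](arg.strip())
    memory.append(f"{task} -> {result}")  # episodic memory of tool calls
    return result

memory: list[str] = []
print(run_agent("calc: 2 * (3 + 4)", memory))      # 14
print(run_agent("search: refund policy", memory))
print(memory)
```

Production multi-agent frameworks add LLM-driven planning, retries, and structured state, but the plan-act-remember skeleton is the same.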

Posted 1 week ago

Apply

0 years

0 Lacs

India

On-site

As our first dedicated AI/ML hire, you'll architect and ship the core learning systems that make our platform self-optimizing.

What you'll build & own

0-3 months (high-impact work):
Stand up model-serving infra (Vertex AI or GKE) for policy nets.
Productionize a CUPED variance-reduction pipeline in BigQuery ML.
Pair with the founder to feed reward signals into the agentic runtime.

3-9 months:
Ship a real-time Reinforcement-Learning Budget Optimizer (Thompson Sampling → PPO).
Automate synthetic-control jobs on Vertex AI for geo-locked campaigns.
Build a feature store merging offline postal scans & streaming web events.

9-18 months:
Fine-tune LLMs with LoRA/QLoRA for dynamic copy & template generation, and craft robust prompt libraries (system/user prompts, chain-of-thought, compression).
Launch an experiment-design module (fractional-factorial).
Mentor incoming ML/Data hires; set MLOps standards.

You might be a fit if you have:
5+ yrs production ML / data-platform engineering (Python or Go/Kotlin).
Deployed RL or bandit systems (ad budget, recommender, or game AI) at scale.
Fluency with BigQuery / Snowflake SQL & ML plus streaming (Kafka / Pub/Sub).
Hands-on LLM fine-tuning using LoRA/QLoRA and proven prompt-engineering skills (system/assistant hierarchies, few-shot, prompt compression).
Comfort running GPU & CPU model serving on GCP (Vertex AI, GKE, or bare-metal K8s).
Solid causal-inference experience (CUPED, diff-in-diff, synthetic control, uplift).
CI/CD, IaC (Terraform or Pulumi) & observability chops (Prometheus, Grafana).
Bias toward shipping working software over polishing research papers.

Bonus points for:
Postal/geo datasets, adtech, or martech domain exposure.
Packaging RL models as secure microservices.
VPC-SC, NIST, or SOC 2 controls in a regulated data environment.
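The "Thompson Sampling → PPO" budget optimizer on this roadmap starts from a classic Beta-Bernoulli bandit: each channel keeps a Beta posterior over its conversion rate, and each round the budget goes to the channel with the best posterior draw. A hedged, pure-Python sketch of that first stage; the channel names, conversion rates, and winner-take-all allocation are all invented for illustration:

```python
import random

random.seed(7)

class Channel:
    def __init__(self, name: str):
        self.name = name
        self.successes = 1  # Beta(1, 1) uniform prior
        self.failures = 1

    def sample(self) -> float:
        # One Thompson draw from the posterior over this channel's rate.
        return random.betavariate(self.successes, self.failures)

    def update(self, converted: bool) -> None:
        if converted:
            self.successes += 1
        else:
            self.failures += 1

def allocate(channels: list[Channel], budget: float) -> dict[str, float]:
    # Winner-take-all on the sampled rates, for simplicity; real systems
    # often split budget proportionally across many posterior draws.
    best = max(channels, key=lambda c: c.sample())
    return {c.name: (budget if c is best else 0.0) for c in channels}

channels = [Channel("email"), Channel("postal"), Channel("social")]
# Simulated feedback: "postal" converts 60% of the time, the others 10%.
true_rates = {"email": 0.1, "postal": 0.6, "social": 0.1}
for _ in range(200):
    plan = allocate(channels, budget=100.0)
    for c in channels:
        if plan[c.name] > 0:
            c.update(random.random() < true_rates[c.name])

print({c.name: (c.successes, c.failures) for c in channels})
```

As evidence accumulates, the posterior for the high-converting channel concentrates and wins most draws, which is the exploration-exploitation behavior a budget optimizer needs before graduating to policy-gradient methods like PPO.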

Posted 1 week ago

Apply