
615 Vertex Jobs - Page 17

JobPe aggregates listings so they are easy to find in one place, but you apply directly on the original job portal.

5.0 - 9.0 years

8 - 12 Lacs

Bengaluru

Remote


Greetings! We have an urgent opening for a Tax Analyst (Vertex, Oracle Cloud Fusion Tax).

Role: Vertex, Oracle Cloud Fusion Tax
Location: Remote
Duration: Long-term contract
Budget: 13 LPA
Notice period: Immediate to 15 days joiner
Mandatory skills: Vertex, Oracle Cloud Fusion Tax, Oracle Apps Finance, UAT, System Integration Testing

Job description:
At least 3+ years of total working experience; experience with mergers and acquisitions is highly desirable. Strong expertise in Vertex and Oracle Cloud Fusion Tax. Good knowledge of Oracle Apps Finance (EBS) modules. Conduct System Integration Testing, work collaboratively with team members, and support the business during UAT. Lead troubleshooting efforts on critical issues with cross-functional teams through to resolution. Work with functional leadership teams to set expectations and to plan and execute SME training activities. Strong technical and analytical problem-solving skills, with the ability to suggest solutions. Experience with the Vertex tax engine and its integrations with other applications and billing platforms. Participate in projects throughout their lifecycle to ensure that requirements are met, including making tax rule configurations, reviewing test cases, troubleshooting issues, and ensuring alignment across teams. Working knowledge of indirect taxes, including US sales taxes, EU value-added taxes, and other global transaction taxes. Understand the business environment and help connect and align the relevant business stakeholders for complex tax requirements. Develop as-is and to-be process flows to support the design and development of solutions. Partner with Vertex, Oracle, and e-invoicing vendors for tax compliance.

If you're interested, please send your resume to suhas@iitjobs.com.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


What You'll Do
Oversee the implementation of detailed technology solutions for clients using company products, outsourced solutions, or proprietary tools and techniques. As a member of the Avalara Implementation team, your goal is to provide world-class service to our customers. You will live by our "cult of the customer" philosophy and increase the satisfaction of our customers. As part of the Implementation team, you will focus on New Product Introductions, with an enhanced focus on customer onboarding. You will work from the Pune office five days a week and report to the Manager, Implementation.

What Your Responsibilities Will Be
Lead planning and delivery of multiple client implementations simultaneously. Ensure that customer requirements are defined and met within the configuration and the final deliverable. Coordinate between internal implementation and technical resources and client teams to ensure smooth delivery. Assist clients with developing testing plans and procedures. Train clients on all Avalara products and services, including the ERP and e-commerce integrations (called "AvaTax connectors"). Demo sales and use tax products, including pre-written and custom-built software applications. Support customers' success by answering application questions, tracking issues, monitoring changes, and resolving or escalating problems according to company guidelines. Provide training and end-user support during customer onboarding. Given that our clientele is based in the US, you should be prepared to work in alignment with US business hours.

What You'll Need To Be Successful
2-5 years of software implementation within the B2B sector. Bachelor's degree (BCA, MCA, B.Tech) from an accredited college or university, or equivalent career experience. Ability to install and configure the following ERPs: WooCommerce, Sage 100, Sage Intacct, Dynamics GP, D365 Sales, D365 Business Central, Salesforce Sales Cloud, NetSuite, and QuickBooks, along with the ability to explain the various configuration options and demonstrate sales order/invoicing processes. Experience with tax automation: leading the implementation of tax engines, returns, and/or exemption certificate systems for Avalara, TaxJar, Vertex, or similar software. Knowledge of APIs. Experience implementing ERP solutions.

How We'll Take Care Of You
Total Rewards: in addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness: benefits vary by location but generally include private medical, life, and disability insurance. Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.

What You Need To Know About Avalara
We're Avalara. We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing nearly 40 billion customer API calls and over 5 million tax returns a year, and this year we became a billion-dollar business. Our growth is real, and we're not slowing down until we've achieved our mission: to be part of every transaction in the world. We're bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed, one that empowers our people to win. Ownership and achievement go hand in hand here. We instill passion in our people through the trust we place in them. We've been different from day one. Join us, and your career will be too.

We're An Equal Opportunity Employer
Supporting diversity and inclusion is a cornerstone of our company; we don't want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Company Description
At Konecranes, we believe that great customer experience is built on the people behind the Konecranes name. Everything we do, we do with passion and drive. We believe diversity drives business success and is the foundation for our growth. We welcome different backgrounds and skills that enrich our community, and we promote a place where we can ALL be ourselves. This is what makes Konecranes a unique place to work.

Job Description
Material handling or crane industry experience in the field of design. The candidate should have knowledge of crane design and be strong in the basics of mechanical engineering. Good at preparing drawings in AutoCAD. Excellent problem-solving and time-management skills. Self-motivated and highly organized. Ability to multi-task, prioritize, and meet deadlines. Strong verbal, written, and listening skills. Customer-focused with a drive for excellent customer satisfaction.

Position Overview
This position requires an excellent understanding of wire rope hoists, cranes, and their components from techno-commercial aspects. The hired engineer will be responsible for direct techno-commercial sales support to the India frontline operations for customized sales design solutions involving wire rope hoists, checking technical specifications from tender documents, and making preliminary offer drawings. Must be familiar with industry specifications, including the FEM and EN crane standards. Excellent verbal and written communication plays a key role in this position, which is the direct interface between sales and the global sales support, order engineering, and production facility teams.

Responsibilities
Act as the direct interface to sales for all needed techno-commercial support on wire rope hoist and crane solutions. Interpret the required technical specifications to determine the scope of work. Search company archives to find records of reference offers. Must be proficient in reading mechanical equipment and structural drawings. Perform offer engineering tasks using established procedures. Create equipment layouts, undertake all necessary mechanical and structural calculations to confirm designs, and generate solution-based mechanical and structural designs. Escalate and coordinate support for offer inquiries with the global offer support team. Communicate the schedule for delivery of offer requests to the requestor and the reporting manager. Assure quality in the drawing and design process by monitoring Key Performance Indicators (KPIs) and implementing corrective actions. Awareness of ISO 14001 and 45001 standards.

Experience
Minimum education requirement is a Bachelor of Engineering in Mechanical, with a minimum of 1-2 years of experience with wire rope hoists or similar industrial equipment. Good design knowledge of wire rope hoists, cranes, and components. Good understanding of the business requirements in order to pitch the most cost-effective solution to sales. Experience working in cross-functional teams. Excellent communication skills.

Critical Competencies
Strong mechanical aptitude with a commercial, customer-oriented mindset. Competency in a 2D CAD tool (Vertex) and SAP will be an added advantage. Can-do attitude and the ability to generate solutions. Ability to create and implement new ways of working, create trust, and work in effective cross-functional teams. Excellent communication skills.

Qualifications
BE (Mechanical)

Additional Information
What we can offer you: competitive salary, work-life balance, an innovative and dynamic working environment, the possibility to work in a leading crane-building company with leading technology, and the opportunity to work on a global platform.

Why join us? We are a global company with a history dating back to 1910, and a future-looking attitude that has brought us here today. Together, we are shaping the next generation of material handling for a smarter and safer world. We believe in creating a workplace built on trust, flexibility, friendliness, and inclusivity towards each other, with a culture of open communication and low hierarchy. We are a strong expert organisation where you are able to use modern tools and technologies, while embracing agile methodologies and continuous learning and development. Want to learn more about Konecranes IT and what your future colleagues have to say? Visit our IT career pages at www.konecranes.com/careers/explore-our-roles/it-careers

Interested? If this role sparked your interest, please submit your application by 30.06.25 at the latest on our career site. We will contact you after the application period at the latest. Have questions? Please contact us by email: harshita.agrahari@konecranes.com

Konecranes moves what matters. We are a global leader in material handling solutions, serving a broad range of customers across multiple industries. We consistently set the industry benchmark, from everyday improvements to breakthroughs at the moments that matter most, because we know we can always find a safer, more productive, and sustainable way. That's why, with 16,000+ professionals in over 50 countries, we are trusted every day to lift, handle, and move what the world needs. Konecranes is committed to ensuring that all employees and job applicants are treated fairly in an environment free from any form of discrimination.

Posted 2 weeks ago

Apply

4.0 - 6.0 years

12 - 21 Lacs

Hyderabad

Work from Office


Role: Vertex AI Developer
Location: Hyderabad / Chennai (Hybrid)

Roles and Responsibilities
Must-have skill sets: Spring Reactive, Microservices, Java, GCP, AWS, LucidWorks Fusion, Apache Solr, Python, React.js, and Google Vertex Search.
Designing, developing, and deploying scalable and efficient microservices using Java and related technologies. Collaborating with cross-functional teams, including product managers, architects, and other developers, to define and implement microservices solutions. Writing clean, maintainable, and testable code following best practices and design patterns. Ensuring the performance, scalability, and reliability of the microservices by conducting thorough testing and optimization. Integrating microservices with other systems and third-party APIs to enable seamless data exchange. Monitoring, troubleshooting, and resolving issues in the microservices architecture to ensure high availability and performance. Good working knowledge of design patterns and a good understanding of the software development life cycle (SDLC).

Critical Skills to Possess
3+ years of work experience with web applications. Experience designing microservices using Spring, Spring Boot, and Spring Cloud. Experience with both relational and NoSQL databases (MySQL, Couchbase). Experience writing unit test (JUnit) cases during application development. Experience with Jenkins for build and deployment jobs, and an understanding of CI/CD.

Preferred Qualifications
Bachelor's degree in computer science or a related field (or equivalent work experience).

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description:
We are seeking experienced Python developers with hands-on expertise in Google Cloud Vertex AI. The ideal candidate will have a strong background in machine learning model development, deployment pipelines, and cloud-native applications.

Key Skills:
• Advanced proficiency in Python
• Experience with Vertex AI (training, deployment, pipelines, model registry); see the deployment sketch after this listing
• Familiarity with Google Cloud Platform (GCP) services like BigQuery, Cloud Functions, AI Platform
• Understanding of the ML lifecycle, including data preprocessing, training, evaluation, and monitoring
• CI/CD experience with ML workflows (e.g., Kubeflow, TFX, or Vertex Pipelines)

Preferred:
• Experience integrating Vertex AI with DBT, Airflow, or Looker
• Exposure to MLOps and model governance
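To illustrate the kind of Vertex AI deployment work this posting describes, here is a minimal, hypothetical sketch using the google-cloud-aiplatform Python SDK. The project ID, bucket path, and serving container are placeholders, not details from the posting.

```python
# Minimal sketch: register a trained model in the Vertex AI Model Registry and
# deploy it to an online prediction endpoint. All identifiers are placeholders.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Upload the model artifact (e.g. a scikit-learn model exported to Cloud Storage).
model = aiplatform.Model.upload(
    display_name="demand-classifier",
    artifact_uri="gs://my-bucket/models/demand-classifier/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-3:latest"
    ),
)

# Create an endpoint and deploy the model for online predictions.
endpoint = model.deploy(machine_type="n1-standard-4", min_replica_count=1)

# Send a test prediction request.
print(endpoint.predict(instances=[[5.1, 3.5, 1.4, 0.2]]))
```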

Posted 2 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Coimbatore

Work from Office


Job Summary:
We are seeking a Senior Data & AI/ML Engineer (Lead) with strong expertise in Google Cloud Platform (GCP) and hands-on experience in building, deploying, and managing machine learning solutions at scale. The ideal candidate will lead AI/ML initiatives, mentor a team of engineers, and collaborate cross-functionally to drive data-driven innovation and business value.

Key Responsibilities:
Lead the end-to-end design and implementation of scalable AI/ML models and data pipelines on GCP. Drive architecture and design discussions for AI/ML solutions across cloud-native environments. Collaborate with data scientists, analysts, and business stakeholders to define requirements and deliver intelligent solutions. Manage and optimize data pipelines using tools such as Dataflow, Pub/Sub, BigQuery, and Cloud Functions. Deploy ML models using Vertex AI, AI Platform, or custom CI/CD pipelines (a minimal pipeline sketch follows this listing). Monitor model performance and manage model retraining, versioning, and lifecycle. Ensure best practices in data governance, security, and compliance. Mentor junior engineers and data scientists; provide leadership in code reviews and project planning.

Required Skills:
8+ years of experience in Data Engineering, Machine Learning, or AI application development. Strong programming skills in Python (preferred) and/or Java/Scala. Hands-on experience with GCP services: BigQuery, Vertex AI, Cloud Functions, Dataflow, Pub/Sub, GCS, etc. Proficiency in ML libraries/frameworks such as TensorFlow, PyTorch, and Scikit-learn. Deep understanding of data modeling, feature engineering, and model evaluation techniques. Experience with Docker, Kubernetes, and MLOps tools. Strong background in SQL and data warehousing concepts. Exposure to data security and compliance best practices (GDPR, HIPAA, etc.).

Nice to Have:
GCP certification (e.g., Professional Machine Learning Engineer or Professional Data Engineer). Experience with streaming data architectures. Familiarity with AI ethics, explainability, and bias mitigation techniques.

Education:
Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
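As a hedged illustration of the pipeline work mentioned above, here is a minimal two-step Vertex AI Pipelines sketch, assuming the kfp (v2) and google-cloud-aiplatform SDKs. The component logic, project, and bucket names are placeholders only.

```python
# Minimal sketch: a two-step KFP pipeline compiled and submitted to
# Vertex AI Pipelines. Steps and identifiers are illustrative placeholders.
from kfp import compiler, dsl
from google.cloud import aiplatform

@dsl.component(base_image="python:3.10")
def prepare_data(source_table: str) -> str:
    # Placeholder: pull and clean training data, return its output path.
    return f"gs://my-bucket/prepared/{source_table}.parquet"

@dsl.component(base_image="python:3.10")
def train_model(data_path: str) -> str:
    # Placeholder: train a model on the prepared data, return a model URI.
    return f"{data_path}.model"

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline(source_table: str = "sales_events"):
    data = prepare_data(source_table=source_table)
    train_model(data_path=data.output)

if __name__ == "__main__":
    compiler.Compiler().compile(training_pipeline, "pipeline.json")
    aiplatform.init(project="my-gcp-project", location="us-central1")
    job = aiplatform.PipelineJob(
        display_name="demo-training-pipeline",
        template_path="pipeline.json",
        pipeline_root="gs://my-bucket/pipeline-root",
    )
    job.run()
```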

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Andhra Pradesh, India

On-site


Job Title: Senior Generative AI Developer

Job Summary
We are seeking a Senior Generative AI Developer to design, develop, and optimize AI-powered applications using LLMs, NLP, and AI orchestration frameworks. The ideal candidate will have hands-on experience with generative AI models, cloud-based AI deployments (Google Vertex AI preferred), and multi-agent AI architectures.

Key Responsibilities
Develop and fine-tune generative AI models (GPT, Gemini, Claude) for real-world applications. Build conversational AI solutions using Vertex AI Agent Builder or frameworks such as LangChain, CrewAI, and LangGraph. Implement agentic and multi-agent AI architectures for autonomous task execution. Deploy and optimize AI models on cloud platforms (Google Vertex AI, AWS, Azure). Work with retrieval-augmented generation (RAG), embeddings, and vector databases (a toy retrieval sketch follows this listing). Integrate AI models with APIs, databases, and enterprise applications. Ensure AI solutions are scalable, efficient, and compliant with industry standards. Stay updated on emerging AI trends and contribute to innovation initiatives.

Required Skills & Qualifications
5+ years of experience in AI/ML, with a focus on generative AI and NLP. Strong expertise in LLMs, Transformers, and AI orchestration frameworks. Experience with Python, TensorFlow, PyTorch, LangChain, or similar AI libraries. Hands-on experience in cloud AI deployments (Google Vertex AI CCAI / Dialogflow CX / Agent Builder preferred). Understanding of multi-agent AI systems and automation. Experience with vector databases, embeddings, and RAG. Strong problem-solving skills and the ability to work in an agile environment.

Preferred Skills
Experience in LLM fine-tuning, prompt engineering, and reinforcement learning. Experience integrating AI with enterprise-grade applications.
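For context on the RAG work this listing mentions, here is a framework-agnostic toy sketch of the retrieval step: embed documents, embed a query, and pick the most similar chunks to pass to an LLM as context. The embed() function is a stand-in, not a real API, and the documents are invented examples.

```python
# Toy sketch of the retrieval step in a RAG workflow. embed() is a placeholder
# for any embedding model call (e.g. a Vertex AI or sentence-transformers
# endpoint); it is not a real API here.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: replace with a real embedding-model call.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    vec = rng.normal(size=384)
    return vec / np.linalg.norm(vec)

documents = [
    "Refund requests must be filed within 30 days of purchase.",
    "Premium support is available 24/7 for enterprise customers.",
    "Invoices are generated on the first business day of each month.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)       # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

context = retrieve("How long do customers have to ask for a refund?")
prompt = "Answer using only this context:\n" + "\n".join(context)
print(prompt)  # this prompt would then be sent to the LLM of choice
```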

Posted 2 weeks ago

Apply

5.0 - 6.0 years

0 Lacs

Chandigarh, India

On-site


Job Summary
The SAP SD / CS / PS Analyst position at Emerson is an outstanding opportunity for a driven professional to contribute to a world-class IT team. This role is pivotal in ensuring the flawless execution and support of SAP systems across multiple regions and sites. In addition to mastering SAP SD, you will be instrumental in learning, improving, and standardizing CS and PS designs, thereby reducing support complexity. You will manage SD, CS, and PS issues, requests, and changes, and engage in RUN, Change, and project work.

In This Role, Your Responsibilities Will Be:
Complete SAP SD and CS/PS bug fixes and minor improvements. Engage in weekly SD/CS/PS meetings with both internal and external counterparts. Audit third-party processes to ensure strict compliance with Emerson standards. Design and implement SAP SD, CS, and PS configurations and customizations in Emerson Automation Solutions SAP systems. Review and validate all SD, CS, and PS functional specifications built by third parties and the change team. Receive and incorporate knowledge transfers from the change team. Resolve tickets and deliver changes and releases within the Emerson Automation Solutions Run Support team. Provide functional/technical expertise in the SAP SD / CS / PS modules, demonstrating proficiency in SAP configuration and integration with the PP, MM, FICO, and VC modules. Translate business requirements into detailed technical/functional specifications. Collaborate with ABAP, Basis, Security, and other relevant teams to ensure accurate planning, development, and execution of all work. Work closely with business teams to optimize solution delivery and scheduling, ensuring business processes and requirements are detailed and validated against customer expectations. Complete and coordinate workstream activities, primarily for RUN support and minor improvements across multiple systems, ensuring timely progress and addressing any delays or issues. Participate in system testing and regression testing as required, following Emerson processes, policies, and procedures.

Who You Are:
You show a tremendous amount of initiative in tough situations and are exceptional at spotting and seizing opportunities. You observe situational and group dynamics and select the best-fit approach. You make implementation plans that allocate resources precisely. You pursue everything with energy, drive, and the need to finish.

For This Role, You Will Need:
A minimum of 5-6 years of relevant IT experience in SAP. Proven experience coordinating with multiple internal staff and consultants concurrently within a heavily matrixed organization. Strong analytical ability, process orientation, and the ability to manage complexity independently. Experience collaborating in a global environment, including coordination and alignment with multiple collaborators. Proficiency in MS Office, particularly Excel and PowerPoint. Strong verbal and written communication skills in English. Outstanding interpersonal skills. Technical support experience with EDI, IDoc processing, Vertex, Trax, and Tradesphere is preferred. Comprehensive understanding of SAP's customizing toolset, including the IMG. Detailed business understanding of sales and manufacturing processes in a plant environment. Comprehensive understanding of SAP architecture.

Preferred Qualifications That Set You Apart:
Bachelor's degree or equivalent experience. SAP certification in the SD, CS, or PS modules. Familiarity with project management methodologies and tools.

Our Culture & Commitment to You
At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives, because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together, are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, an Employee Assistance Program, employee resource groups, recognition, and much more. Our culture offers flexible time-off plans, including paid parental leave (maternal and paternal), vacation, and holiday leave.

About Us
Why Emerson? Our Commitment to Our People: At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world's most complex problems, for our customers, our communities, and the planet. You'll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you'll see firsthand that our people are at the center of everything we do. So, let's go. Let's think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let's go, together.

Accessibility Assistance or Accommodation
If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com

About Emerson
Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical, and advanced factory automation operate more sustainably while improving productivity, energy security, and reliability. With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources, and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you're an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you'll find your chance to make a difference with Emerson. Join our team – let's go! No calls or agencies please.

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


We are seeking a multi-skilled, multi-faceted Technical Development Lead with deep expertise in Large Language Models, Generative AI, and full-stack cloud-native application development to join our dynamic product development team. As a Development Lead, you will play a key role in developing cutting-edge products and innovative solutions for our clients, combining the power of LLMs, Generative AI, and Agentic AI with cloud-native full-stack application development. Your primary focus will be on driving bespoke product development to build creative and impactful solutions that enhance the product portfolio. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of technology, as well as for rapidly learning new skills and technology on the job, and will combine traditional product development and cloud-native app dev skills with modern and emerging Generative AI and LLM app development skills.

Job Description:

Responsibilities
Develop, implement, and optimize scalable AI-enabled products, cloud-native apps, and cloud solutions. Own technical delivery and execution of applications involving cloud platforms, cloud-native apps, and cloud AI services. Drive solution tech design and implementation across all layers of the application stack, including front end, back end, APIs, data, and AI services. Design and build enterprise products and full-stack applications on the MERN stack, with clear separation of concerns across layers. Design and build web apps and solutions that leverage LLM models and Generative AI workflows. Leverage multimodal AI capabilities supporting all content types and modalities, including text, imagery, audio, speech, and video. Constantly research and explore emerging trends and techniques in the field of generative AI and LLMs to stay at the forefront of innovation. Drive product development and delivery within tight timelines. Collaborate with full-stack developers, engineers, and quality engineers to develop and integrate solutions into existing enterprise products. Collaborate with technology leaders and cross-functional teams to develop and validate client requirements and rapidly translate them into working solutions.

Key Skills Required
Full-stack MERN app dev, front-end and back-end development, API dev, microservices. Cloud-native app dev and cloud solutions. LLMs and LLM app dev. AI agents and agentic AI workflows. Generative AI, multimodal AI, and creative AI. Working with text, imagery, speech, audio, and video AI.

Must-Have Capabilities
Strong expertise in the MERN stack (JavaScript), including client-side and server-side JavaScript. Strong expertise in Python-based development, including Python app dev for LLM integration. Well-rounded in both programming languages. Hands-on experience in front-end and back-end development, data processing and data integration, API integration, LLM app dev and LLM-enabled solutions, and multimodal AI models and tools.

JavaScript / MERN Stack Competencies
Minimum 4 years of hands-on experience working with full-stack MERN apps, using both client-side and server-side JavaScript. Strong experience in client-side JavaScript apps and building both static and dynamic web apps in JavaScript. Strong hands-on experience with the React.js framework, building stateful and stateless front-end apps using React.js components. Strong hands-on experience in server-side JavaScript, using frameworks like Node.js to build services and APIs. Good experience with microservices solutions and how to build them with Node.js.

Gen-AI / LLM App Dev with Python Competencies
Minimum 2 years of hands-on experience in Python development. Minimum 2 years of hands-on experience working with LLMs and LLM models. Strong experience integrating data, both internal and external datasets, and building data pipelines to ground LLMs in domain knowledge. Strong hands-on experience with data pre-processing and processing for LLM apps and solutions. Solid hands-on experience building end-to-end RAG pipelines and custom AI indexing solutions to ground LLMs and enhance LLM output. Good experience building AI- and LLM-enabled workflows. Hands-on experience integrating LLMs with external tools such as web search. Ability to leverage advanced concepts such as tool calling and function calling with LLM models (a conceptual sketch of this pattern follows this listing). Hands-on experience using LLMs for research use cases and research workflows to enable AI research assistants. Hands-on experience with conversational AI solutions and chat-driven experiences. Experience with multiple LLMs and models, primarily GPT-4o, GPT o1, and o3-mini, and preferably also Gemini, Claude Sonnet, etc. Experience and expertise in cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI, and preferably also AWS Bedrock and/or GCP Vertex AI. Experience with vector databases (Azure AI Search, AWS OpenSearch Serverless, pgvector, etc.). Hands-on experience with Assistants and their use in orchestrating with LLMs. Hands-on experience working with AI agents and agent services. Ability to implement LLMOps processes and LLM evaluation frameworks, and to manage Gen-AI apps and models across the lifecycle from prompt management to output evaluation.

Multimodal AI Competencies
Hands-on experience with intelligent document processing, document indexing, and document content extraction and querying using multimodal AI models. Hands-on experience using multimodal AI models and solutions for imagery and visual creative, including text-to-image, image-to-image, image composition, and image variations. Hands-on experience with computer vision and image processing using multimodal AI, for use cases such as object detection and automated captioning. Hands-on experience using multimodal AI for speech, including text-to-speech and pre-built vs. custom voices. Hands-on experience building voice-enabled and voice-activated experiences using speech AI and voice AI solutions. Hands-on experience leveraging APIs to orchestrate across multimodal AI models. Ability to lead design and development teams for full-stack MERN apps and products/solutions built on top of LLMs and LLM models.

Nice-to-Have Capabilities
MERN stack and cloud-native app dev: hands-on working experience with server-side JavaScript frameworks for building domain-driven microservices, including Nest.js and Express.js; BFF frameworks such as GraphQL; federated graph architectures; API management and API gateways; container apps and containerized environments; and web components and portable UI components.
Python / ML / LLM / Gen-AI app dev: hands-on experience building agentic AI workflows that enable iterative improvement of output; experience with both single-agent and multi-agent orchestration solutions and frameworks; experience with different agent communication and chaining patterns; the ability to leverage LLMs for reasoning and planning workflows that enable higher-order goals and automated orchestration across multiple apps and tools; the ability to leverage graph databases and knowledge graphs as an alternative to vector databases for more relevant semantic querying and outputs via LLM models; a good background in machine learning solutions; a good foundational understanding of Transformer models; some experience with custom ML model development and deployment; proficiency in deep learning frameworks such as PyTorch or Keras; and experience with cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry.

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Permanent
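As a framework-agnostic illustration of the tool-calling / function-calling concept named above, here is a conceptual Python sketch. The LLM client is mocked; in practice the structured tool call would come from an Azure OpenAI, Bedrock, or Vertex AI chat call, and all names here are invented.

```python
# Conceptual sketch of the tool-calling pattern: the model is given tool
# schemas, returns a structured call, the application executes it and feeds
# the result back. fake_llm() stands in for a real chat-completions call.
import json

def get_weather(city: str) -> str:
    # Hypothetical tool implementation.
    return json.dumps({"city": city, "forecast": "31°C, partly cloudy"})

TOOLS = {"get_weather": get_weather}

def fake_llm(user_message: str) -> dict:
    # Stand-in for the model deciding to call a tool; a real LLM would return
    # a structured tool call based on the tool schemas it was given.
    return {"tool": "get_weather", "arguments": {"city": "Pune"}}

def run_turn(user_message: str) -> str:
    decision = fake_llm(user_message)
    tool = TOOLS[decision["tool"]]
    result = tool(**decision["arguments"])
    # The tool result would normally be appended to the conversation and the
    # model asked to produce the final natural-language answer.
    return f"Tool '{decision['tool']}' returned: {result}"

print(run_turn("What's the weather like in Pune today?"))
```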

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Odisha, India

Remote


As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, with 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets, including threat events collected via telemetry data, associated metadata, IT asset information, contextual information about threat exposure based on additional processing, and more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse built and owned by the Data Platform team. The ingestion mechanisms include both batch and near-real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists, and threat analysts to design, implement, and maintain scalable ML pipelines used for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do
Help design, build, and facilitate adoption of a modern Data+ML platform. Modularize complex ML code into standardized and repeatable components. Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring. Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines. Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines (a minimal orchestration sketch follows this listing). Review code changes from data scientists and champion software development best practices. Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment.

What You'll Need
B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience. 3+ years of experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory), and with supervised/unsupervised approaches: how, why, and when labeled data is created and used. 3+ years of experience with ML platform tools such as Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, and Vertex AI. Experience building data platform products or features with Apache Spark, Flink, or comparable tools in GCP; experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.). Production experience with infrastructure-as-code tools such as Terraform and FluxCD. Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces that provide standardized and simplified ways for data scientists to utilize internal CrowdStrike tools. Expert-level experience with CI/CD frameworks such as GitHub Actions. Expert-level experience with containerization frameworks. Strong analytical and problem-solving skills, capable of working in a dynamic environment. Exceptional interpersonal and communication skills; ability to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.

Experience With The Following Is Desirable
Go, Iceberg, Pinot or another time-series/OLAP-style database, Jenkins, Parquet, and Protocol Buffers/gRPC.

Benefits Of Working At CrowdStrike
Remote-friendly and flexible work culture. Market leader in compensation and equity awards. Comprehensive physical and mental wellness programs. Competitive vacation and holidays for recharge. Paid parental and adoption leaves. Professional development opportunities for all employees regardless of level or role. Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections. Vibrant office culture with world-class amenities. Great Place to Work Certified™ across the globe.

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements.

If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
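To illustrate the workflow-orchestration work mentioned above, here is a minimal sketch of a data-prep, train, evaluate sequence with Apache Airflow (one of the orchestration tools the posting names), assuming Airflow 2.x. The DAG name and task bodies are placeholders, not CrowdStrike internals.

```python
# Minimal sketch: an Airflow DAG chaining three placeholder ML pipeline steps.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def prepare_features():
    print("building feature tables")

def train_model():
    print("training model")

def evaluate_model():
    print("evaluating model against a holdout set")

with DAG(
    dag_id="ml_experimentation_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule=None,   # triggered on demand
    catchup=False,
) as dag:
    prep = PythonOperator(task_id="prepare_features", python_callable=prepare_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)

    # Run the steps strictly in sequence.
    prep >> train >> evaluate
```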

Posted 2 weeks ago

Apply

0 years

0 Lacs

Greater Bengaluru Area

On-site


Job Description:
We are looking for a Data Scientist with expertise in Python, Azure Cloud, NLP, forecasting, and large-scale data processing. The role involves enhancing existing ML models, optimising embeddings, LDA models, RAG architectures, and forecasting models, and migrating data pipelines to Azure Databricks for scalability and efficiency.

Key Responsibilities:

Model Development & Optimisation
Train and optimise models for new data providers, ensuring seamless integration. Enhance models for dynamic input handling. Improve LDA model performance to handle a higher number of clusters efficiently. Optimise RAG (Retrieval-Augmented Generation) architecture to enhance recommendation accuracy for large datasets. Upgrade the Retrieval QA architecture for improved chatbot performance on large datasets.

Forecasting & Time Series Modelling
Develop and optimise forecasting models for marketing, demand prediction, and trend analysis. Implement time series models (e.g., ARIMA, Prophet, LSTMs) to improve business decision-making (a minimal ARIMA sketch follows this listing). Integrate NLP-based forecasting, leveraging customer sentiment and external data sources (e.g., news, social media).

Data Pipeline & Cloud Migration
Migrate the existing pipeline from Azure Synapse to Azure Databricks and retrain models accordingly (note: this is required only for the AUB role(s)). Address space and time complexity issues in embedding storage and retrieval on Azure Blob Storage. Optimise embedding storage and retrieval in Azure Blob Storage for better efficiency.

MLOps & Deployment
Implement MLOps best practices for model deployment on Azure ML, Azure Kubernetes Service (AKS), and Azure Functions. Automate model training, inference pipelines, and API deployments using Azure services.

Experience:
Experience in Data Science, Machine Learning, Deep Learning, and Generative AI. Design, architect, and execute end-to-end data science pipelines, including data extraction, data preprocessing, feature engineering, model building, tuning, and deployment. Experience leading a team and taking responsibility for project delivery. Experience building end-to-end machine learning pipelines, with expertise in developing CI/CD pipelines using Azure Synapse pipelines, Databricks, Google Vertex AI, and AWS. Experience developing advanced natural language processing (NLP) systems, specialising in building RAG (Retrieval-Augmented Generation) models using LangChain, and deploying RAG models to production. Expertise in building machine learning pipelines and deploying various models, such as forecasting models, anomaly detection models, market mix models, classification models, regression models, and clustering techniques. Maintaining GitHub repositories and cloud computing resources for effective and efficient version control, development, testing, and production. Developing proof-of-concept solutions and assisting in rolling these out to our clients.

Required Skills & Qualifications:
Hands-on experience with Azure Databricks, Azure ML, Azure Synapse, Azure Blob Storage, and Azure Kubernetes Service (AKS). Experience with forecasting models, time series analysis, and predictive analytics. Proficiency in Python (NumPy, Pandas, TensorFlow, PyTorch, Statsmodels, Scikit-learn, Hugging Face, FAISS). Experience with model deployment, API optimisation, and serverless architectures. Hands-on experience with Docker, Kubernetes, and MLflow for tracking and scaling ML models. Expertise in optimising time complexity, memory efficiency, and scalability of ML models in a cloud environment. Experience with LangChain or equivalent, RAG, and multi-agent generation.

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
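To make the forecasting responsibility above concrete, here is a minimal sketch of fitting an ARIMA model with statsmodels (one of the libraries the posting lists). The data is synthetic and the model order is illustrative only; in practice the order would be selected via diagnostics and compared against alternatives such as Prophet or LSTMs.

```python
# Minimal sketch: fit an ARIMA model on synthetic monthly demand data and
# forecast the next six months. Data, order, and horizon are illustrative.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with a linear trend plus noise.
idx = pd.date_range("2022-01-01", periods=36, freq="MS")
rng = np.random.default_rng(42)
y = pd.Series(100 + 2 * np.arange(36) + rng.normal(0, 5, 36), index=idx)

# Fit a simple ARIMA(1, 1, 1).
model = ARIMA(y, order=(1, 1, 1)).fit()

# Forecast the next 6 months.
forecast = model.forecast(steps=6)
print(forecast.round(1))
```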

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote


Job Title: MLOps Engineer (Remote)
Experience: 5+ Years
Location: Remote
Type: Full-time

About the Role:
We are seeking an experienced MLOps Engineer to design, implement, and maintain scalable machine learning infrastructure and deployment pipelines. You will work closely with Data Scientists and DevOps teams to operationalize ML models, optimize performance, and ensure seamless CI/CD workflows in cloud environments (Azure ML/AWS/GCP).

Key Responsibilities:
✔ ML Model Deployment: containerize ML models using Docker and deploy on Kubernetes; build end-to-end ML deployment pipelines for TensorFlow/PyTorch models; integrate with Azure ML (or AWS SageMaker/GCP Vertex AI).
✔ CI/CD & Automation: implement GitLab CI/CD pipelines for automated testing and deployment; manage version control using Git and enforce best practices.
✔ Monitoring & Performance: set up Prometheus + Grafana dashboards for model performance tracking (a minimal metrics sketch follows this listing); configure alerting systems for model drift, latency, and errors; optimize infrastructure for scalability and cost-efficiency.
✔ Collaboration: work with Data Scientists to productionize prototypes; document architecture and mentor junior engineers.

Skills & Qualifications:
Must-have: 5+ years in MLOps/DevOps, with 6+ years of total experience. Expertise in Docker, Kubernetes, CI/CD (GitLab CI/CD), and Linux. Strong Python scripting for automation (PySpark a plus). Hands-on experience with Azure ML (or AWS/GCP) for model deployment. Experience with ML model monitoring (Prometheus, Grafana, ELK Stack).
Nice-to-have: knowledge of MLflow, Kubeflow, or TF Serving. Familiarity with NVIDIA Triton Inference Server. Understanding of data pipelines (Airflow, Kafka).

Why Join Us?
💻 100% remote with flexible hours
🚀 Work on cutting-edge ML systems at scale
📈 Competitive salary + growth opportunities
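As a small, hedged illustration of the Prometheus-based monitoring this role describes, here is a sketch using the prometheus_client Python library. The metric names and the mocked model call are placeholders; a real setup would scrape this endpoint from Prometheus and chart it in Grafana.

```python
# Minimal sketch: expose model-serving metrics for Prometheus scraping.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Number of predictions served")
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency")

@LATENCY.time()          # records the duration of each call
def predict(features):
    PREDICTIONS.inc()
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real inference
    return sum(features)

if __name__ == "__main__":
    start_http_server(8000)   # metrics exposed at http://localhost:8000/metrics
    while True:
        predict([random.random() for _ in range(4)])
        time.sleep(1)
```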

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are seeking a multi-skilled, multi-faceted Technical Development Lead with deep expertise in Large Language Models, Generative AI, and full-stack cloud-native application development to join our dynamic product development team. As a Development Lead, you will play a key role in developing cutting-edge products and innovative solutions for our clients, combining the power of LLMs, Generative AI, and Agentic AI with cloud-native full-stack application development. Your primary focus will be on driving bespoke product development to build creative and impactful solutions that enhance the product portfolio. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of technology, as well as for rapidly learning new skills and technology on the job, and will combine traditional product development and cloud-native app dev skills with modern and emerging Generative AI and LLM app development skills.

Job Description:

Responsibilities
Develop, implement, and optimize scalable AI-enabled products, cloud-native apps, and cloud solutions. Own technical delivery and execution of applications involving cloud platforms, cloud-native apps, and cloud AI services. Drive solution tech design and implementation across all layers of the application stack, including front end, back end, APIs, data, and AI services. Design and build enterprise products and full-stack applications on the MERN stack, with clear separation of concerns across layers. Design and build web apps and solutions that leverage LLM models and Generative AI workflows. Leverage multimodal AI capabilities supporting all content types and modalities, including text, imagery, audio, speech, and video. Constantly research and explore emerging trends and techniques in the field of generative AI and LLMs to stay at the forefront of innovation. Drive product development and delivery within tight timelines. Collaborate with full-stack developers, engineers, and quality engineers to develop and integrate solutions into existing enterprise products. Collaborate with technology leaders and cross-functional teams to develop and validate client requirements and rapidly translate them into working solutions.

Key Skills Required
Full-stack MERN app dev, front-end and back-end development, API dev, microservices. Cloud-native app dev and cloud solutions. LLMs and LLM app dev. AI agents and agentic AI workflows. Generative AI, multimodal AI, and creative AI. Working with text, imagery, speech, audio, and video AI.

Must-Have Capabilities
Strong expertise in the MERN stack (JavaScript), including client-side and server-side JavaScript. Strong expertise in Python-based development, including Python app dev for LLM integration. Well-rounded in both programming languages. Hands-on experience in front-end and back-end development, data processing and data integration, API integration, LLM app dev and LLM-enabled solutions, and multimodal AI models and tools.

JavaScript / MERN Stack Competencies
Minimum 4 years of hands-on experience working with full-stack MERN apps, using both client-side and server-side JavaScript. Strong experience in client-side JavaScript apps and building both static and dynamic web apps in JavaScript. Strong hands-on experience with the React.js framework, building stateful and stateless front-end apps using React.js components. Strong hands-on experience in server-side JavaScript, using frameworks like Node.js to build services and APIs. Good experience with microservices solutions and how to build them with Node.js.

Gen-AI / LLM App Dev with Python Competencies
Minimum 2 years of hands-on experience in Python development. Minimum 2 years of hands-on experience working with LLMs and LLM models. Strong experience integrating data, both internal and external datasets, and building data pipelines to ground LLMs in domain knowledge. Strong hands-on experience with data pre-processing and processing for LLM apps and solutions. Solid hands-on experience building end-to-end RAG pipelines and custom AI indexing solutions to ground LLMs and enhance LLM output. Good experience building AI- and LLM-enabled workflows. Hands-on experience integrating LLMs with external tools such as web search. Ability to leverage advanced concepts such as tool calling and function calling with LLM models. Hands-on experience using LLMs for research use cases and research workflows to enable AI research assistants. Hands-on experience with conversational AI solutions and chat-driven experiences. Experience with multiple LLMs and models, primarily GPT-4o, GPT o1, and o3-mini, and preferably also Gemini, Claude Sonnet, etc. Experience and expertise in cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI, and preferably also AWS Bedrock and/or GCP Vertex AI. Experience with vector databases (Azure AI Search, AWS OpenSearch Serverless, pgvector, etc.). Hands-on experience with Assistants and their use in orchestrating with LLMs. Hands-on experience working with AI agents and agent services. Ability to implement LLMOps processes and LLM evaluation frameworks, and to manage Gen-AI apps and models across the lifecycle from prompt management to output evaluation.

Multimodal AI Competencies
Hands-on experience with intelligent document processing, document indexing, and document content extraction and querying using multimodal AI models. Hands-on experience using multimodal AI models and solutions for imagery and visual creative, including text-to-image, image-to-image, image composition, and image variations. Hands-on experience with computer vision and image processing using multimodal AI, for use cases such as object detection and automated captioning. Hands-on experience using multimodal AI for speech, including text-to-speech and pre-built vs. custom voices. Hands-on experience building voice-enabled and voice-activated experiences using speech AI and voice AI solutions. Hands-on experience leveraging APIs to orchestrate across multimodal AI models. Ability to lead design and development teams for full-stack MERN apps and products/solutions built on top of LLMs and LLM models.

Nice-to-Have Capabilities
MERN stack and cloud-native app dev: hands-on working experience with server-side JavaScript frameworks for building domain-driven microservices, including Nest.js and Express.js; BFF frameworks such as GraphQL; federated graph architectures; API management and API gateways; container apps and containerized environments; and web components and portable UI components.
Python / ML / LLM / Gen-AI app dev: hands-on experience building agentic AI workflows that enable iterative improvement of output; experience with both single-agent and multi-agent orchestration solutions and frameworks; experience with different agent communication and chaining patterns; the ability to leverage LLMs for reasoning and planning workflows that enable higher-order goals and automated orchestration across multiple apps and tools; the ability to leverage graph databases and knowledge graphs as an alternative to vector databases for more relevant semantic querying and outputs via LLM models; a good background in machine learning solutions; a good foundational understanding of Transformer models; some experience with custom ML model development and deployment; proficiency in deep learning frameworks such as PyTorch or Keras; and experience with cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry.

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Permanent

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Duties & Responsibilities:
Prepare U.S. state and local sales and use tax compliance filings, calculations, and supporting workpapers, while applying analytics to identify key drivers for variances. Prepare foreign Value Added Tax (VAT) and Goods and Services Tax (GST) compliance filings, calculations, and supporting workpapers, as needed. Partner with tax management and other key stakeholders on special projects, tax planning, audit defense, and process improvement. Provide internal sales and use tax training to various departments within the business. Support integration of sales and use tax processes for new business acquisitions. Support analytical decision-making on key strategic initiatives by developing data-driven solutions. Learn and support current BI solutions to enhance automation, analytics, and reporting. Collaborate with the tax team on U.S. and international tax projects, as needed. Build and maintain healthy working relationships with team members and internal business partners.

Job Qualifications
Graduate/Post Graduate in Tax, Accounting, Finance, or a related field of study. Must have 3-5 years of professional experience in tax compliance. Must have experience with U.S. multi-state and local sales and use tax compliance. Experience with a large U.S. public accounting firm preferred. Strong domestic sales and use tax technical skills and solid tax research skills. Must be proficient with technology, with an ability to apply technology tools to improve processes. Must be curious and maintain a passion for learning. Team player – must have the ability to dig into the details and help the team in any way needed. Confident verbal and written communication skills. Must be able to effectively communicate tax matters to non-tax professionals and partner with the business. Proven ability to meet deadlines and strong organizational skills. Experience using Microsoft applications, Adobe, and sales tax software such as OneSource or Vertex. Knowledge of and/or experience with OneSource Income Tax, GoSystems, Alteryx, or ERP systems such as SAP is a plus.

Other Job-Related Components
Collaborate with employees at all levels of the organization to support the business and ensure efficient and effective tax processes.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Join Us. About VOIS: VO IS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VO IS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone. What You’ll Do: We are seeking a highly experienced Solutions Architect to lead the design and implementation of end-to-end solutions across Employee Data, HR Dashboards, and People Analytics. This individual will play a key role in integrating data from SAP SuccessFactors into our Google Cloud-based data lake, designing ML-driven analytics models, and enabling data-driven decisions through Qlik dashboards, with a roadmap to transition into SAP Analytics Cloud (SAC). The ideal candidate will have deep expertise in both the HR domain and enterprise technology architecture, with the ability to connect business needs with scalable, efficient, and secure data and analytics solutions. Who You Are – key accountabilities and decision ownership: Strong expertise in SAP SuccessFactors (Employee Central, Talent modules, Workforce Analytics). Deep understanding of people data models, HR metrics, and employee lifecycle data. Proven experience with Google Cloud Platform, including BigQuery, Vertex AI, Dataflow, etc. Hands-on experience in designing and deploying machine learning models for HR use cases. Experience with Qlik for dashboarding and SAC (SAP Analytics Cloud) is preferred. Lead end-to-end solution design for employee data and people analytics use cases, ensuring alignment with business and IT strategy. Architect data flows from SAP SuccessFactors into the Google Cloud Platform for advanced analytics. Define and implement ML/AI-based models to derive actionable insights on workforce trends, talent, and engagement. Deliver analytics and dashboard solutions on time, with thought leadership on end-to-end systems architecture. Not a perfect fit? Worried that you don’t meet all the desired criteria exactly? At Vodafone we are passionate about empowering people and creating a workplace where everyone can thrive, whatever their personal or professional background. If you’re excited about this role but your experience doesn’t align exactly with every part of the job description, we encourage you to still apply as you may be the right candidate for this role or another opportunity. VOIS Equal Opportunity Employer Commitment, India: VO IS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society.
We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Who We Are: We are a leading international Telco, serving millions of customers. At Vodafone, we believe that connectivity is a force for good. If we use it for the things that really matter, it can improve people's lives and the world around us. Through our technology we empower people, connecting everyone regardless of who they are or where they live, and we protect the planet, whilst helping our customers do the same. Belonging at Vodafone isn't a concept; it's lived, breathed, and cultivated through everything we do. You'll be part of a global and diverse community, with many different minds, abilities, backgrounds and cultures. We're committed to increase diversity, ensure equal representation, and make Vodafone a place everyone feels safe, valued and included. If you require any reasonable adjustments or have an accessibility request as part of your recruitment journey, for example, extended time or breaks in between online assessments, please refer to https://careers.vodafone.com/application-adjustments/ for guidance. Together we can.
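As a point of reference for the SuccessFactors-to-BigQuery data-lake flow described in this posting, the sketch below shows the general shape of loading an HR extract into BigQuery and querying it for a dashboard metric. It is not the employer's actual pipeline; project, dataset, file and column names are hypothetical.

```python
# Illustrative sketch: load a SuccessFactors CSV extract into a BigQuery
# data-lake table, then run a simple headcount query for a dashboard tile.
from google.cloud import bigquery

client = bigquery.Client(project="hr-datalake-demo")
table_id = "hr-datalake-demo.people_analytics.employee_central"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,                  # infer schema from the extract
    write_disposition="WRITE_TRUNCATE",
)

with open("successfactors_employee_extract.csv", "rb") as f:
    client.load_table_from_file(f, table_id, job_config=job_config).result()

sql = f"""
    SELECT business_unit, COUNT(*) AS headcount
    FROM `{table_id}`
    WHERE employment_status = 'Active'
    GROUP BY business_unit
"""
for row in client.query(sql).result():
    print(row.business_unit, row.headcount)
```

In practice the extract step would typically be scheduled (Dataflow, Cloud Composer, or a SuccessFactors integration tool) rather than loaded from a local file.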

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job description: Job role - AI/ML Technical Architect. Experience – more than 12 years of experience. Location – Noida. Mode of Work - Hybrid. Shift - 2 PM to 10 PM. Key Responsibilities: Build scalable AI platforms that are customer-facing. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs. Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI. LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms. Possess deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models. Proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM. Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques. Achieve better latencies and accuracies in model results. Reduce training and resource requirements for fine-tuning LLM and LVM models. LLM Models and Use Cases: Have extensive knowledge of different LLM models. Provide insights on the applicability of each model based on use cases. Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases. DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph. Skill Matrix - LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama. LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI. Databases/Data warehouse: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery. Cloud Knowledge: AWS/Azure/GCP. DevOps (Knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus. Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert. Proficient in Python, SQL, JavaScript. About Company: Pattem Group is a conglomerate holding company headquartered in Bangalore, India. The companies under the umbrella of Pattem Group represent the essence of software product development, catering to global Fortune 500 companies and innovative startups.
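For context on the "experiment management" part of the ML pipeline responsibilities, the snippet below shows the shape of run tracking with MLflow, one of the tools named in the skill matrix. It is a generic, hedged sketch; experiment, parameter and metric names are hypothetical.

```python
# Illustrative sketch: experiment tracking for a retraining pipeline with MLflow.
import mlflow

mlflow.set_experiment("churn-model-retraining")

with mlflow.start_run(run_name="weekly-retrain"):
    # Record hyperparameters so runs are comparable across retrains.
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("batch_size", 256)

    # In a real pipeline these values come from the training loop.
    for epoch, val_accuracy in enumerate([0.81, 0.86, 0.89]):
        mlflow.log_metric("val_accuracy", val_accuracy, step=epoch)

    # Model binaries and evaluation reports would be logged as artifacts here,
    # e.g. mlflow.log_artifact("model/evaluation_report.html").
```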

Posted 2 weeks ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Role: GCP Cloud Architect Location: Hyderabad Notice period: Immediate joiners needed. Shift timings: US Time zones Work Mode: Work from Office Job description: Opportunity: We are seeking a highly skilled and experienced GCP Cloud Architect to join our dynamic technology team. You will play a crucial role in designing, implementing, and managing our Google Cloud Platform (GCP) infrastructure, with a primary focus on building a robust and scalable Data Lake in BigQuery. You will be instrumental in ensuring the reliability, security, and performance of our cloud environment, supporting critical healthcare data initiatives. This role requires strong technical expertise in GCP, excellent problem-solving abilities, and a passion for leveraging cloud technologies to drive impactful solutions within the healthcare domain. Responsibilities: Cloud Architecture & Design: Design and architect scalable, secure, and cost-effective GCP solutions, with a strong emphasis on BigQuery for our Data Lake. Define and implement best GCP infrastructure management, security, networking, and data governance practices. Develop and maintain comprehensive architectural diagrams, documentation, and standards. Collaborate with data engineers, data scientists, and application development teams to understand their requirements and translate them into robust cloud solutions. Evaluate and recommend new GCP services and technologies to optimize our cloud environment. Understand and implement the fundamentals of GCP, including resource hierarchy, projects, organizations, and billing. GCP Infrastructure Management: Manage and maintain our existing GCP infrastructure, ensuring high availability, performance, and security. Implement and manage infrastructure-as-code (IaC) using tools like Terraform or Cloud Deployment Manager. Monitor and troubleshoot infrastructure issues, proactively identifying and resolving potential problems. Implement and manage backup and disaster recovery strategies for our GCP environment. Optimize cloud costs and resource utilization, including BigQuery slot management. Collaboration & Communication: Work closely with cross-functional teams, including data engineering, data science, application development, security, and compliance. Communicate technical concepts and solutions effectively to both technical and non-technical stakeholders. Provide guidance and mentorship to junior team members. Participate in on-call rotation as needed. Develop and maintain thorough and reliable documentation of all cloud infrastructure processes, configurations, and security protocols. Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. Minimum of 5-8 years of experience in designing, implementing, and managing cloud infrastructure, with a strong focus on Google Cloud Platform (GCP). Proven experience in architecting and implementing Data Lakes on GCP, specifically using BigQuery. Hands-on experience with ETL/ELT processes and tools, with strong proficiency in Google Cloud Composer (Apache Airflow). Solid understanding of GCP services such as Compute Engine, Cloud Storage, Networking (VPC, Firewall Rules, Cloud DNS), IAM, Cloud Monitoring, and Cloud Logging. Experience with infrastructure-as-code (IaC) tools like Terraform or Cloud Deployment Manager. Strong understanding of security best practices for cloud environments, including identity and access management, data encryption, and network security. Excellent problem-solving, analytical, and troubleshooting skills. 
Strong communication, collaboration, and interpersonal skills. Bonus Points: Experience with Apigee for API management. Experience with containerization technologies like Docker and orchestration platforms like Cloud Run. Experience with Vertex AI for machine learning workflows on GCP. Familiarity with GCP Healthcare products and solutions (e.g., Cloud Healthcare API). Knowledge of healthcare data standards and regulations (e.g., HIPAA, HL7, FHIR). GCP Professional Architect certification. Experience with scripting languages (e.g., Python, Bash). Experience with Looker. Please send your resume to nitinkumar.b@apollohealthaxis.com
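Since the posting emphasises strong proficiency with Google Cloud Composer (Apache Airflow) for ETL/ELT into a BigQuery data lake, here is a minimal, hypothetical DAG sketch showing what such an orchestrated load looks like. Bucket, dataset and task names are assumptions for illustration only.

```python
# Illustrative sketch: a Cloud Composer (Apache Airflow) DAG that loads daily
# files from Cloud Storage into a BigQuery data-lake table.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_to_bigquery(**context):
    # A real DAG would call the BigQuery client or a GCS-to-BigQuery transfer
    # operator; kept as a stub so the sketch stays self-contained.
    print("Loading gs://raw-landing-bucket/claims/*.csv into analytics.claims_raw")


with DAG(
    dag_id="daily_claims_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["data-lake", "bigquery"],
) as dag:
    ingest = PythonOperator(
        task_id="load_claims_to_bigquery",
        python_callable=load_to_bigquery,
    )
```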

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

We are looking for a highly experienced Oracle Apps DBA with 15+ years of expertise in designing, implementing, and managing Oracle E-Business Suite (EBS), Oracle Cloud Applications, and Oracle Cloud Infrastructure (OCI), with a proven track record of ensuring high availability, performance, and security of Oracle databases. The ideal candidate is skilled in Oracle Fusion Middleware (FMW), Vertex tax compliance, Oracle APEX development, Demantra integration with non-Oracle systems, and cloud migration. Your expertise will enable us to deliver high-quality solutions that meet our clients' complex business needs.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

About the company MindBrain is a dynamic software company that integrates innovation, education, and strategic workforce solutions. As pioneers in cutting-edge technologies, we shape the future of digital transformation. Our commitment to education empowers individuals with the skills needed to thrive in a rapidly evolving landscape. Additionally, we connect businesses with the right talent at the right time, driving success through impactful collaborations. About the role MindBrain is seeking a highly experienced Senior Python Developer for a remote, contract-based position. This role is ideal for professionals passionate about AI/ML, cloud technologies, and building scalable backend solutions. You will be responsible for leading back-end development, contributing to data science initiatives, and architecting robust AI-driven systems. Responsibilities Lead back-end web development using Python and object-oriented programming principles Develop and manage robust and scalable applications and services Utilize both SQL and NoSQL databases effectively Work with data science libraries such as Pandas, Scikit-learn, TensorFlow, and PyTorch Leverage cloud-based AI/ML platforms including AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, and OpenAI Deploy and manage ML models in cloud environments Fine-tune models using a variety of AI/ML techniques and tools Build and manage agentic AI workflows Stay current with evolving trends in AI/ML and incorporate the latest advancements into development strategies Architect intelligent systems that utilize statistical modeling, machine learning, and deep learning for business forecasting and optimization Qualifications Fluency in Python and deep experience with SQL and NoSQL databases Proficiency with data science and deep learning libraries: Pandas, Scikit-learn, PyTorch, TensorFlow Experience deploying models on platforms like AWS SageMaker, GCP Vertex AI, Azure AI Studio, and OpenAI Hands-on experience in fine-tuning AI models and building agentic AI workflows Ability to architect scalable AI solutions based on business requirements Strong understanding of statistical modeling, machine learning, and deep learning concepts Self-motivated with a strong desire to stay current with advancements in the AI domain Excellent problem-solving, communication, and collaboration skills Equal opportunity MindBrain is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all team members. Show more Show less
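As a rough illustration of the "statistical modeling, machine learning, and deep learning for business forecasting" responsibility above, here is a small scikit-learn baseline of the kind a backend/ML role might stand up before reaching for deep learning. The data, feature meanings and model choice are all hypothetical.

```python
# Illustrative sketch: a quick regression baseline with scikit-learn on
# synthetic data standing in for business-forecasting features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                  # e.g. price, promo, seasonality, traffic
y = X @ np.array([3.0, -1.5, 2.0, 0.5]) + rng.normal(scale=0.3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```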

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Mumbai, Maharashtra, India. Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience. 10 years of experience in modernizing legacy applications and applying cloud native development methodologies and tools, including container technology. Experience with software development, code technology migration programs, code version migrations. Experience in one or more languages/frameworks (e.g., Python, Java, GoLang, JavaScript, TypeScript, PyTorch, Jupyter/Colab notebooks). Experience with popular frameworks such as Spring Boot and Model-View-Controller (MVC) frameworks. Preferred qualifications: Experience in Google Cloud Platform (GCP) and Vertex AI. Experience with modern software development life-cycle methodologies and tools including Integrated Development Environments (IDEs), Continuous Integration/Continuous Deployment (CI/CD), container orchestration and operations. Ability to engage with C-level or executive business leaders and influence decisions. Ability to create prototypes/demos, and articulate this to a customer audience, while integrating with existing systems and processes. Excellent consulting skills, with the ability to deliver technical presentations while leading detailed discovery and planning sessions that are aligned with the customer with defined scope and success criteria. About the job: The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners. In this role, you will work with technical business teams as a Gemini Code Assist subject matter expert. You will help customers and partners understand the power of Gemini Code Assist, explaining technical differentiators, helping customers deploy the tool in the development ecosystem, building use cases for Artificial Intelligence (AI) assisted software development tooling such as legacy code migration, version migration, etc., and problem-solving any potential roadblocks in the day-to-day software development life-cycle. You will work with customers and product development to shape the future of Gemini Code Assist. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities: Work with the team to identify and qualify business opportunities, understanding customer technical objections, and develop the strategy to resolve technical blockers.
Support the technical relationship with Google’s customers, including product and solution briefings, proof-of-concept work, and partner with product management to prioritize solutions impacting customer adoption to Google Cloud. Create and deliver technical content (e.g., best practices, tutorials, code samples, presentations) to enable customers and internal teams to leverage Google Cloud Code Assist tool. Travel to customer locations and represent Google Cloud at conferences, meetups, and other industry events to share knowledge, network with peers, and stay current on the trends. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form . Show more Show less

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Mumbai, Maharashtra, India. Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience. 10 years of experience in modernizing legacy applications and applying cloud native development methodologies and tools, including container technology. Experience with software development, code technology migration programs, code version migrations. Experience in one or more languages/frameworks (e.g., Python, Java, GoLang, JavaScript, TypeScript, PyTorch, Jupyter/Colab notebooks). Experience with popular frameworks such as Spring Boot and Model-View-Controller (MVC) frameworks. Preferred qualifications: Experience in Google Cloud Platform (GCP) and Vertex AI. Experience with modern software development life-cycle methodologies and tools including Integrated Development Environments (IDEs), Continuous Integration/Continuous Deployment (CI/CD), container orchestration and operations. Ability to engage with C-level or executive business leaders and influence decisions. Ability to create prototypes/demos, and articulate this to a customer audience, while integrating with existing systems and processes. Excellent consulting skills, with the ability to deliver technical presentations while leading detailed discovery and planning sessions that are aligned with the customer with defined scope and success criteria. About the job: The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners. In this role, you will work with technical business teams as a Gemini Code Assist subject matter expert. You will help customers and partners understand the power of Gemini Code Assist, explaining technical differentiators, helping customers deploy the tool in the development ecosystem, building use cases for Artificial Intelligence (AI) assisted software development tooling such as legacy code migration, version migration, etc., and problem-solving any potential roadblocks in the day-to-day software development life-cycle. You will work with customers and product development to shape the future of Gemini Code Assist. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities: Work with the team to identify and qualify business opportunities, understanding customer technical objections, and develop the strategy to resolve technical blockers.
Support the technical relationship with Google’s customers, including product and solution briefings, proof-of-concept work, and partner with product management to prioritize solutions impacting customer adoption to Google Cloud. Create and deliver technical content (e.g., best practices, tutorials, code samples, presentations) to enable customers and internal teams to leverage Google Cloud Code Assist tool. Travel to customer locations and represent Google Cloud at conferences, meetups, and other industry events to share knowledge, network with peers, and stay current on the trends. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form . Show more Show less

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Gurugram, Haryana, India; Bengaluru, Karnataka, India; Mumbai, Maharashtra, India. Minimum qualifications: Bachelor's degree in Computer Science, Mathematics, a related technical field, or equivalent practical experience. 10 years of experience in modernizing legacy applications and applying cloud native development methodologies and tools, including container technology. Experience with software development, code technology migration programs, code version migrations. Experience in one or more languages/frameworks (e.g., Python, Java, GoLang, JavaScript, TypeScript, PyTorch, Jupyter/Colab notebooks). Experience with popular frameworks such as Spring Boot and Model-View-Controller (MVC) frameworks. Preferred qualifications: Experience in Google Cloud Platform (GCP) and Vertex AI. Experience with modern software development life-cycle methodologies and tools including Integrated Development Environments (IDEs), Continuous Integration/Continuous Deployment (CI/CD), container orchestration and operations. Ability to engage with C-level or executive business leaders and influence decisions. Ability to create prototypes/demos, and articulate this to a customer audience, while integrating with existing systems and processes. Excellent consulting skills, with the ability to deliver technical presentations while leading detailed discovery and planning sessions that are aligned with the customer with defined scope and success criteria. About the job: The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape how businesses of all sizes use technology to connect with customers, employees and partners. In this role, you will work with technical business teams as a Gemini Code Assist subject matter expert. You will help customers and partners understand the power of Gemini Code Assist, explaining technical differentiators, helping customers deploy the tool in the development ecosystem, building use cases for Artificial Intelligence (AI) assisted software development tooling such as legacy code migration, version migration, etc., and problem-solving any potential roadblocks in the day-to-day software development life-cycle. You will work with customers and product development to shape the future of Gemini Code Assist. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities: Work with the team to identify and qualify business opportunities, understanding customer technical objections, and develop the strategy to resolve technical blockers.
Support the technical relationship with Google’s customers, including product and solution briefings, proof-of-concept work, and partner with product management to prioritize solutions impacting customer adoption to Google Cloud. Create and deliver technical content (e.g., best practices, tutorials, code samples, presentations) to enable customers and internal teams to leverage Google Cloud Code Assist tool. Travel to customer locations and represent Google Cloud at conferences, meetups, and other industry events to share knowledge, network with peers, and stay current on the trends. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form . Show more Show less

Posted 3 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. SAP MM Position: SAP Senior MM Consultant. Required Qualifications: Bachelor’s degree (or equivalent experience), preferably Engineering. Minimum two end-to-end implementation projects along with experience in Support / Roll out / Upgrade projects. 6 to 9 years of relevant experience. Professional Mandatory Requirements: Strong knowledge of business processes, implementation methodology, consumables procurement process, imports procurement, source determination, demand flow, STO, automatic A/C determination, automatic PO conversion, pricing procedure, output determination, batch management, sub-contracting, third-party sub-contracting, A/C entries for document posting, serialization, consignment, pipeline, invoice planning, automatic PO procedures, evaluated receipt settlement, EDI associated with Order/Delivery/Confirmation/Invoice, and Material Master data migration with LSMW/BDC. Added Advantage: Domain experience will be an added advantage. Experience with taxation components like Vertex will be an added advantage. Knowledge of ABAP debugging. SAP MM certification will be an added advantage. Knowledge of integration modules like WM / QM / PP / SD will be an added advantage. Roles-Responsibilities: Strong hands-on configuration experience in Material Management. Integration with WM / QM / PP / SD modules and with external applications. Responsible for planning and executing SAP Implementation / Development / Support activities with regard to SAP – Material Management, and the ability to lead the team. Understand client requirements, provide solutions and functional specifications, and configure the system accordingly. Ability to create presentation/workshop material for the Blueprint that needs to be conveyed and be able to present it to the client. Ability to create process flows in Microsoft Visio for the client's proposed business processes. Ability to create Process Definition Document / Design Document (PDD) and Business Process Procedure (BPP) for the solutions provided. Ability to configure SAP MM and deliver work products / packages conforming to the client's standards and requirements. General: Should have good written and verbal communication skills. Should be able to handle the client individually. EY | Building a better working world: EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

About the Role: We are looking for an experienced and visionary SPM – Generative AI Product to lead the development and scaling of next-generation AI products. In this role, you will drive strategy, execution, and innovation across AI-powered experiences, collaborating cross-functionally with engineering, data science, research, and design teams. You will shape the roadmap and bring cutting-edge AI solutions to life across key business verticals. Key Responsibilities: Define and own the product strategy and roadmap for Generative AI initiatives. Translate deep tech capabilities into scalable products and customer-centric solutions. Collaborate with researchers and engineers to commercialize LLM, NLP, vision, and multi-modal models. Lead the end-to-end product lifecycle: discovery, MVP, launch, growth, and scaling. Identify new market opportunities and develop AI-based solutions for internal and external customers. Ensure products meet ethical, security, compliance, and data privacy standards. Drive go-to-market and monetization strategies for AI products across industries (e.g., finance, retail, healthcare). Stay up to date with the rapidly evolving Gen AI ecosystem, including open-source tools, startups, and research. Qualifications: 4+ years in product management and 2+ years in AI/ML-based products, ideally with GenAI applications. Strong technical understanding of large language models (LLMs), transformers, and AI infrastructure. Proven experience shipping GenAI or ML-based products in production environments. Deep understanding of user experience in AI-driven interfaces (e.g., chatbots, copilots, content generation). Excellent cross-functional leadership, storytelling, and stakeholder management skills. Experience in working with cloud platforms (AWS, Azure, GCP) and AI/ML toolchains (e.g., LangChain, Hugging Face, OpenAI, Vertex AI). Bachelor’s or Master’s degree in Computer Science, Engineering, AI/ML, or related fields. MBA or equivalent is a plus. Preferred Experience: Experience in building AI platforms or developer APIs. Exposure to data governance, bias mitigation, and AI ethics frameworks. Background in startups or high-growth tech organizations is a strong plus.

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Linkedin logo

Python Backend Developer (FastAPI + LLM/ML Integrations) Location: Remote / Hybrid Type: Contract / Full-time Experience: 3+ Years Role Overview: We are seeking a skilled Python Backend Developer to design, build, and optimize RESTful APIs for seamless interaction with Large Language Models (LLMs), machine learning services, and cloud-based AI tools. You will work closely with frontend teams to deliver robust API services, manage database interactions, and integrate AI/ML components into scalable backend systems. The ideal candidate will have strong expertise in FastAPI, Python-based API development, text extraction (PDFs, documents), and cloud-based ML services (AWS Bedrock, Vertex AI, etc.). Key Responsibilities: Design and develop high-performance REST APIs using FastAPI for frontend consumption. Integrate with LLMs (OpenAI, Anthropic, Llama2, etc.) and ML models (RAG pipelines, embeddings, fine-tuning). Extract and process text/data from PDFs, documents, and unstructured sources using Python libraries (PyPDF2, pdfplumber, unstructured.io, etc.). Work with databases (PostgreSQL, MongoDB, Redis) for efficient data storage/retrieval. Implement authentication (JWT/OAuth), rate limiting, and API security best practices. Collaborate with frontend teams to optimize API responses and ensure smooth integration. Deploy and manage APIs on AWS/GCP (Lambda, API Gateway, EC2, Docker). Work with AI/ML tools like LangChain, LlamaIndex, Weaviate, or Pinecone for retrieval-augmented workflows. Write clean, scalable, and well-documented code with unit/integration testing (Pytest). Technology Stack: Backend: Python, FastAPI, Flask (optional), async programming. AI/ML Integrations: OpenAI API, Hugging Face, AWS Bedrock, LangChain, LlamaIndex. Database: PostgreSQL, MongoDB, Redis, ORMs (SQLAlchemy, Pydantic). Document Processing: PyPDF2, pdfplumber, unstructured.io, OCR tools (Tesseract). Cloud & DevOps: AWS (Lambda, API Gateway, S3), Docker, CI/CD (GitHub Actions). API Tools: Postman, Swagger/OpenAPI, RESTful standards. Version Control: Git, GitHub/GitLab. Required Skills & Experience: 3+ years of Python backend development (FastAPI/Flask/Django). Strong experience in building REST APIs for frontend/UI consumption. Familiarity with LLM APIs, RAG pipelines, and AI/ML cloud services. Knowledge of text extraction from PDFs/documents and data preprocessing. Experience with relational & NoSQL databases (PostgreSQL, MongoDB). Understanding of JWT, OAuth, API security, and rate limiting. Basic knowledge of AWS/GCP cloud services (Lambda, S3, EC2). Ability to work in an Agile/remote team environment. Nice to Have: Experience with asynchronous programming (async/await in FastAPI). Knowledge of WebSockets for real-time updates. Familiarity with ML model deployment (SageMaker, Vertex AI). Exposure to frontend frameworks (React, Next.js) for better API collaboration. Exposure to HIPAA-compliant projects is a must.
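To make the FastAPI-plus-document-extraction combination in this posting concrete, here is a minimal, hedged sketch of an upload endpoint that extracts PDF text with pdfplumber for downstream LLM use. The route, field names and response shape are hypothetical, not the employer's actual API.

```python
# Illustrative sketch: FastAPI route that accepts a PDF upload and returns its text.
import io

import pdfplumber
from fastapi import FastAPI, File, UploadFile

app = FastAPI()


@app.post("/extract-text")
async def extract_text(file: UploadFile = File(...)):
    pdf_bytes = await file.read()
    pages = []
    with pdfplumber.open(io.BytesIO(pdf_bytes)) as pdf:
        for page in pdf.pages:
            pages.append(page.extract_text() or "")
    # The concatenated text could now be chunked, embedded, or passed to an LLM.
    return {
        "filename": file.filename,
        "num_pages": len(pages),
        "text": "\n".join(pages),
    }
```

Run locally with `uvicorn main:app --reload` (assuming the file is saved as main.py) and test the endpoint from Swagger UI at /docs.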

Posted 3 weeks ago

Apply

Exploring Vertex Jobs in India

India has seen a rise in demand for professionals with expertise in Vertex, a cloud-based tax technology solution. Companies across various industries are actively seeking individuals with skills in Vertex to manage their tax compliance processes efficiently. If you are a job seeker looking to explore opportunities in this field, read on to learn more about the Vertex job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Chennai

Average Salary Range

The salary range for Vertex professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years in the industry can earn upwards of INR 12-15 lakhs per annum.

Career Path

In the Vertex domain, a typical career progression path may include roles such as Tax Analyst, Tax Consultant, Tax Manager, and Tax Director. Professionals may advance from Junior Tax Analyst to Senior Tax Analyst, and eventually take on leadership roles as Tax Managers or Directors.

Related Skills

Alongside expertise in Vertex, professionals in this field are often expected to have skills in tax compliance, tax regulations, accounting principles, and data analysis. Knowledge of ERP systems and experience in tax software implementation can also be beneficial.

Interview Questions

  • What is Vertex and how is it used in tax compliance? (basic)
  • Can you explain the difference between sales tax and value-added tax? (basic; a short worked example follows this list)
  • How do you stay updated on changes in tax laws and regulations? (basic)
  • Describe a challenging tax compliance project you worked on and how you overcame obstacles. (medium)
  • How do you ensure accuracy in tax calculations using Vertex? (medium)
  • What are some common challenges faced in implementing Vertex solutions for clients? (medium)
  • Can you walk us through a recent tax audit you were involved in? (medium)
  • How do you handle disputes with tax authorities regarding tax filings? (advanced)
  • In your opinion, what are the key factors to consider when choosing a tax technology solution like Vertex? (advanced)
  • How do you approach training and educating team members on using Vertex effectively? (advanced)
  • Describe a scenario where you had to customize Vertex to meet specific client requirements. (advanced)
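For the sales tax versus VAT question above, a quick worked example (using an illustrative 10% rate) helps frame an answer. With a retail sales tax, a manufacturer sells to a retailer for ₹100 with no tax charged, the retailer sells to the consumer for ₹150, and the full ₹15 of tax is collected only at that final sale. Under a VAT, the manufacturer charges ₹10 of tax on its ₹100 sale and remits it; the retailer charges ₹15 on its ₹150 sale, reclaims the ₹10 input credit, and remits the ₹5 difference. The tax authority receives the same ₹15 in total, but it is collected in stages on the value added at each step of the chain.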

Closing Remark

As you explore job opportunities in the Vertex domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare thoroughly for technical questions and demonstrate your understanding of tax compliance processes. With dedication and continuous learning, you can build a successful career in Vertex roles. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies