5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications:
Bachelor's degree or equivalent practical experience.
5 years of experience in product management or a related technical role.
2 years of experience taking technical products from conception to launch.

Preferred qualifications:
Experience in data centers and Cloud infrastructure.
Experience in workflow engines, ETL and data pipelines, or enterprise logging infrastructure.
Knowledge of designing and building large-scale distributed workflow and orchestration systems.

About The Job
At Google, we put our users first. The world is always changing, so we need Product Managers who are continuously adapting and excited to work on products that affect millions of people every day. In this role, you will work cross-functionally to guide products from conception to launch by connecting the technical and business worlds. You can break down complex problems into steps that drive product development.
One of the many reasons Google consistently brings innovative, world-changing products to market is the collaborative work we do in Product Management. Our team works closely with creative engineers, designers, marketers, and others to help design and develop technologies that improve access to the world's information. We're responsible for guiding products throughout the execution cycle, focusing specifically on analyzing, positioning, packaging, promoting, and tailoring our solutions to our users.
In this role, you will lead the development and evolution of Data Center Automation Platforms, encompassing workflow engines, data warehouses, and logging infrastructure. You will own the strategy, roadmap, and delivery of cutting-edge solutions that empower automation platforms, improve operational efficiency, and enable data-driven decisions at scale. You will collaborate with cross-functional teams to define product requirements, align stakeholder priorities, and ensure seamless integration of platforms with the broader data center ecosystem.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network - while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.

Responsibilities
Define and articulate the product goal and strategy for data center automation platforms and focused workflow orchestration systems that handle operational tasks such as machine state management.
Develop and maintain a comprehensive roadmap, balancing short-term deliverables with long-term strategic goals. Prioritize features and initiatives based on impact, feasibility, and stakeholder feedback.
Conduct customer research, Critical User Journey (CUJ) analysis, and pain-point synthesis to identify opportunities for automation and improved data visibility.
Collaborate with stakeholders (e.g., Engineering teams, Data Center operators, and business units) to define requirements and refine solutions.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsibilities
Manage Data: Extract, clean, and structure both structured and unstructured data.
Coordinate Pipelines: Utilize tools such as Airflow, Step Functions, or Azure Data Factory to orchestrate data workflows (see the sketch following this listing).
Deploy Models: Develop, fine-tune, and deploy models using platforms like SageMaker, Azure ML, or Vertex AI.
Scale Solutions: Leverage Spark or Databricks to handle large-scale data processing tasks.
Automate Processes: Implement automation using tools like Docker, Kubernetes, CI/CD pipelines, MLflow, Seldon, and Kubeflow.
Collaborate Effectively: Work alongside engineers, architects, and business stakeholders to address and resolve real-world problems efficiently.

Qualifications
3+ years of hands-on experience in MLOps (4-5 years of overall software development experience).
Extensive experience with at least one major cloud provider (AWS, Azure, or GCP).
Proficiency in using Databricks, Spark, Python, SQL, TensorFlow, PyTorch, and Scikit-learn.
Expertise in debugging Kubernetes and creating efficient Dockerfiles.
Experience in prototyping with open-source tools and scaling solutions effectively.
Strong analytical skills, humility, and a proactive approach to problem-solving.

Preferred Qualifications
Experience with SageMaker, Azure ML, or Vertex AI in a production environment.
Commitment to writing clean code, creating clear documentation, and maintaining concise pull requests.

Skills: SQL, Kubeflow, Spark, Docker, Databricks, ML, GCP, MLflow, Kubernetes, AWS, PyTorch, Azure, CI/CD, TensorFlow, scikit-learn, Seldon, Python, MLOps
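The orchestration requirement above (Airflow, Step Functions, or Azure Data Factory) is the kind of work this role centres on. As a purely illustrative, minimal sketch - assuming Apache Airflow 2.4+ is installed, with the DAG id, task names, and placeholder task bodies invented for the example - a small extract-then-train pipeline might be wired up like this:

```python
# Minimal Airflow sketch: two placeholder tasks chained into a daily pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_clean(**context):
    # Placeholder: pull raw records and return cleaned rows (pushed to XCom).
    return [{"feature": 1.0, "label": 0}]


def train_model(**context):
    # Placeholder: fit a model on the cleaned data and log it (e.g., to MLflow).
    rows = context["ti"].xcom_pull(task_ids="extract_and_clean")
    print(f"training on {len(rows)} rows")


with DAG(
    dag_id="example_ml_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_and_clean", python_callable=extract_and_clean)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    extract >> train
```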
Posted 1 week ago
1.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
· Hands-on experience with data tools and technologies is a must.
· Partial design experience is acceptable, but the core focus should be on strong data skills.
· Will be supporting the pre-sales team from a hands-on technical perspective.
· GCP experience: Looker / BigQuery / Vertex - any of these, with 6 months to 1 year of experience.

Requirements
On day one we'll expect you to...
Own the modules and take complete ownership of the project.
Understand the scope, design and business objective of the project and articulate it in the form of a design document.
Strong experience with Google Cloud Platform data services, including BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, and GenAI (Gemini, Imagen, Veo).
Experience in implementing data governance on GCP.
Familiarity with integrating GCP services with other platforms like Snowflake; hands-on Snowflake project experience is a plus.
Experienced coder in Python, SQL, ETL, and orchestration tools.
Experience with containerized solutions using Google Kubernetes Engine.
Good communication skills to interact with internal teams and customers.
Expertise in PySpark (both batch and real-time), Kafka, SQL, and data querying tools (a short PySpark sketch follows this listing).
Experience working with a team, continuously monitoring, contributing hands-on as an individual contributor, and helping the team deliver their work as you deliver yours.
Experience working with large volumes of data in a distributed environment, keeping parallelism and concurrency in mind and ensuring performant and resilient systems.
Optimize the deployment architecture to reduce job run-times and resource utilization.
Develop and optimize data warehouses given the schema design.
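As a small illustration of the PySpark batch work this listing calls for - assuming pyspark is installed, and with the file path and column names invented for the example - a simple aggregation job might look like:

```python
# Illustrative PySpark batch job: read a CSV, aggregate, and write Parquet output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-aggregation-example").getOrCreate()

orders = spark.read.csv("/tmp/orders.csv", header=True, inferSchema=True)

# Aggregation that parallelizes naturally across partitions.
daily_revenue = (
    orders
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

daily_revenue.write.mode("overwrite").parquet("/tmp/daily_revenue")
spark.stop()
```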
Posted 1 week ago
5.0 years
4 - 7 Lacs
Thiruvananthapuram
On-site
Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence.

Role Overview:
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.

What we are looking for in an ideal candidate:
Design and manage CI/CD pipelines for both software applications and machine learning workflows.
Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar (see the MLflow sketch after this listing).
Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
Build robust monitoring, logging, and alerting systems for AI applications.
Manage containerized services with Docker and orchestration platforms like Kubernetes.
Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
Ensure system reliability, scalability, and security across all environments.

Preferred Skills: What skills do you need?
5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
Experience with ML pipelines, model versioning, and ML monitoring tools.
Scripting skills in Python, Bash, or similar for automation tasks.
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
Understanding of ML lifecycle management and reproducibility.

Preferred Qualifications:
Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
Exposure to data versioning, feature stores, and model registries.
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
Background in software engineering, data engineering, or ML research is a bonus.

What We Offer:
Work on cutting-edge AI platforms and infrastructure.
Cross-functional collaboration with top ML, research, and product teams.
Competitive compensation package - no constraints for the right candidate.
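For the "deploy and monitor ML models in production using tools like MLflow" line above, a hedged, minimal sketch of experiment tracking and model registration might look like the following - assuming MLflow 2.x and scikit-learn are installed and a tracking server is reachable; the tracking URI, experiment name, and model name are placeholders:

```python
# Minimal MLflow sketch: log a metric and register a toy model so downstream
# deployment tooling can pick it up.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")  # assumption: local tracking server
mlflow.set_experiment("demo-experiment")

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model makes it addressable from a model registry.
    mlflow.sklearn.log_model(model, artifact_path="model", registered_model_name="demo-classifier")

# A serving process could later load the latest registered version, e.g.:
# loaded = mlflow.pyfunc.load_model("models:/demo-classifier/latest")
```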
Posted 1 week ago
5.0 years
0 Lacs
Pune
On-site
Pune, India | Technology | Full time | 6/8/2025 | J00168128

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Equifax is currently looking for an exceptional candidate to help with production support operations for the Oracle Billing & Revenue Management (BRM) platform. The candidate will be primarily responsible for production support and involved in all phases of the SDLC, including detailed design, development, unit/dev integration testing, and QA support. The candidate is expected to help triage and be involved with any production issues that arise on a day-to-day basis, should be hands-on in the production support area, and should be able to perform with minimal supervision.

Duties:
Perform all production support activities, including analyzing issue tickets, resolving issues, and conducting root cause analysis as required.
Ensure application systems are in compliance with security and audit policies and procedures.
Be involved during the architecture phase of projects and provide technical input as required.
Provide support for Oracle BRM systems for various projects around the globe.
Provide support for detailed design (application/system/network/DB) as necessitated by the project while ensuring complete architectural compliance.
Ensure proper unit testing and/or dev integration testing is carried out and is 100% automated to help create a continuous integration environment.
Set and maintain very high standards in design, code, and build quality, and continuously strive to improve on those standards.
Assist QA and Production Support in troubleshooting technical issues and develop code fixes.
Prepare reports, manuals, and other documentation on the status, operation, and maintenance of software.
Follow the SLA for issues with respect to severity.
Establish a strategy of continuous delivery risk management that enables proactive decisions and actions throughout the delivery life cycle.
Measure and improve delivery productivity for all P1 and P2 support engineers.
Participate in architecture, design, and code reviews with the software development teams.
Collaborate with other support engineers to plan and organize the development of our systems.
Proactively identify issues within the system or within international BU operations and/or infrastructure, security concerns, and data concerns, and create a remediation plan to solve the issue permanently.
Proactively create tickets/escalations when needs are identified to correct recurrent issues with BUs, and modernize technology in the application stack.
Support, manage, optimize, and monitor all profiles, rules, configuration, certificates, and software licenses in all environments and take appropriate action in the event of non-compliance with security requirements.
Other duties as assigned.

Qualifications:
A Bachelor's degree in Computer Science or a related discipline with a significant software development component.
5-7 years of production support / software development experience using C/C++ and/or Java.
6+ years of experience with Oracle BRM (Portal Infranet/Integrate Billing Solution) 7.x is a must.
Experience with BRM PCM C/Java development and customizations.
Experience configuring and using tools like Oracle Mediation Controller and integrating with third-party apps such as Vertex (O Series), payment processing systems (Chase Paymentech preferred), invoice extraction systems, and Oracle EBS (R12) is required.
Experience in automating jobs is a big plus.
Knowledge of the data model and experience working with data warehouse feeds is required.
Experience with Oracle RDBMS database software and Oracle WebLogic.
Experience with Unix/Linux operating systems and Bash/Korn shell scripting.
Solid communication, organizational, and project management skills are required.
Experience with data migration, import, and legacy conversion is valuable.
Proven debugging and problem-solving skills.

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life's pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 1 week ago
5.0 years
0 Lacs
Pune
On-site
Pune, India | Technology | Full time | 6/8/2025 | J00168129

Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Equifax is currently looking for an exceptional candidate to help with production support operations for the Oracle Billing & Revenue Management (BRM) platform. The candidate will be primarily responsible for production support and involved in all phases of the SDLC, including detailed design, development, unit/dev integration testing, and QA support. The candidate is expected to help triage and be involved with any production issues that arise on a day-to-day basis, should be hands-on in the production support area, and should be able to perform with minimal supervision.

Duties:
Perform all production support activities, including analyzing issue tickets, resolving issues, and conducting root cause analysis as required.
Ensure application systems are in compliance with security and audit policies and procedures.
Be involved during the architecture phase of projects and provide technical input as required.
Provide support for Oracle BRM systems for various projects around the globe.
Provide support for detailed design (application/system/network/DB) as necessitated by the project while ensuring complete architectural compliance.
Ensure proper unit testing and/or dev integration testing is carried out and is 100% automated to help create a continuous integration environment.
Set and maintain very high standards in design, code, and build quality, and continuously strive to improve on those standards.
Assist QA and Production Support in troubleshooting technical issues and develop code fixes.
Prepare reports, manuals, and other documentation on the status, operation, and maintenance of software.
Follow the SLA for issues with respect to severity.
Establish a strategy of continuous delivery risk management that enables proactive decisions and actions throughout the delivery life cycle.
Measure and improve delivery productivity for all P1 and P2 support engineers.
Participate in architecture, design, and code reviews with the software development teams.
Collaborate with other support engineers to plan and organize the development of our systems.
Proactively identify issues within the system or within international BU operations and/or infrastructure, security concerns, and data concerns, and create a remediation plan to solve the issue permanently.
Proactively create tickets/escalations when needs are identified to correct recurrent issues with BUs, and modernize technology in the application stack.
Support, manage, optimize, and monitor all profiles, rules, configuration, certificates, and software licenses in all environments and take appropriate action in the event of non-compliance with security requirements.
Other duties as assigned.

Qualifications:
A Bachelor's degree in Computer Science or a related discipline with a significant software development component.
5-7 years of production support / software development experience using C/C++ and/or Java.
6+ years of experience with Oracle BRM (Portal Infranet/Integrate Billing Solution) 7.x is a must.
Experience with BRM PCM C/Java development and customizations.
Experience configuring and using tools like Oracle Mediation Controller and integrating with third-party apps such as Vertex (O Series), payment processing systems (Chase Paymentech preferred), invoice extraction systems, and Oracle EBS (R12) is required.
Experience in automating jobs is a big plus.
Knowledge of the data model and experience working with data warehouse feeds is required.
Experience with Oracle RDBMS database software and Oracle WebLogic.
Experience with Unix/Linux operating systems and Bash/Korn shell scripting.
Solid communication, organizational, and project management skills are required.
Experience with data migration, import, and legacy conversion is valuable.
Proven debugging and problem-solving skills.

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life's pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru
On-site
Minimum qualifications:
Bachelor's degree in Computer Science, Electrical Engineering, a related field, or equivalent practical experience.
2 years of experience with network equipment and network protocols testing and debugging.
Experience in test automation, test methodologies, writing test plans, and creating test cases using Python, C++, or Golang (a small pytest sketch follows this listing).

Preferred qualifications:
Master's degree or PhD in Computer Science or equivalent practical experience.
5 years of experience in software development and testing.
Experience with network equipment, network protocol testing, debugging, large-network troubleshooting, and deployment.

About the job
Our computational challenges are so big and unique we can't just buy our hardware; we've got to make it ourselves. Our Platforms Team designs and builds the hardware, software and networking technologies that power all of Google's services. As a Networking Test Engineer, you make sure that our massive and growing network is operating at its peak potential. You have experience with complex networking equipment, a deep understanding of networking protocols, test design and implementation chops, and a background in IP network design. It's your job to make sure Google's cutting-edge technology can perform at scale.
The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network - while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud's Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers.

Responsibilities
Design, develop, and execute test plans and cases for Google Software Defined Networking (SDN) networks, infrastructure, and services, and maintain lab test beds, test infrastructure, and existing test automation environments (hardware and software).
Identify, document, and track network defects and performance issues; implement various simulation techniques to replicate and assess network behavior and performance.
Collaborate with cross-functional teams to identify, troubleshoot, and resolve network problems, triage automated regression failures, provide failure analysis, and manage software releases to production.
Utilize testing tools to assess network system performance and reliability, analyze test results, and generate detailed reports on network performance and reliability.
Participate in the design and implementation of automated testing solutions, and serve as a technical resource to junior team members for simple to intermediate technical problems (e.g., lab infrastructure or installations).

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
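As a rough illustration of the Python test-automation skills this posting asks for, the sketch below uses only pytest and the standard library; the target hosts and latency threshold are invented placeholders, not Google test infrastructure:

```python
# Illustrative pytest case: verify TCP reachability and rough connect latency.
import socket
import time

import pytest

TARGETS = [("example.com", 443), ("example.org", 443)]  # placeholder endpoints


@pytest.mark.parametrize("host,port", TARGETS)
def test_tcp_reachability_and_latency(host, port):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=5):
            pass
    except OSError as exc:
        pytest.fail(f"could not open TCP connection to {host}:{port}: {exc}")
    elapsed_ms = (time.monotonic() - start) * 1000
    # Arbitrary illustrative threshold; a real test plan would derive this from SLOs.
    assert elapsed_ms < 2000, f"connection to {host}:{port} took {elapsed_ms:.0f} ms"
```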
Posted 1 week ago
5.0 years
9 Lacs
Bengaluru
On-site
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward - always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a Data Scientist at Kyndryl you are the bridge between business problems and innovative solutions, using a powerful blend of well-defined methodologies, statistics, mathematics, domain expertise, consulting, and software engineering. You'll wear many hats, and each day will present a new puzzle to solve, a new challenge to conquer. You will dive deep into the heart of our business, understanding its objectives and requirements - viewing them through the lens of business acumen, and converting this knowledge into a data problem. You'll collect and explore data, seeking underlying patterns and initial insights that will guide the creation of hypotheses.

Responsibilities:
Lead the development and implementation of AI/ML, Generative AI and LLM projects, ensuring alignment with business objectives.
Design and deploy Proof of Concepts (POCs) and Points of View (POVs) across various industry verticals, demonstrating the potential of Generative AI applications.
Engage effectively with customers, showcasing and demonstrating the relevance and impact of Generative AI applications in their businesses.
Collaborate with cross-functional teams to integrate AI/ML solutions into cloud environments (Azure, GCP, AWS, etc.).

In this role, you will embark on a transformative process of business understanding, data understanding, and data preparation. Utilizing statistical and mathematical modelling techniques, you'll have the opportunity to create models that defy convention - models that hold the key to solving intricate business challenges. With an acute eye for accuracy and generalization, you'll evaluate these models to ensure they not only solve business problems but do so optimally. Additionally, you're not just building and validating models - you're deploying them as code to applications and processes, ensuring that the models you've selected sustain their business value throughout their lifecycle.
Your expertise doesn't stop at data; you'll become intimately familiar with our business processes and have the ability to navigate their complexities, identifying issues and crafting solutions that drive meaningful change in these domains. You will develop and apply standards and policies that protect our organization's most valuable asset - ensuring that data is secure, private, accurate, available, and, most importantly, usable. Your mastery extends to data management, migration, strategy, change management, and policy and regulation. If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important - you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused - someone who prioritizes customer success in their work. And finally, you're open and borderless - naturally inclusive in how you work with others.

Required Technical and Professional Experience:
Minimum of 5 years of experience in Data Science and Machine Learning, with expertise in NLP, Generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture.
Familiarity with business processes in 1-2 domains (e.g., Financial, Telecom, Retail, Manufacturing) to quickly understand requirements and develop solutions.
Graduate/Postgraduate in computer science, computer engineering, or equivalent, with a minimum of 10 years of experience in the IT industry.
Past experience in responding to or solutioning for RFPs, customer proposals, and customer presentations/orals.
Strong understanding of Transformers, LLMs, fine tuning, agents, and RAG techniques.
Experience using LLM models on cloud platforms (Azure OpenAI, AWS Bedrock, GCP Vertex AI).
Experience in AI/ML, with a focus on Generative AI and Large Language Models.
Proven track record in working with major cloud platforms (Azure, GCP, AWS).
Understanding of how to deploy LLMs on cloud/on-premise and use APIs to build industry solutions.
Strong knowledge in programming, specifically in Python / R.
Well equipped to understand machine learning algorithms and versatile enough to implement them in pure NumPy, TensorFlow, or PyTorch as required (see the short NumPy sketch after this listing).
Good understanding of mathematics fundamentals such as statistics, probability, linear algebra, and calculus.
Experience in data mining and providing data insights using visualization.
Knowledge of MySQL or NoSQL databases will be an added advantage.

Preferred Technical and Professional Experience
Certification in one or more of the hyperscalers (Azure, AWS, Google Cloud Platform).
Open Certified Data Scientist (Open CDS) - L2/L3.
Familiarity with Agentic AI frameworks.

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you - and everyone next to you - the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter - wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
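The NumPy sketch referenced in the requirements above: a toy logistic-regression trainer written directly against NumPy on synthetic data. It is illustrative only; the data, learning rate, and iteration count are arbitrary and nothing here is Kyndryl code.

```python
# Pure-NumPy logistic regression via plain gradient descent on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # 200 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + rng.normal(scale=0.5, size=200) > 0).astype(float)

w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(500):
    z = X @ w + b
    p = 1.0 / (1.0 + np.exp(-z))                   # sigmoid
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == y.astype(bool))
print(f"training accuracy: {accuracy:.2f}, learned weights: {np.round(w, 2)}")
```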
Posted 1 week ago
3.0 - 4.0 years
4 - 6 Lacs
Ahmedabad
On-site
We are looking for a skilled Machine Learning Engineer with hands-on experience deploying models on Google Cloud Platform (GCP) using Vertex AI. This role involves enabling real-time and batch model inferencing based on specific business requirements, with a strong focus on production-grade ML deployments.

Experience: 3 to 4 years

Key Responsibilities:
* Deploy machine learning models on GCP using Vertex AI (a deployment sketch follows this listing).
* Design and implement real-time and batch inference pipelines.
* Monitor model performance, detect drift, and manage lifecycle.
* Ensure adherence to model governance best practices and support ML-Ops workflows.
* Collaborate with cross-functional teams to support Credit Risk, Marketing, and Customer Service use cases, especially within the retail banking domain.
* Develop scalable and maintainable code in Python and SQL.
* Work with diverse datasets, perform feature engineering, and build, train, and fine-tune advanced predictive models.
* Contribute to model deployment in the lending space.

Required Skills & Experience:
* Strong expertise in Python and SQL.
* Proficient with ML libraries and frameworks such as scikit-learn, pandas, NumPy, spaCy, CatBoost, etc.
* In-depth knowledge of GCP Vertex AI and ML pipeline orchestration.
* Experience with ML-Ops and model governance.
* Exposure to use cases in retail banking - Credit Risk, Marketing, and Customer Service.
* Experience working with structured and unstructured data.

Nice to Have:
* Prior experience deploying models in the lending domain.
* Understanding of regulatory considerations in financial services.

Job Type: Full-time
Schedule: Day shift
Work Location: In person
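A hedged sketch of the real-time versus batch inference pattern this role describes on Vertex AI, assuming the google-cloud-aiplatform SDK is installed and credentials are configured; the project, region, bucket paths, and serving container below are placeholders, not values from the posting:

```python
# Illustrative Vertex AI sketch: upload a model, deploy for online prediction,
# and submit a batch prediction job.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="asia-south1")

# Upload a trained model artifact so Vertex AI can serve it.
model = aiplatform.Model.upload(
    display_name="credit-risk-model",
    artifact_uri="gs://my-bucket/models/credit-risk/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Real-time inference: deploy to an endpoint and call predict().
endpoint = model.deploy(machine_type="n1-standard-4")
print(endpoint.predict(instances=[[0.1, 0.2, 0.3]]))

# Batch inference: score a file of instances asynchronously instead.
batch_job = model.batch_predict(
    job_display_name="credit-risk-batch-scoring",
    gcs_source="gs://my-bucket/batch-input.jsonl",
    gcs_destination_prefix="gs://my-bucket/batch-output/",
    machine_type="n1-standard-4",
)
```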
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
SAP Technical Configuration Development and Testing
Demonstrates a solid level of subject matter knowledge in one or more SAP modules, both technical configuration and functional process.
Responsible for the technical configuration of one or more SAP modules.
Leads efforts with other SAP Software Engineers to validate the desired performance of cross-module configuration.
Demonstrates expertise in the establishment of various SAP master data settings.
Demonstrates proficiency and skill in the area of unit testing of any configuration performed, either alone or in conjunction with other Software Engineers.
Provides estimates of configuration tasks to be completed.
Strong understanding of ASAP Methodology and other methodologies (i.e. Waterfall, Agile, etc.).
Demonstrates a strong working knowledge of tools implemented to support the software engineering function.
Ability to mentor/train other Software Engineers on the various items above.
May be responsible for reviewing and approving the configuration or documentation of other Software Engineers.
Communicates the impacts of new requirements in terms of effort, time and cost based on a solid understanding of current requirements and established technical configuration and functional business process built in SAP.

Main areas for SAP R2R are:
Classic and New General Ledger, including profit center accounting.
Controlling experience (cost center accounting, Project System and COPA).
Invoicing.
Accounts Payable.
Accounts Receivable.
Automatic payment program.
Electronic bank statement processing, including DMEE structure.
Tax code configuration. Experience with Vertex for Sales and Use tax is desirable but not required.
Asset Accounting.

Technical
Has an understanding of the SAP architecture, operating systems and technologies in order to facilitate solutions for WK.
General understanding of ABAP development and the SAP Basis function.
Specific understanding of SAP user exits and scripts relevant to the modules of expertise.

Analysis, Design & Requirement Gathering
May be responsible for the software requirements gathering for a project in the absence of an SAP Business Analyst.
Consistently applies a logical and analytical thought process to assigned tasks.
Develops multiple technical solution designs for solving the identified business problem or opportunity and works with team members to identify pros and cons of technical design options to assist in selection of the best model for meeting business and technical system requirements.
Is responsible for providing estimates on specific features within a project.
Responsible for the creation of various pieces of documentation, such as (but not limited to) the SAP Blueprint, elements of the design documentation, test cases, SAP process documents and SAP training documents, and their delivery in a timely manner.
Responsible for modeling and business process mapping and the creation of use cases and process flows.
Ability to mentor/train other Software Engineers on any of the items above.
May be responsible for reviewing and approving any configuration or project documentation artifacts created.
Awareness of project management practices; demonstrates ability to fulfill project management tasks if necessary.
Communicates the impacts of new requirements in terms of effort, time and cost based on a solid understanding of current requirements and established technical configuration and business process built in SAP.
Works with other business areas and team members to coordinate interdependencies and resolve issues.
Demonstrates strong ability to drive and lead work on complex and/or high-visibility projects.

Customer Interaction
Interacts with WK internal customers and others within WK to understand business needs and share SAP module knowledge, both technical and functional.
Provides input into the development of customers' project planning.
Strong ability to work with internal customers and SAP Business Analysts to analyze a problem, think creatively, offer a variety of technical and functional solution options, and influence and negotiate appropriate option selections in the best interest of the internal customers' area of business.

Communication
Responsible for the effective leadership and facilitation of design sessions and other meetings with peers and clients.
Demonstrates a strong ability to utilize effective interviewing skills.
Provides proactive communication to project and department management on status and issues.
Consistently demonstrates effective interpersonal and verbal communication skills.
Consistently demonstrates effective documentation skills.
Consistently demonstrates effective negotiation and conflict resolution skills with minimal guidance.
Demonstrates effectiveness in presenting to large audiences.
Applies listening skills to gain a thorough understanding of what is being communicated and proactively solicits pertinent information.
Demonstrates the ability to restate and clarify what was heard in order to validate mutual understanding.
Demonstrates effectiveness and excellence in all areas of communication listed above with peers, clients, business partners and senior management/executives.
Provides input for performance reviews as requested.
May participate in candidate interviews.

Leadership & Coordination
Works toward professional self-improvement through continued training and education.
Commitment to the success of a team.
Demonstrates ability to work in a lead role on support projects, implementation projects, research, etc. with teams of various size and makeup.
Able to mentor other SAP Software Engineers on any of the items in any of the categories above.
Ability to engage effectively in, and be a driver of, multiple projects simultaneously.

Teamwork
Shares expertise and experience to help other team members.
Works cooperatively with others to accomplish corporate, department and project goals/objectives.
Interacts with third-party consultants, onsite or offshore, on all aspects of SAP development and support.
Exhibits behavior that demonstrates Wolters Kluwer core values - Fairness, Excellence, Collaboration, Integrity, Success.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Equifax is currently looking for an exceptional candidate to help with production support operations for the Oracle Billing & Revenue Management (BRM) platform. The candidate will be primarily responsible for production support and involved in all phases of the SDLC, including detailed design, development, unit/dev integration testing, and QA support. The candidate is expected to help triage and be involved with any production issues that arise on a day-to-day basis, should be hands-on in the production support area, and should be able to perform with minimal supervision.

Duties:
Perform all production support activities, including analyzing issue tickets, resolving issues, and conducting root cause analysis as required.
Ensure application systems are in compliance with security and audit policies and procedures.
Be involved during the architecture phase of projects and provide technical input as required.
Provide support for Oracle BRM systems for various projects around the globe.
Provide support for detailed design (application/system/network/DB) as necessitated by the project while ensuring complete architectural compliance.
Ensure proper unit testing and/or dev integration testing is carried out and is 100% automated to help create a continuous integration environment.
Set and maintain very high standards in design, code, and build quality, and continuously strive to improve on those standards.
Assist QA and Production Support in troubleshooting technical issues and develop code fixes.
Prepare reports, manuals, and other documentation on the status, operation, and maintenance of software.
Follow the SLA for issues with respect to severity.
Establish a strategy of continuous delivery risk management that enables proactive decisions and actions throughout the delivery life cycle.
Measure and improve delivery productivity for all P1 and P2 support engineers.
Participate in architecture, design, and code reviews with the software development teams.
Collaborate with other support engineers to plan and organize the development of our systems.
Proactively identify issues within the system or within international BU operations and/or infrastructure, security concerns, and data concerns, and create a remediation plan to solve the issue permanently.
Proactively create tickets/escalations when needs are identified to correct recurrent issues with BUs, and modernize technology in the application stack.
Support, manage, optimize, and monitor all profiles, rules, configuration, certificates, and software licenses in all environments and take appropriate action in the event of non-compliance with security requirements.
Other duties as assigned.

Qualifications:
A Bachelor's degree in Computer Science or a related discipline with a significant software development component.
5-7 years of production support / software development experience using C/C++ and/or Java.
6+ years of experience with Oracle BRM (Portal Infranet/Integrate Billing Solution) 7.x is a must.
Experience with BRM PCM C/Java development and customizations.
Experience configuring and using tools like Oracle Mediation Controller and integrating with third-party apps such as Vertex (O Series), payment processing systems (Chase Paymentech preferred), invoice extraction systems, and Oracle EBS (R12) is required.
Experience in automating jobs is a big plus.
Knowledge of the data model and experience working with data warehouse feeds is required.
Experience with Oracle RDBMS database software and Oracle WebLogic.
Experience with Unix/Linux operating systems and Bash/Korn shell scripting.
Solid communication, organizational, and project management skills are required.
Experience with data migration, import, and legacy conversion is valuable.
Proven debugging and problem-solving skills.
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 Days Work From Office)
Website: www.cyfuture.com

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, with strong leadership capabilities and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
Lead and mentor a high-performing AI/ML team.
Design and execute AI/ML strategies aligned with business goals.
Collaborate with product and engineering teams to identify impactful AI opportunities.
Build, train, fine-tune, and deploy ML models in production environments.
Manage operations of LLMs and other AI models using modern cloud and MLOps tools.
Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun).
Handle containerization and orchestration using Docker and Kubernetes.
Optimize GPU/TPU resources for training and inference tasks.
Develop efficient RAG pipelines with low latency and high retrieval accuracy.
Automate CI/CD workflows for continuous integration and delivery of ML systems.

Key Skills & Expertise
1. Cloud Computing & Deployment
Proficiency in AWS, Google Cloud, or Azure for scalable model deployment.
Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML.
Expertise in Docker and Kubernetes for containerized deployments.
Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.

2. Machine Learning & Deep Learning
Strong command of frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost.
Experience with MLOps tools for integration, monitoring, and automation.
Expertise in pre-trained models, transfer learning, and designing custom architectures.

3. Programming & Software Engineering
Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development.
Backend/API development with FastAPI, Flask, or Django (a minimal serving sketch follows this listing).
Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery).
Familiarity with CI/CD pipelines (GitHub Actions, Jenkins).

4. Scalable AI Systems
Proven ability to build AI-driven applications at scale.
Handle large datasets, high-throughput requests, and real-time inference.
Knowledge of distributed computing: Apache Spark, Dask, Ray.

5. Model Monitoring & Optimization
Hands-on experience with model compression, quantization, and pruning.
A/B testing and performance tracking in production.
Knowledge of model retraining pipelines for continuous learning.

6. Resource Optimization
Efficient use of compute resources: GPUs, TPUs, CPUs.
Experience with serverless architectures to reduce cost.
Auto-scaling and load balancing for high-traffic systems.

7. Problem-Solving & Collaboration
Translate complex ML models into user-friendly applications.
Work effectively with data scientists, engineers, and product teams.
Write clear technical documentation and architecture reports.
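The serving sketch referenced under "Programming & Software Engineering" above: a minimal FastAPI endpoint wrapping a stand-in scikit-learn model. It is illustrative only and not Cyfuture's service; the route, request schema, and synthetic training data are invented.

```python
# Minimal FastAPI model-serving example with a stand-in model trained at import time.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

app = FastAPI(title="demo-model-service")

# Stand-in model; a real service would load a versioned artifact from a registry.
X, y = make_classification(n_samples=500, n_features=4, random_state=42)
model = LogisticRegression(max_iter=500).fit(X, y)


class PredictRequest(BaseModel):
    features: list[float]  # this toy model expects exactly 4 feature values


@app.post("/predict")
def predict(request: PredictRequest) -> dict:
    proba = model.predict_proba(np.array([request.features]))[0, 1]
    return {"probability": float(proba)}

# Run locally with, e.g.:  uvicorn demo_service:app --reload
```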
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a highly skilled Development Lead with expertise in Generative AI and Large Language Models to join our dynamic team. As a Development Lead, you will play a key role in developing cutting-edge AI models and systems for our clients. Your primary focus will be on driving innovation and leveraging generative AI techniques to create impactful solutions. The ideal candidate will have a strong technical background and a passion for pushing the boundaries of AI technology.

Job Description:

Responsibilities:
Develop and implement creative experiences, campaigns, apps, and digital products, leveraging generative AI technologies at their core.
Successfully lead and deliver projects involving Cloud Gen-AI platforms and Cloud AI services, data pre-processing, Cloud AI PaaS solutions, LLMs, base foundation models, and fine-tuned models, working with a variety of different LLMs and LLM APIs.
Conceptualize, design, build and develop experiences and solutions which demonstrate the minimum required functionality within tight timelines.
Collaborate with creative technology leaders and cross-functional teams to test the feasibility of new ideas, help refine and validate client requirements, translate them into working prototypes, and from there into scalable Gen-AI solutions.
Research and explore emerging trends and techniques in the field of generative AI and LLMs to stay at the forefront of innovation.
Research and explore new products, platforms, and frameworks in the field of generative AI on an ongoing basis and stay on top of this very dynamic, evolving field.
Design and optimize Gen-AI apps for efficient data processing and model leverage.
Implement LLMOps processes, and manage Gen-AI apps and models across the lifecycle from prompt management to results evaluation.
Evaluate and fine-tune models to ensure high performance and accuracy.
Collaborate with engineers to develop and integrate AI solutions into existing systems.
Stay up-to-date with the latest advancements in the field of Gen-AI and contribute to the company's technical knowledge base.

Must-Have:
Strong expertise in Python development and the Python dev ecosystem, including various frameworks/libraries for front-end and back-end Python dev, data processing, API integration, and AI/ML solution development.
Minimum 2 years of hands-on experience working with Generative AI applications and solutions.
Minimum 2 years of hands-on experience working with Large Language Models.
Solid understanding of Transformer models and how they work.
Reasonable understanding of Diffusion models and how they work.
Hands-on experience building production solutions using a variety of different LLMs and LLM APIs.
Experience with multiple LLMs and models, including GPT-4o, Gemini, Claude, Llama, etc.
Deep experience and expertise in Cloud Gen-AI platforms, services, and APIs, including Azure OpenAI, AWS Bedrock, and/or GCP Vertex AI.
Solid hands-on experience working with enterprise RAG technologies and solution frameworks, including LangChain, LlamaIndex, etc.
Solid hands-on experience developing end-to-end RAG pipelines (a minimal retrieval sketch follows this listing).
Solid hands-on experience with agent-driven Gen-AI architectures and solutions, and working with AI agents.
Hands-on experience with single-agent and multi-agent orchestration solutions.
Solid hands-on experience with AI and LLM workflows.
Experience with LLM model registries (Hugging Face), LLM APIs, embedding models, etc.
Experience with vector databases (Azure AI Search, AWS Kendra, FAISS, Milvus, etc.).
Experience in data preprocessing and post-processing model/results evaluation.
Hands-on experience with Diffusion models and AI Art models, including SDXL, DALL-E 3, Adobe Firefly, and Midjourney, is highly desirable.
Hands-on experience with image processing and creative automation at scale, using AI models.
Hands-on experience with image and media transformation and adaptation at scale, using AI Art and Diffusion models.
Hands-on experience with dynamic creative use cases, using AI Art and Diffusion models.
Hands-on experience with fine-tuning LLM models at scale.
Hands-on experience with fine-tuning Diffusion models, including fine-tuning techniques such as LoRA for AI Art models.
Hands-on experience with AI speech models and services, including text-to-speech and speech-to-text.
Ability to lead design and development teams for full-stack Gen-AI apps and products/solutions built on LLMs and Diffusion models.
Ability to lead design and development for creative experiences and campaigns built on LLMs and Diffusion models.

Nice-to-Have:
Good background and foundation in machine learning solutions and algorithms.
Experience with designing, developing, and deploying production-grade machine learning solutions.
Experience with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs).
Experience with custom ML model development and deployment.
Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
Strong knowledge of machine learning algorithms and their practical applications.
Experience with Cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry.
Hands-on experience with video generation models.
Hands-on experience with 3D generation models.

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Consultant
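The retrieval sketch referenced in the Must-Have list above: the retrieval half of a RAG pipeline, reduced to TF-IDF similarity so it runs with scikit-learn alone. In the production setups this listing describes, an embedding model and a vector store (FAISS, Milvus, Azure AI Search, etc.) would replace the vectorizer and in-memory matrix, and the assembled prompt would be sent to the chosen LLM; the documents and query here are invented.

```python
# Illustrative retrieval step for a RAG pipeline, using TF-IDF in place of embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Vertex AI exposes Gemini models through a managed API.",
    "AWS Bedrock provides access to Claude and other foundation models.",
    "LoRA is a parameter-efficient technique for fine-tuning diffusion and language models.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = scores.argsort()[::-1][:k]
    return [documents[i] for i in ranked]


question = "Which service offers Claude models?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be passed to an LLM for generation
```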
Posted 1 week ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🚨 We are Hiring 🚨
https://grhombustech.com/jobs/job-description-senior-test-automation-lead-playwright-ai-ml-focus/
Job Description
Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Experience: 10 - 12 years
Job Type: Full-Time
Company Overview:
GRhombus Technologies Pvt Ltd is a pioneer in software solutions, specializing in Test Automation, Cyber Security, Full Stack Development, DevOps, Salesforce, Performance Testing, and Manual Testing. GRhombus delivery centres are located in India at Hyderabad, Chennai, Bengaluru, and Pune. In the Middle East, we are located in Dubai. Our partner offices are located in the USA and the Netherlands.
About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.
Key Responsibilities:
Test Automation Framework Design & Implementation
Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.
AI/ML-Specific Testing Strategy
Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).
End-to-End Test Coverage
Lead the implementation of end-to-end automation for:
Web interfaces (React, Angular, or other SPA frameworks)
Backend services (REST, GraphQL, WebSockets)
ML model integration endpoints (real-time inference APIs, batch pipelines)
Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.
CI/CD & Tooling Integration
Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.
Quality Engineering & Leadership
Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
Lead and mentor a team of automation and QA engineers across multiple projects.
Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.
Agile & Cross-Functional Collaboration
Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability.
Review feature specs, AI/ML model update notes, and data schemas for impact analysis.
Required Skills and Qualifications:
Technical Skills:
Strong hands-on expertise with Playwright (TypeScript/JavaScript).
Experience building custom automation frameworks and utilities from scratch.
Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.
Domain Knowledge:
Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.
Leadership & Communication:
Proven experience leading QA/Automation teams (4+ engineers).
Strong documentation, code review, and stakeholder communication skills.
Experience collaborating in Agile/SAFe environments with cross-functional teams.
Preferred Qualifications:
Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
Familiarity with Test Data Management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
Experience with GraphQL, Kafka, or event-driven architecture testing.
QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).
Education:
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical discipline.
Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.
Why Join Us?
At GRhombus, we are redefining quality assurance and software testing with cutting-edge methodologies and a commitment to innovation. As a test automation lead, you will play a pivotal role in shaping the future of automated testing, optimizing frameworks, and driving efficiency across our engineering ecosystem. Be part of a workplace that values experimentation, learning, and professional growth. Contribute to an organisation where your ideas drive innovation and make a tangible impact.
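The posting's framework stack is TypeScript/JavaScript; to keep the examples in this write-up in one language, here is a comparable sketch using Playwright's official Python API. The URL and selectors are hypothetical, and the assertion illustrates the keyword-style check the posting describes for non-deterministic chat output, rather than an exact-match assertion.

```python
from playwright.sync_api import sync_playwright

def test_chat_reply_mentions_refund_policy():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://app.example.com/chat")                 # hypothetical app URL
        page.fill("#prompt-input", "How long do refunds take?")   # hypothetical selectors
        page.click("#send-button")
        reply = page.locator(".bot-message").last.inner_text(timeout=15000)
        # LLM output is non-deterministic: assert on required keywords, not exact text.
        assert any(k in reply.lower() for k in ("refund", "7-10", "business days"))
        browser.close()
```

A golden-dataset variant of the same idea replays a fixed set of prompts and checks each reply against tolerance rules instead of literal strings.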
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Google Cloud Platform Architecture
Good to have skills: Google Cloud Machine Learning Services
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education
Summary: As an AI/ML technical lead, you will be responsible for developing applications and systems that utilize AI tools and Cloud AI services. Your typical day will involve applying CCAI and GenAI models as part of the solution, utilizing deep learning, neural networks and chatbots.
Roles & Responsibilities:
- Design and develop CCAI applications and systems utilizing Google Cloud Machine Learning Services, Dialogflow CX, and Agent Assist.
- Develop and implement chatbot solutions that integrate seamlessly with CCAI and other Cloud services.
- Integrate Dialogflow agents with various platforms, such as Google Assistant, Facebook Messenger, Slack, and websites. Hands-on experience with IVR integration and telephony systems such as Twilio, Genesys, and Avaya.
- Integrate with IVR systems; proficiency in webhook setup and API integration.
- Develop Dialogflow CX flows, pages, and webhooks, as well as playbooks and the integration of tools into playbooks.
- Create agents in Agent Builder and integrate them into the end-to-end pipeline using Python.
- Apply GenAI/Vertex AI models as part of the solution, utilizing deep learning, neural networks, chatbots, and image processing.
- Work with Google Vertex AI for building, training and deploying custom AI models to enhance chatbot capabilities.
- Implement and integrate backend services (using Google Cloud Functions or other APIs) to fulfill user queries and actions.
- Document technical designs, processes, and setup for various integrations.
- Experience with programming languages such as Python/Node.js.
Professional & Technical Skills:
- Must-Have Skills: hands-on CCAI/Dialogflow CX experience and an understanding of generative AI.
- Good-to-Have Skills: Cloud Data Architecture; Cloud ML/PCA/PDE certification.
- Strong understanding of AI/ML algorithms and techniques.
- Experience with chatbots, generative AI models, and prompt engineering.
- Experience with cloud or on-premises application pipelines of production-ready quality.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Google Cloud Machine Learning Services/Gen AI/Vertex AI/CCAI.
- The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions.
- A 15-year full-time education is required.
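As an illustration of the webhook setup this role calls for, below is a minimal Flask handler in the Dialogflow CX webhook style. The tag name, parameter, and reply text are hypothetical; the request/response field names follow the common CX webhook samples (camelCase on the inbound request, snake_case accepted on the response) and should be verified against the current API reference.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/dialogflow-webhook", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    tag = body.get("fulfillmentInfo", {}).get("tag", "")        # which fulfillment invoked us
    params = body.get("sessionInfo", {}).get("parameters", {})  # collected session parameters

    if tag == "order-status":                                   # hypothetical webhook tag
        reply = f"Order {params.get('order_id', 'unknown')} is on its way."
    else:
        reply = "Sorry, I could not find that information."

    return jsonify({
        "fulfillment_response": {
            "messages": [{"text": {"text": [reply]}}]
        }
    })

if __name__ == "__main__":
    app.run(port=8080)
```

In practice the same handler would typically be deployed as a Cloud Function or Cloud Run service and registered as the webhook on the relevant Dialogflow CX flow.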
Posted 1 week ago
7.0 years
0 Lacs
India
On-site
WhizzHR is hiring a Media Solution Architect – AI/ML & Automation Focus.
Role Summary:
We are seeking a Media Solution Architect to lead the strategic design of AI-driven and automation-centric solutions across digital media operations. This role involves architecting intelligent, scalable systems that enhance efficiency across campaign setup, trafficking, reporting, QA, and billing processes. The ideal candidate will bring a strong blend of automation, AI/ML, and digital marketing expertise to drive innovation and operational excellence.
Key Responsibilities:
Identify and assess opportunities to apply AI/ML and automation across media operations workflows (e.g., intelligent campaign setup, anomaly detection in QA, dynamic taxonomy validation).
Design scalable, intelligent architectures using a combination of machine learning models, RPA, Python-based automation, and media APIs (e.g., Meta, DV360, YouTube).
Develop or integrate machine learning models for use cases such as performance prediction, media mix modeling, and anomaly detection in reporting or billing.
Ensure adherence to best practices in data governance, compliance, and security, particularly around AI system usage.
Partner with business stakeholders to prioritize high-impact AI/automation use cases and define clear ROI and success metrics.
Stay informed on emerging trends in AI/ML and translate innovations into actionable media solutions.
Ideal Profile:
7+ years of experience in automation, AI/ML, or data science, including 3+ years in marketing, ad tech, or digital media.
Strong understanding of machine learning frameworks for predictive modeling, anomaly detection, and NLP-based insight generation.
Proficiency in Python and libraries such as scikit-learn, TensorFlow, pandas, or PyTorch.
Experience with cloud-based AI platforms (e.g., Google Vertex AI, Azure ML, AWS SageMaker) and media API integrations.
Ability to architect AI-enhanced automations that improve forecasting, QA, and decision-making in media operations.
Familiarity with RPA tools (e.g., UiPath, Automation Anywhere); AI-first automation experience is a plus.
Demonstrated success in developing or deploying ML models for campaign optimization, fraud detection, or process intelligence.
Familiarity with digital media ecosystems such as Google Ads, Meta, TikTok, DSPs, and ad servers.
Excellent communication and stakeholder management skills, with the ability to translate technical solutions into business value.
Kindly share your resume at Hello@whizzhr.com
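For the anomaly-detection-in-reporting use case mentioned above, a minimal sketch with scikit-learn's IsolationForest is shown below; the metric columns, values, and contamination rate are illustrative, not from the posting.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical daily campaign metrics pulled from a reporting API.
df = pd.DataFrame({
    "spend":       [120.0, 118.5, 121.2, 119.8, 540.0, 122.3],
    "impressions": [10500, 10250, 10800, 10400, 10900, 39000],
    "clicks":      [310, 305, 322, 300, 318, 1200],
})

model = IsolationForest(contamination=0.2, random_state=42)
df["anomaly"] = model.fit_predict(df[["spend", "impressions", "clicks"]])  # -1 = anomaly

# Rows flagged here would be routed to QA review before billing sign-off.
print(df[df["anomaly"] == -1])
```

The same pattern scales to per-campaign baselines, with the contamination rate tuned against historically confirmed billing or trafficking errors.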
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Are you an individual who wants to play a game-changing role and make an impact in a fast-growing organization? We at Northern are waiting for you. Join us and unleash your potential!! We are hiring a D365 Solution Architect!! Join the core group of founding members at the NTE India to build an organization from the ground up.
PRIMARY OBJECTIVE OF POSITION:
This position will be at the forefront of the continued ERP digital transformation journey, acting as the subject matter expert for the company's Microsoft Dynamics 365 (D365) ERP migration from JD Edwards ERP as well as go-forward D365 maintenance and feature management. This role will be responsible overall for driving architectural design and decisions, development, and implementation of scalable and secure architecture across multiple systems and platforms.
MAJOR AREAS OF ACCOUNTABILITY:
Lead consultant on the architecture, design, and implementation of Microsoft Dynamics 365 (D365) ERP solutions, including but not limited to D365 F&O, D365 Commerce, Microsoft CRM, and Microsoft ISVs.
Collaborate with business stakeholders, product owners, business analysts, and technical teams to understand requirements, and design robust solutions that meet business and technical best practices and support organization strategies.
Provide technical and functional direction to systems and integration software engineers (internal and external to NTE) by designing end-to-end solutions that align with business needs, industry standards, and enterprise architecture guidelines, and support the digital transformation agenda by working to replace legacy systems with modern cloud-based solutions.
Ensure seamless data flow and process automation across systems and business functions.
Create, implement, and lead the D365 feature management process.
Consult on D365 release management topics such as, but not limited to, release notes reviews and upgrade strategies.
Implement and oversee the D365 licensing management process.
Develop and lead the future D365 storage management process.
Provide input to Enterprise Architecture that helps build reference architecture, frameworks, and toolkits to drive scale of adoption in the ecosystem. Ensure that Enterprise Architecture guidelines and recommendations are followed.
Develop and maintain a roadmap of the evolution of the ERP portfolio from current to future state, with input from business, technology, and vendor teams.
Deliver persuasive presentations to decision makers when introducing new/changing concepts, practices, and features.
Develop high-quality documentation including architecture diagrams, solution blueprints, and technical/functional specifications.
Promote and enforce standards and best practices for scalability, reusability, and cost minimization.
Apply advanced knowledge and understanding of architecture, application systems design, and integration to develop enterprise-level applications, extensions, and integration solutions, including major enhancements and interfaces, functions, and features.
Oversee and/or consult on application architecture implementation and modification activities, particularly for new and/or shared enterprise application solutions.
Keep the supervisor informed of important developments, potential problems, and related information necessary for effective management.
Coordinate and communicate plans and activities with others, as appropriate, to ensure a coordinated work effort and team approach.
Perform related work as apparent or assigned.
QUALIFICATIONS:
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Bachelor's Degree in Engineering, Computer Science, Systems Analysis, or a related field. Master's degree highly preferred.
At least 5 years of D365 Architect experience.
At least 5 years of ERP implementation consulting experience.
Proven track record providing architecture leadership in ERP system migrations.
Strong grasp of integration technologies, middleware platforms, and system orchestration tools.
Extensive experience with various information modeling techniques (such as data flow diagrams, entity-relationship diagrams, or create/read/update/delete matrices).
Knowledge of and experience with all modules within D365, and in-depth knowledge of one or more modules such as Finance, Operations, or Commerce.
Preferred candidates will have experience with implementing D365 Order Management.
Preferred candidates will have certifications in: D365 Finance Functional Consultant Associate, D365 Supply Chain Management Functional Consultant Associate, D365 Finance and Operations Apps Developer Associate, and Finance and Operations Apps Solution Architect Expert.
Extensive experience with D365 release/feature management, user roles/licensing, and database storage.
Working knowledge of Microsoft Azure services, including but not limited to API Gateway, Function Apps, and Azure Blob Storage.
High-level understanding of relational database management systems and other data structures.
Knowledge of business process re-engineering principles and processes.
Understanding of Infrastructure and Information/Data architecture.
Exceptional interpersonal skills in areas such as teamwork, facilitation, and negotiation.
Strong leadership skills through influencing.
Excellent analytical and technical skills.
Excellent written and verbal communication skills.
Excellent planning and organizational skills.
Ability to understand the long-term ("big picture") and short-term perspectives of situations.
Ability to translate business needs into actionable architecture requirements.
Ability to estimate the financial impact of architecture alternatives.
Ability to apply multiple technical solutions to business problems.
Ability to quickly comprehend the functions and capabilities of new technologies.
TECHNICAL QUALIFICATIONS:
Understanding of the following:
Azure DevOps (Repos using Git, Pipelines and CI/CD, Release Management, Test Plans, Boards)
Azure SQL Server
Azure Logic Apps
Azure Function Apps
Azure API Management
Azure Power Apps
SSRS Development
Power BI
Dynamics 365 extensions development
Dynamics 365 system administration, including release/feature management and user roles/licensing
Dynamics 365 ISV integrations (MediusFlow, RF Smart, SK Global, Vertex)
SQL Server
Microsoft Data Factory v2
Microsoft Database Storage Management – Warehouse, Data Lakes, Dataverse
Postman
Visual Studio 2019 development, including Azure Function Apps and Logic Apps
D365 development using C#, .NET Framework, and/or X++
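As one concrete illustration of the Azure/D365 integration skills listed above, the sketch below reads a standard Finance & Operations OData entity using an Azure AD app registration via MSAL. This is an assumption-laden example, not part of the posting: the tenant, environment URL, and credentials are placeholders, and the entity and field names should be checked against the target environment.

```python
import msal
import requests

TENANT = "contoso.onmicrosoft.com"                         # hypothetical tenant
ORG_URL = "https://contoso-prod.operations.dynamics.com"   # hypothetical D365 F&O environment

app = msal.ConfidentialClientApplication(
    client_id="<app-registration-id>",                     # placeholder credentials
    client_credential="<client-secret>",
    authority=f"https://login.microsoftonline.com/{TENANT}",
)
token = app.acquire_token_for_client(scopes=[f"{ORG_URL}/.default"])

resp = requests.get(
    f"{ORG_URL}/data/CustomersV3",                         # standard F&O OData data entity
    headers={"Authorization": f"Bearer {token['access_token']}"},
    params={"$top": 5, "$select": "CustomerAccount,OrganizationName"},
)
resp.raise_for_status()
for cust in resp.json()["value"]:
    print(cust["CustomerAccount"], cust["OrganizationName"])
```

A Logic App or Azure Function performing the same call is a common pattern when the integration needs to run on a schedule or in response to business events.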
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Delhi, India
Remote
Job Title: AI Engineer
Location: Remote
Employment Type: Full-time
About the Role:
We are seeking a skilled and motivated AI Engineer to help us build intelligent, agentic systems that drive real-world impact. In this role, you will develop, deploy, and maintain AI models and pipelines, working with large language models (LLMs), vector databases, and orchestration frameworks like LangChain. You will collaborate across teams to build robust, scalable AI-driven solutions.
Key Responsibilities:
Design and develop intelligent systems using LLMs, retrieval-augmented generation (RAG), and agentic frameworks.
Build and deploy AI pipelines using LangChain, vector stores, and custom tools.
Integrate models with production APIs and backend systems.
Monitor, fine-tune, and improve performance of deployed AI systems.
Collaborate with data engineers, product managers, and UX designers to deliver AI-first user experiences.
Stay up to date with advancements in generative AI, LLMs, and orchestration frameworks.
Required Qualifications:
2-5 years of experience in building and deploying machine learning or AI-based systems.
Hands-on experience with LangChain in building agent workflows or RAG pipelines.
Proficiency in Python and frameworks such as PyTorch, TensorFlow, or Scikit-learn.
Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
Strong understanding of prompt engineering, embeddings, and vector database operations (e.g., FAISS, Pinecone, Weaviate).
Familiarity with MLOps tools such as MLflow, SageMaker, or Vertex AI.
Preferred Qualifications:
Experience with large language models (e.g., GPT, Claude, LLaMA) and GenAI platforms (e.g., OpenAI, Bedrock, Anthropic).
Background in NLP, RAG architectures, or autonomous agents.
Experience in deploying AI applications via APIs and microservices.
Contributions to open-source LangChain or GenAI ecosystems.
Why Join Us?
Remote-first company working on frontier AI systems.
Opportunity to shape production-grade AI experiences used globally.
Dynamic, collaborative, and intellectually curious team.
Competitive compensation with fast growth potential.
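A minimal sketch of the agentic pattern this role references: the model either returns a JSON tool call or a plain-text answer, and the loop executes the tool and feeds the result back. Both call_llm and search_orders are hypothetical stand-ins, scripted here only so the loop runs end to end; in a real system call_llm would wrap a chat-completion API and the tool would hit a backend service.

```python
import json

def search_orders(customer_id: str) -> str:
    """Stub tool: in production this would call an order-management API."""
    return json.dumps({"customer": customer_id, "open_orders": 2})

TOOLS = {"search_orders": search_orders}

def call_llm(messages):
    """Hypothetical stand-in for a real chat-completion call (OpenAI, Bedrock, Vertex AI, ...)."""
    if not any(m["content"].startswith("Tool result:") for m in messages):
        return json.dumps({"tool": "search_orders", "args": {"customer_id": "C-1001"}})
    return "Customer C-1001 currently has 2 open orders."

def run_agent(user_query: str, max_steps: int = 3) -> str:
    messages = [
        {"role": "system", "content": 'Reply with JSON {"tool": ..., "args": {...}} to use a tool, or plain text to answer.'},
        {"role": "user", "content": user_query},
    ]
    reply = ""
    for _ in range(max_steps):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        try:
            action = json.loads(reply)                        # model requested a tool
            result = TOOLS[action["tool"]](**action["args"])
            messages.append({"role": "user", "content": f"Tool result: {result}"})
        except (json.JSONDecodeError, KeyError, TypeError):
            return reply                                      # plain-text final answer
    return reply

print(run_agent("How many open orders does customer C-1001 have?"))
```

Frameworks like LangChain wrap this same loop with tool schemas, memory, and tracing, but the underlying control flow is the one shown.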
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Python Developer (GCP)
Location: Chennai
Experience: 7-12 Years
Job Summary
We are seeking a Python Developer (GCP) with deep expertise in Python, Google Cloud Platform (GCP), and MLOps to lead end-to-end development and deployment of machine learning solutions. The ideal candidate is a versatile engineer capable of building both front-end and back-end systems, while also managing the automation and scalability of ML workflows and models in production.
Required Skills & Experience
Strong proficiency in Python, including OOP, data processing, and backend development.
3+ years of experience with Google Cloud Platform (GCP) and services relevant to ML & application deployment.
Proven experience with MLOps practices and tools such as Vertex AI, MLflow, Kubeflow, TensorFlow Extended (TFX), Airflow, or similar.
Hands-on experience in front-end development (React.js, Angular, or similar).
Experience in building RESTful APIs and working with Flask/FastAPI/Django frameworks.
Familiarity with Docker, Kubernetes, and CI/CD pipelines in cloud environments.
Experience with Terraform, Cloud Build, and monitoring tools (e.g., Stackdriver, Prometheus).
Understanding of version control (Git), Agile methodologies, and collaborative software development.
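For the RESTful API / FastAPI requirement above, a minimal model-serving sketch follows. It is illustrative only: the toy classifier stands in for a model that would normally be loaded from a registry or artifact store, and the feature names are hypothetical.

```python
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a model normally loaded from an artifact store (e.g. joblib.load or Vertex AI).
model = LogisticRegression().fit(
    [[1.0, 20.0], [48.0, 90.0], [3.0, 30.0], [60.0, 120.0]],  # tenure_months, monthly_spend
    [1, 0, 1, 0],                                             # 1 = churned
)

app = FastAPI()

class Features(BaseModel):
    tenure_months: float
    monthly_spend: float

@app.post("/predict")
def predict(features: Features):
    proba = model.predict_proba([[features.tenure_months, features.monthly_spend]])[0][1]
    return {"churn_probability": round(float(proba), 4)}

# Run locally with:  uvicorn main:app --reload
```

Packaged in a Docker image, the same app deploys cleanly to Cloud Run or GKE behind a CI/CD pipeline.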
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Company: A global client of KaN Talent Search (KTS)
Location: Hyderabad, India (Hybrid - 3 days in office)
Send resume to connect@kansearch.com
Summary:
We are seeking an experienced D365 Solution Architect to join our global client's team and lead their ERP digital transformation journey. This is a strategic role that will serve as the subject matter expert for Microsoft Dynamics 365 (D365) ERP migration and ongoing platform management. You'll be at the forefront of driving architectural design decisions, development, and implementation of scalable, secure solutions across multiple systems and platforms.
Key Responsibilities
Strategic Leadership & Architecture
Lead architectural design and implementation of Microsoft Dynamics 365 (D365) ERP solutions including F&O, D365 Commerce, Microsoft CRM, and Microsoft ISVs
Collaborate with business stakeholders, product owners, and technical teams to design robust solutions aligned with business strategies
Provide technical direction to internal and external engineering teams, designing end-to-end solutions that meet industry standards and enterprise architecture guidelines
D365 Platform Management
Create and implement D365 feature management processes
Consult on release management, upgrade strategies, and licensing optimization
Develop and oversee storage management processes for D365 environments
Enterprise Integration & Automation
Ensure seamless data flow and process automation across business functions
Design integration solutions including major enhancements, interfaces, and custom features
Apply advanced knowledge of application systems design to develop enterprise-level solutions
Documentation & Communication
Develop comprehensive documentation including architecture diagrams, solution blueprints, and technical specifications
Deliver persuasive presentations to decision-makers on new concepts and practices
Maintain roadmaps for ERP portfolio evolution from current to future state
Requirements
Education & Experience
Bachelor's degree in Engineering, Computer Science, Systems Analysis, or related field (Master's preferred)
Minimum 5+ years of D365 Architect experience
Minimum 5+ years of ERP implementation consulting experience
Proven track record providing architecture leadership in ERP system migrations
Experience in Retail and Commerce is critical
Technical Expertise
Comprehensive knowledge of all D365 modules with in-depth expertise in Finance, Operations, or Commerce
Strong understanding of integration technologies, middleware platforms, and system orchestration tools
Experience with information modeling techniques (data flow diagrams, entity-relationship diagrams, CRUD matrices)
Working knowledge of Microsoft Azure services (API Gateway, Function Apps, Azure Blob Storage)
Understanding of relational database management systems and data structures
Preferred Certifications
D365 Finance Functional Consultant Associate
D365 Supply Chain Management Functional Consultant Associate
D365 Finance and Operations Apps Developer Associate
Finance and Operations Apps Solution Architect Expert
Technical Skills
Azure DevOps (Git Repos, CI/CD Pipelines, Release Management, Test Plans)
Azure SQL Server, Logic Apps, Function Apps, API Management
Power Apps, Power BI, SSRS Development
D365 extensions development and system administration
D365 ISV integrations (MediusFlow, RF Smart, SK Global, Vertex)
Microsoft Data Factory v2, Database Storage Management
Visual Studio development, C#, .NET Framework, X++
SQL Server, Postman
What we offer:
Work-Life Balance & Flexibility
Hybrid Work Model: Only 3 days per week in our state-of-the-art Hyderabad headquarters
Traffic-Beat Flexibility: Option to leave office at 4 PM to avoid peak traffic, with flexibility to resume work from home
Seamless Commute: Complimentary pick-up and drop-off transportation service provided
Comprehensive Benefits Package
Family Health Insurance: Complete medical coverage for you and your family
Professional Development: Education and certification reimbursement programs to advance your career
Best-in-Class Benefits: Industry-leading compensation and benefits programs designed for employee wellbeing
Professional Growth
Lead cutting-edge Microsoft D365 implementations for a global enterprise
Work with the latest Azure technologies and cloud-based solutions
Mentor technical teams and drive architectural decisions with worldwide impact
Opportunity for continuous learning and professional development
Join a dynamic team working with cutting-edge Microsoft technologies while enjoying excellent work-life balance, comprehensive benefits, and unmatched flexibility. This role offers the perfect blend of strategic technical leadership, personal convenience, and family security.
About KaN Talent Search (KTS)
KTS partners with leading global organizations to deliver exceptional talent solutions. Our client for this role is a prominent North American multinational corporation and leading retailer of tools, equipment, and outdoor power equipment. The company operates across multiple markets and is currently undergoing significant ERP modernization, offering exciting opportunities for professional growth and technical leadership in a dynamic, innovation-driven environment.
Ready to lead the next generation of ERP architecture? Apply now to join our team and make a significant impact on enterprise digital transformation. Email your resume to connect@kansearch.com
Posted 1 week ago
2.0 - 7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title - Indirect Tax Analyst/Consultant - S&C GN-CFO&EV
Management Level: 11-Analyst / 09-Consultant
Location: Gurgaon, Mumbai, Bangalore, Pune, Hyderabad
Must-have skills: OneSource, Vertex, or Sabrix implementation
Good-to-have skills: Avalara, indirect tax functional experience
Experience: 2-7 years
Educational Qualification: MBA (Finance) or CA or CMA
Job Summary:
Provide leading practice on tax processes and blueprints for clients. Interact with clients to gather business tax requirements through multiple workshops. Analyze business requirements and identify best practices to implement a technical solution. Facilitate design sessions related to tax determination, calculation, recording, reporting, and compliance. Prepare key deliverables such as design documents, build and test documentation, training materials, and administration and procedural guides. Assist leaders on day-to-day operations as well as help create assets, points of view, and business proposals.
Roles & Responsibilities:
Ability to drive solutions independently
Adept at Microsoft PowerPoint, spreadsheet, and Power BI applications
Ability to work with cross-functional streams and multiple business-process stakeholders
Strong writing skills to build points of view on current industry trends
Good analytical and problem-solving skills with an aptitude to learn quickly
Excellent communication, interpersonal, and presentation skills
Cross-cultural competence with an ability to thrive in a dynamic consulting environment
Professional & Technical Skills:
MBA from a Tier-1 B-school, or CA, or CPA
2-7 years of work experience, preferably in financial areas such as order-to-cash, source-to-pay, and record-to-report with tax relevance
Strong hands-on experience in integration and tool implementations on the following platforms:
Tax types - VAT, GST, SUT, WHT, Digital Compliance Reporting
ERP - SAP or Oracle
Tax technologies - Vertex O Series, OneSource, SOVOS
Tax add-on tools - Vertex Accelerator, OneSource Global Next, LCR-Dixon
In-depth experience in functional configuration or integration of ERPs with external tax technologies to achieve higher automation
Good experience working on multiple tax types and business processes, with knowledge of tax processing
Good understanding of the tax technology landscape, trends, and architecture
Deep experience in transformation projects through multiple phases, such as planning, requirement gathering, designing, building, testing, and deployment
Experience in analysis and implementation of tax requirements for indirect taxes (VAT, GST, SUT) and withholding taxes, including their integration with supply chain, procurement, purchase-to-pay, record-to-report, order-to-cash, and so on
Additional Information:
An opportunity to work on transformative projects with key G2000 clients
Potential to co-create with leaders in strategy, industry experts, enterprise function practitioners, and business intelligence professionals to shape and recommend innovative solutions that leverage emerging technologies
Ability to embed responsible business into everything, from how you service your clients to how you operate as a responsible professional
Personalized training modules to develop your strategy & consulting acumen and grow your skills, industry knowledge, and capabilities
Opportunity to thrive in a culture that is committed to accelerating equality for all
Engage in boundaryless collaboration across the entire organization
About Our Company | Accenture
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
What You’ll Do
Handle data: pull, clean, and shape structured & unstructured data.
Manage pipelines: Airflow / Step Functions / ADF… your call.
Deploy models: build, tune, and push to production on SageMaker, Azure ML, or Vertex AI.
Scale: Spark / Databricks for the heavy lifting.
Automate processes: Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow.
Collaborate effectively: work with engineers, architects, and business professionals to solve real problems promptly.
What You Bring
3+ years of hands-on MLOps (4-5 yrs total software experience).
Proven experience with one hyperscaler (AWS, Azure, or GCP).
Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn.
Extensive experience handling and troubleshooting Kubernetes and proficiency in Dockerfile management.
Prototyping with open-source tools, selecting the appropriate solution, and ensuring scalability.
Analytical thinker, team player, with a proactive attitude.
Nice-to-Haves
SageMaker, Azure ML, or Vertex AI in production.
Dedication to clean code, thorough documentation, and precise pull requests.
Skills: MLflow, MLOps, scikit-learn, Airflow, SQL, PyTorch, ADF, Step Functions, Kubernetes, GCP, Kubeflow, Python, Databricks, TensorFlow, AWS, Azure, Docker, Seldon, Spark
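To illustrate the MLflow piece of this stack, here is a minimal tracking sketch around a scikit-learn model; the experiment name and hyperparameters are illustrative, and the Iris dataset stands in for real training data.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("churn-baseline")          # hypothetical experiment name
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))

    mlflow.log_params(params)                    # hyperparameters for this run
    mlflow.log_metric("accuracy", acc)           # evaluation metric
    mlflow.sklearn.log_model(model, "model")     # versioned artifact for later deployment
```

The logged run can then be promoted through a model registry and served via Seldon, SageMaker, or Vertex AI as part of the CI/CD flow described above.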
Posted 1 week ago
17.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.
Your Role And Responsibilities
As a managing consultant, you will serve as a client-facing practitioner working collaboratively with clients to deliver high-quality solutions and be a trusted business advisor with a deep understanding of SAP Accelerate delivery methodology or equivalent and associated work products. You will lead design workshops, support business development activities, and mentor and coach team members to develop their skills and knowledge. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries.
Your Primary Responsibilities Include
Strategic SAP Solution Leadership: Leading the technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs.
Team Delivery Leadership: Lead and manage a high-performing team of SAP consultants to deliver work products on time, on budget, and with quality.
Comprehensive Solution Delivery: Involvement in strategy development and solution implementation, leveraging your functional expertise of SAP with clients and team members and working with the latest technologies.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
17+ years of industry experience performing necessary SAP configurations, writing detailed specifications for the development of custom programs, testing, coordination of transports to production, and post-go-live support.
Strong hands-on experience of core S/4HANA Sales and Service processes and functionalities, e.g. Business Partner, Sales Quote/Order Processing, Sales Monitoring and Analytics; Claims, Returns and Refunds Management; Contracts and Subscriptions; Delivery Processing, Invoice Processing, Pricing, aATP, Settlement Management.
Architecture/design/detailing of processes, and designing and supporting integration with other SAP and non-SAP systems, e.g. GTS, BW, WMS, TM, EDI, Vertex, etc.
Experience with a minimum of 2 end-to-end SAP S/4HANA Sales & Service implementations and core S/4HANA Sales and Service processes and functionalities.
Led testing, data conversion, cutover, post-go-live, and hypercare.
Knowledge of business functions like Intercompany Sales, Third-Party Sales, Consignments, Service and Maintenance, Advanced Variant Configuration, International Trade, etc.
Deep understanding of business process integration of SAP Sales and Service with other modules, e.g. FI, MM, PP, PS, CO, etc.
Preferred Technical And Professional Experience
Knowledge of Fiori apps from the SAP Order-to-Cash perspective, and SAP's implementation methodologies. Should be able to create functional specifications to bridge any gap in the solution design using custom development, for example enhancements, interfaces, reports, data conversion, and forms.
Hands-on experience designing and supporting integration with other SAP and non-SAP systems, e.g. GTS, BW, WMS, TM, EDI, Vertex, etc.
Posted 1 week ago
India has seen a rise in demand for professionals with expertise in Vertex, a cloud-based tax technology solution. Companies across various industries are actively seeking individuals with skills in Vertex to manage their tax compliance processes efficiently. If you are a job seeker looking to explore opportunities in this field, read on to learn more about the Vertex job market in India.
The salary range for Vertex professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years in the industry can earn upwards of INR 12-15 lakhs per annum.
In the Vertex domain, a typical career progression path may include roles such as Tax Analyst, Tax Consultant, Tax Manager, and Tax Director. Professionals may advance from Junior Tax Analyst to Senior Tax Analyst, and eventually take on leadership roles as Tax Managers or Directors.
Alongside expertise in Vertex, professionals in this field are often expected to have skills in tax compliance, tax regulations, accounting principles, and data analysis. Knowledge of ERP systems and experience in tax software implementation can also be beneficial.
As you explore job opportunities in the Vertex domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare thoroughly for technical questions and demonstrate your understanding of tax compliance processes. With dedication and continuous learning, you can build a successful career in Vertex roles. Good luck!