7.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS:
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.

About VOIS India:
In 2009, VOIS started operating in India and has now established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations, and more.

Role: ML/GenAI Engineer
Experience: 7-10 Years
Job Location: Pune, EON IT Park (Hybrid)
Must-have skills: ML/AI/GenAI, Python, GCP, GitHub, LLMs

Job Summary:
We are seeking a highly skilled and motivated Machine Learning Engineer / GenAI Engineer to join our AI/ML team. The ideal candidate will have hands-on experience in building, deploying, and maintaining machine learning models and GenAI solutions using modern cloud platforms and MLOps practices. You will work closely with cross-functional teams to design scalable AI pipelines and integrate LLM-based solutions into production environments.

Key Responsibilities:
Design, develop, and deploy ML and GenAI solutions using Python and LLM frameworks.
Build and manage ML pipelines using Vertex AI Pipelines and CI/CD tools such as Cloud Build and GitHub Actions (a minimal pipeline sketch follows this listing).
Containerize applications using Docker and manage deployments on Google Cloud Platform (GCP) and Azure.
Collaborate with data engineers and DevOps teams to ensure seamless integration and deployment.
Write and maintain infrastructure as code using YAML.
Implement and manage CI/CD pipelines for ML workflows.
Work with GitHub for version control and collaboration.
Ensure model governance and compliance using protocols such as Model Context Protocol and A2A Protocol.
Participate in code reviews, design discussions, and team planning sessions.

Mandatory Skills:
Strong hands-on experience with Python.
Proficiency in Google Cloud Platform (GCP) and Azure Cloud.
Experience with Vertex AI Pipelines.
Expertise in CI/CD setup, Cloud Build, and GitHub Actions.
Proficiency in Docker and container orchestration.
Experience with YAML for configuration and infrastructure.
Familiarity with LLMs and GenAI tools.
Strong team collaboration and version control using GitHub.

Good-to-Have Skills:
GCP certifications (e.g., Professional ML Engineer, Cloud Architect).
Strong communication skills for cross-functional collaboration.
Experience with Azure Data Factory.
Knowledge of Model Context Protocol and A2A Protocol.
Familiarity with Google ADK (Agent Development Kit).
Exposure to React for building front-end interfaces.
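For context on the Vertex AI Pipelines work described above, here is a minimal, hedged sketch using the KFP SDK and the Vertex AI Python client. The project ID, region, bucket, and the component's logic are placeholders, not details from the listing.

```python
from kfp import compiler, dsl
from google.cloud import aiplatform

@dsl.component(base_image="python:3.11")
def train_model(learning_rate: float) -> str:
    # Placeholder training step; a real component would load data,
    # fit a model, and write artifacts to Cloud Storage.
    return f"trained with lr={learning_rate}"

@dsl.pipeline(name="demo-training-pipeline")
def pipeline(learning_rate: float = 0.01):
    train_model(learning_rate=learning_rate)

if __name__ == "__main__":
    # Compile the pipeline definition, then submit it to Vertex AI Pipelines.
    compiler.Compiler().compile(pipeline, "pipeline.json")
    aiplatform.init(project="my-project", location="us-central1",
                    staging_bucket="gs://my-bucket")  # placeholders
    job = aiplatform.PipelineJob(display_name="demo-training-pipeline",
                                 template_path="pipeline.json")
    job.run()
```

In a setup like the one the listing describes, a CI/CD workflow (Cloud Build or GitHub Actions) would typically compile and submit this pipeline on each merge rather than running it by hand.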
VOIS Equal Opportunity Employer Commitment India:
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 14th Overall Best Workplace in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies that put their employees at the heart of everything they do.
By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we'll be in touch!
Posted 2 months ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: Gen AI Developer
Required Technical Skill Set: Generative AI development
Desired Experience Range: 3-5 years
Notice Period: Immediate to 30 days only
Location of Requirement: Hyderabad
A virtual interview is planned for 6th June 2025 (Friday).

Job Description:
Experience with Generative AI, LLMs (Large Language Models), and Transformer models.
Python or R programming.
Experience with any cloud platform (AWS, Google Cloud, Azure).
Proficiency in data analysis tools (SQL, Pandas, etc.).
Very good experience in preparing low-level design/build documents.
Very good analytical and communication skills.

Must-Have:
Knowledge of at least one of the following Generative AI stacks: AWS (Bedrock, SageMaker, App Studio), Google (Vertex AI, GenAI App Builder, Gemini, etc.), Azure (Azure OpenAI Service, Azure Machine Learning). A minimal Gemini call is sketched after this listing.
Experience with any one cloud platform (AWS, Google Cloud, Azure).
Proficiency in data analysis tools (SQL, Pandas, etc.).
Experience with deep learning frameworks (TensorFlow, PyTorch, etc.).
Python or R programming.
Integrating AI models into applications via APIs.
Expertise in machine learning, deep learning, NLP, statistical modelling, and data visualization; experience with large datasets.

Good-to-Have:
Telecom or media domain knowledge.
Proficiency in client communication and a strong work ethic.
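Among the GenAI stacks this listing names is Google's Vertex AI with Gemini. As a rough illustration only, here is a minimal text-generation call using the vertexai SDK; the project ID, region, model name, and prompt are placeholders, not details from the listing.

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Placeholders: substitute a real project ID, region, and model version.
vertexai.init(project="my-project", location="us-central1")

model = GenerativeModel("gemini-1.5-flash")
response = model.generate_content(
    "Summarise the key clauses of this telecom service contract in three bullets."
)
print(response.text)
```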
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION
• Strong in Python with libraries such as polars, pandas, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers
• Must have: Deep understanding of modern recommendation systems, including two-tower, multi-tower, and cross-encoder architectures (a minimal two-tower sketch follows this listing)
• Must have: Hands-on experience with deep learning for recommender systems using TensorFlow, Keras, or PyTorch
• Must have: Experience generating and using text and image embeddings (e.g., CLIP, ViT, BERT, Sentence Transformers) for content-based recommendations
• Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations
• Must have: Proficiency in building embedding-based retrieval models, ANN search, and re-ranking strategies
• Must have: Strong understanding of user modeling, item representations, and temporal/contextual personalization
• Must have: Experience with Vertex AI for training, tuning, deployment, and pipeline orchestration
• Must have: Experience designing and deploying machine learning pipelines on Kubernetes (e.g., using Kubeflow Pipelines, Kubeflow on GKE, or custom Kubernetes orchestration)
• Should have experience with Vertex AI Matching Engine, or deploying Qdrant, FAISS, or ScaNN on GCP for large-scale retrieval
• Should have experience working with Dataproc (Spark/PySpark) for feature extraction, large-scale data prep, and batch scoring
• Should have a strong grasp of cold-start problem solving using metadata and multi-modal embeddings
• Good to have: Familiarity with multi-modal retrieval models combining text, image, and tabular features
• Good to have: Experience building ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate re-ranking
• Must have: Knowledge of recommender metrics (Recall@K, nDCG, HitRate, MAP) and offline evaluation frameworks
• Must have: Experience running A/B tests and interpreting results for model impact
• Should be familiar with real-time inference using Vertex AI, Cloud Run, or TF Serving
• Should understand feature store concepts, embedding versioning, and serving pipelines
• Good to have: Experience with streaming ingestion (Pub/Sub, Dataflow) for updating models or embeddings in near real time
• Good to have: Exposure to LLM-powered ranking or personalization, or hybrid recommender setups
• Must follow MLOps practices: version control, CI/CD, monitoring, and infrastructure automation

GCP Tools Experience:
ML & AI: Vertex AI, Vertex Pipelines, Vertex AI Matching Engine, Kubeflow on GKE, AI Platform
Embedding & Retrieval: Matching Engine, FAISS, ScaNN, Qdrant, GKE-hosted vector DBs (Milvus)
Storage: BigQuery, Cloud Storage, Firestore
Processing: Dataproc (PySpark), Dataflow (batch & stream)
Ingestion: Pub/Sub, Cloud Functions, Cloud Run
Serving: Vertex AI Online Prediction, TF Serving, Kubernetes-based custom APIs, Cloud Run
CI/CD & IaC: GitHub Actions, GitLab CI
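For readers unfamiliar with the two-tower architecture named above, here is a minimal, hedged Keras sketch: separate user and item encoders whose normalised dot product scores relevance. The cardinalities, layer sizes, and loss are illustrative assumptions, not the hiring team's design.

```python
import tensorflow as tf

# Assumed (illustrative) cardinalities and embedding width.
NUM_USERS, NUM_ITEMS, DIM = 10_000, 50_000, 64

user_id = tf.keras.Input(shape=(), dtype=tf.int32, name="user_id")
item_id = tf.keras.Input(shape=(), dtype=tf.int32, name="item_id")

# User tower: id embedding followed by a small projection layer.
u = tf.keras.layers.Embedding(NUM_USERS, DIM)(user_id)
u = tf.keras.layers.Dense(DIM, activation="relu")(u)
u = tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=-1))(u)

# Item tower: identical structure, separate weights.
v = tf.keras.layers.Embedding(NUM_ITEMS, DIM)(item_id)
v = tf.keras.layers.Dense(DIM, activation="relu")(v)
v = tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=-1))(v)

# Cosine score between the towers, trained as a logit on click/no-click labels.
score = tf.keras.layers.Dot(axes=-1)([u, v])

model = tf.keras.Model(inputs=[user_id, item_id], outputs=score)
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
# At serving time the item tower's embeddings are typically precomputed and
# indexed in an ANN store (e.g., ScaNN or Matching Engine) for top-K retrieval.
```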
Posted 2 months ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
JOB DESCRIPTION
• Strong in Python and experience with Jupyter notebooks and Python packages like polars, pandas, numpy, scikit-learn, matplotlib, etc.
• Must have: Experience with the machine learning lifecycle, including data preparation, training, evaluation, and deployment
• Must have: Hands-on experience with GCP services for ML & data science
• Must have: Experience with vector search and hybrid search techniques
• Must have: Experience with embeddings generation using models like BERT, Sentence Transformers, or custom models
• Must have: Experience in embedding indexing and retrieval (e.g., Elastic, FAISS, ScaNN, Annoy); a minimal sketch follows this listing
• Must have: Experience with LLMs and use cases like RAG (Retrieval-Augmented Generation)
• Must have: Understanding of semantic vs. lexical search paradigms
• Must have: Experience with Learning to Rank (LTR) techniques and libraries (e.g., XGBoost, LightGBM with LTR support)
• Should be proficient in SQL and BigQuery for analytics and feature generation
• Should have experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark
• Should have experience deploying models and services using Vertex AI, Cloud Run, or Cloud Functions
• Should be comfortable working with BM25 ranking (via Elasticsearch or OpenSearch) and blending it with vector-based approaches
• Good to have: Familiarity with Vertex AI Matching Engine for scalable vector retrieval
• Good to have: Familiarity with TensorFlow Hub, Hugging Face, or other model repositories
• Good to have: Experience with prompt engineering, context windowing, and embedding optimization for LLM-based systems
• Should understand how to build end-to-end ML pipelines for search and ranking applications
• Must have: Awareness of evaluation metrics for search relevance (e.g., precision@k, recall, nDCG, MRR)
• Should have exposure to CI/CD pipelines and model versioning practices

GCP Tools Experience:
ML & AI: Vertex AI, Vertex AI Matching Engine, AutoML, AI Platform
Storage: BigQuery, Cloud Storage, Firestore
Ingestion: Pub/Sub, Cloud Functions, Cloud Run
Search: Vector databases (e.g., Matching Engine, Qdrant on GKE), Elasticsearch/OpenSearch
Compute: Cloud Run, Cloud Functions, Vertex Pipelines, Cloud Dataproc (Spark/PySpark)
CI/CD & IaC: GitLab/GitHub Actions
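As a rough illustration of the embeddings-plus-vector-retrieval combination this listing describes, the sketch below pairs a Sentence Transformers encoder with a FAISS inner-product index. The model name, corpus, and query are placeholders, not details from the listing.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Hypothetical document corpus and query, for illustration only.
docs = ["return policy for electronics", "track my order", "warranty claims process"]
query = "how do I check where my package is?"

model = SentenceTransformer("all-MiniLM-L6-v2")

# Encode and L2-normalise so inner product behaves like cosine similarity.
doc_vecs = model.encode(docs, normalize_embeddings=True).astype("float32")
query_vec = model.encode([query], normalize_embeddings=True).astype("float32")

index = faiss.IndexFlatIP(doc_vecs.shape[1])  # exact inner-product index
index.add(doc_vecs)

scores, ids = index.search(query_vec, k=2)
for score, idx in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {docs[idx]}")
```

In a hybrid setup like the one described, these vector scores would be blended with BM25 results (e.g., from Elasticsearch or OpenSearch) before a learning-to-rank stage.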
Posted 2 months ago
3.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role - Backend Developer
Experience - 3-7 years
Location - Bangalore
● Bachelor's/Master's in Computer Science from a reputed institute/university
● 3-7 years of strong experience in building Java/Golang/Python based server-side solutions
● Strong in data structures, algorithms, and software design
● Experience in designing and building RESTful microservices
● Experience with server-side frameworks such as JPA (Hibernate/Spring Data), Spring, Vert.x, Spring Boot, Redis, Kafka, Lucene/Solr/Elasticsearch, etc.
● Experience in data modeling and design, and database query tuning
● Experience in MySQL and strong understanding of relational databases
● Comfortable with agile, iterative development practices
● Excellent communication (verbal & written), interpersonal, and leadership skills
● Previous experience as part of a start-up or a product company
● Experience with AWS technologies would be a plus
● Experience with reactive programming frameworks would be a plus
● Contributions to open source are a plus
● Familiarity with deployment architecture principles and prior experience with container orchestration platforms, particularly Kubernetes, would be a significant advantage
Posted 2 months ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Spyne
At Spyne, we are transforming how cars are marketed and sold with cutting-edge Generative AI. What started as a bold idea of using AI-powered visuals to help auto dealers sell faster online has now evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A funding from Accel, Vertex Ventures, and other top investors, we're scaling at breakneck speed:
Launched industry-first AI-powered Image, Video & 360° solutions for automotive dealers
Launching a GenAI-powered Automotive Retail Suite to power Inventory, Marketing, and CRM for dealers
Onboarded 1,500+ dealers across the US, EU, and other key markets in the past 2 years since launch
Gearing up to onboard 10K+ dealers across a global market of 200K+ dealers
150+ member team with a near-equal split between R&D and GTM
Learn more about our products: Spyne AI Products - StudioAI, RetailAI; Series A announcement - CNBC-TV18, Yourstory

Role Overview
We're looking for a strategic, execution-first Head of Brand & Community who can:
Build a powerful U.S. brand presence from scratch
Grow a community of car dealers (India-first, then global)
Drive thought leadership & Tier-1 PR
Manage and grow multi-platform social media that drives narrative and conversion
This is a first-principles, on-the-ground role, not for brand managers who rely on large budgets or external playbooks. You must be both the architect and the executor of brand trust, visibility, and influence.
📍 Location: Gurugram (Work from Office, 5 days a week)
🖥 Role: Full-time. Founder's Office - Community, Branding & PR

Key Responsibilities
1. Brand Strategy & Positioning (U.S. Focus)
Define and evolve Spyne's U.S. brand promise in under 10 words
Build messaging playbooks for different dealer personas
Own all brand touchpoints: copy, tone, visual identity, narrative flow
Translate GTM goals into distinct, high-recall brand moments
Work closely with product, design, and founders to create proof-led storytelling
2. Dealer Community Building (India-first)
Build the dealer community on WhatsApp and Telegram from 0 to 5,000 engaged members
Personally lead offline dealer meetups in India (starting with 3-4 key auto hubs)
Identify brand evangelists from within the dealer base and formalize an Ambassador Program
Launch India's first dealer-led "NADA-style" event experience by end of year
3. PR & Comms Ownership
Own media coverage in Tier-1 publications across India and the U.S. (TechCrunch, Automotive News, YourStory, etc.)
Create full media kits: founder bios, press releases, op-eds, quote banks
Build and leverage a strong network of journalists and editors
Run press cycles across product launches, funding events, and founder thought leadership
4. Social Media Strategy & Execution
Build and execute a content calendar across LinkedIn, Twitter/X, Instagram, Reddit, and YouTube
Prioritize founder-led content, UGC from dealers, memes, micro-videos, and visual carousels
Track and grow key metrics: organic reach, CTR, DMs initiated, share velocity
Monitor Reddit communities (e.g., r/AskCarSales) and embed brand voice authentically

Ideal Candidate
You're a great fit if you:
Have built a brand from scratch in the U.S. or other developed markets (not just maintained one)
Have executed a campaign that got 100K+ organic views without paid media
Know how to get a journalist to care, and have proof you've done it before
Have led community or grassroots engagement efforts, even without calling it that
Are as comfortable with Google Docs and Canva as you are with podcast interviews and pitch decks
Can go from "zero to 100 engaged users" in a city by hustling, listening, and storytelling
Are obsessive about tone, timing, and storytelling, and allergic to fluff

You're likely not a fit if you:
Rely on agencies to generate narratives or distribute content
Think branding = logo and ad copy
Need layers of approvals to act
Can't write your own press pitch or run social yourself

Preferred background:
6-10 years of experience in Brand/PR/Comms roles, preferably in B2B SaaS or Mobility/Auto-Tech
Tier 1 MBA with post-MBA experience in Marketing & Branding
Ground-up experience building engaged communities or customer ecosystems
Strong storytelling, copywriting, and media handling skills
Demonstrated success managing global brand initiatives and media coverage
Hands-on experience managing social media channels and content pipelines
Bonus: Experience working with U.S./EU markets or the automotive domain

Why Spyne?
Culture: High-ownership, zero-politics, execution-first
Growth: $5M to $20M ARR trajectory
Learning: Work with top GTM leaders and startup veterans
Exposure: Global exposure across U.S., EU, and India markets
Compensation: Competitive base + performance incentives + stock options
Posted 2 months ago
8.0 years
0 Lacs
Haveli, Maharashtra, India
On-site
Consultant - SAP FICO - Tax
Job Date: May 4, 2025
Job Requisition Id: 58500
Location: Pune, IN; Hyderabad, TG, IN; Bangalore, KA, IN; Indore, IN; Gurgaon, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future.
We are looking forward to hiring SAP FICO professionals in the following areas:

Job Description: SAP FICO Tax
Experience required: 8-10 years

Key Responsibilities
Technology
Perform tax configuration and testing in any tax reporting application and SAP
Experience in baseline configuration in SAP / Vertex
Experience in SAP FI, MM, and SD
Provide support to business users in resolving tax-related questions and issues; troubleshoot and handle remedy tickets
Proficient in handling product mapping file creation and mass upload in Vertex
Business
Interact with tax departments and other business segments for overall design/implementation and maintenance
Work with deployment teams to deploy the solution to end users

Qualifications And Requirements
Essential qualifications
Bachelor's degree or professional qualification in tax, accounting, finance, or a related field required
Computer science, information technology, and project management certification is a plus
Good communication, verbal, and written skills
Key competencies in technologies
5+ years of progressive relevant experience in SAP and OneSource Indirect Tax Application
Knowledge of indirect tax and withholding tax (preferred)
SAP experience in FI, MM, and SD (preferred)
Hands-on experience with Vertex integration
Advanced MS Excel capabilities and an ability to manage significant amounts of data
Strong communication and interpersonal skills, with a proven ability to communicate tax issues and requirements clearly to non-tax team members and work effectively across functional teams
Excellent project management, organizational, and documentation skills, including the ability to multitask and prioritize
Should be conversant with support processes and methodologies
Demonstrated ability to work well in a team environment and collaborate with various people and organizations to develop win/win results
High level of proficiency and understanding of information technology and the linkage between business processes, people, and systems
Should have worked on a large project with multiple teams

Other Skills And Abilities
Ability to work in a global distributed setting without supervision
Self-driven, proactive, systems thinking

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
Flexible work arrangements, free spirit, and emotional positivity
Agile self-determination, trust, transparency, and open collaboration
All support needed for the realization of business goals
Stable employment with a great atmosphere and an ethical corporate culture
Posted 2 months ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
SquareShift Technologies is a Google Cloud Premier Partner helping enterprises across industries modernize using Cloud, Data, and GenAI solutions. We're expanding our India GTM team and looking for a Business Development Representative (BDR) to drive top-of-funnel activities, especially in partnership with Google Cloud field teams.

Your Responsibilities
Identify and qualify new business opportunities across BFSI, Healthcare, SaaS, and Technology sectors in India
Collaborate closely with Google Cloud FSRs, Partner Engineers (PEs), and Partner Managers to drive joint sales opportunities
Execute outbound campaigns via emails, cold calls, LinkedIn, and virtual events
Leverage tools like HubSpot, Apollo.io, Clay.io, Instantly, Nook, and LinkedIn Sales Navigator for lead generation and tracking
Follow up on partner referrals, Google-led events, and webinars to convert interest into meetings
Support go-to-market motions for key offerings such as Looker migrations, Vertex AI solutions, and GenAI-powered custom applications

What We're Looking For
1-3 years of experience in B2B sales, inside sales, or BDR roles focused on Indian enterprises
Prior experience selling professional services or IT consulting (not product/SaaS sales)
Exposure to cloud platforms (Google Cloud or AWS) and understanding of services-led sales
Familiarity with data analytics, AI/ML, or GenAI projects
Excellent written and verbal communication; confident in engaging mid-to-senior-level stakeholders
Hands-on with CRM and outbound sales tools like HubSpot, Apollo, and LinkedIn Sales Navigator
Willingness to learn, grow, and transition into a quota-carrying inside sales role in the future
Bonus if you have experience in Google Cloud's partner ecosystem

Why Join SquareShift?
Strong co-sell motion with Google Cloud field teams
Real opportunities to work on cloud, data, and GenAI transformation projects
Transparent culture with high ownership and fast growth
Clear path to move from BDR to a full-time inside sales role with revenue targets
Exposure to cutting-edge enterprise tech and emerging AI solutions

Interested candidates, please share your CV at: sruthi.c@squareshift.co
Posted 2 months ago
10.0 - 13.0 years
5 - 9 Lacs
Hyderābād
On-site
DT-US Product Engineering - Engineering Manager
We are seeking an exceptional Engineering Manager who combines strong technical leadership with a proven track record of delivering customer-centric solutions. This role requires demonstrated experience in leading engineering teams, fostering engineering excellence, and driving outcomes through incremental and iterative delivery approaches.

Work you will do
The Engineering Manager will be responsible for leading engineering teams to deliver high-quality solutions while ensuring proper planning, code integrity, and alignment with customer goals. This role requires extensive experience in modern software engineering practices and methodologies, with a focus on customer outcomes and business impact.

Project Leadership and Management:
Lead engineering teams to deliver solutions that solve complex problems with valuable, viable, feasible, and maintainable outcomes
Establish and maintain coding standards, quality metrics, and technical debt management processes
Design and implement evolutionary release plans including alpha, beta, and MVP stages

Strategic Development:
Be the technical advocate for engineering teams throughout the end-to-end lifecycle of product development
Drive engineering process improvements and innovation initiatives
Develop and implement strategies for continuous technical debt management

Team Mentoring and Development:
Lead and mentor engineering teams, fostering a culture of engineering excellence and continuous learning
Actively contribute to team velocity through hands-on involvement in design, configuration, and coding
Establish performance metrics and career development pathways for team members
Drive knowledge-sharing initiatives and best practices across the organization
Provide technical guidance and code reviews to ensure high-quality deliverables

Customer Engagement and Delivery:
Lead customer engagement initiatives before, during, and after delivery
Drive rapid, inexpensive experimentation to arrive at optimal solutions
Implement incremental and iterative delivery approaches to navigate complexity
Foster high levels of customer engagement throughout the development lifecycle

Technical Implementation:
Ensure proper implementation of DevSecOps practices and CI/CD pipelines
Oversee deployment techniques including Blue-Green and Canary deployments
Drive the adoption of modern software engineering practices and methodologies
Maintain oversight of architecture designs and non-functional requirements

Technical Expertise Requirements:
Must Have:
Modern Software Engineering: Advanced knowledge of Agile methodologies, DevSecOps, and CI/CD practices
Technical Leadership: Proven experience in leading engineering teams and maintaining code quality
Customer-Centric Development: Experience in delivering solutions through experimentation and iteration
Architecture & Design: Strong understanding of software architecture principles and patterns
Quality Assurance: Experience with code review processes and quality metrics
Cloud Platforms: Strong experience with at least one major cloud platform (AWS/Azure/GCP) and its ML services (SageMaker/Azure ML/Vertex AI)
Version Control & Collaboration: Strong proficiency with Git and collaborative development practices
Deployment & Operations: Experience with modern deployment techniques and operational excellence
AI/ML Engineering: Experience with machine learning frameworks (TensorFlow, PyTorch), MLOps practices, and AI model deployment
Data Processing: Knowledge of data processing tools and pipelines for AI/ML applications
Domain-Specific Knowledge and Experience: Custom, Mobile, Data & Analytics, RPA, or Packages

Good to Have:
Cloud Platforms: Experience with major cloud providers and their services
Package Implementations: Experience with enterprise software package configurations
Test Automation: Knowledge of automated testing frameworks and practices
Container Technologies: Experience with Docker and Kubernetes
Infrastructure as Code: Knowledge of infrastructure automation tools
Advanced AI/ML: Experience with large language models, deep learning architectures, and AI model optimization
AI Platforms: Familiarity with enterprise AI platforms like Databricks, SageMaker, or Azure ML

Education: Advanced degree in Computer Science, Software Engineering, or a related field, or equivalent experience.

Qualifications:
10-13 years of software engineering experience with at least 5 years in technical leadership roles
Proven track record of leading and delivering large-scale software projects
Strong experience in modern software development methodologies and practices
Demonstrated ability to drive engineering excellence and team performance
Experience in stakeholder management and cross-functional collaboration
Expert-level proficiency in software development and technical leadership
Strong track record of implementing engineering best practices and quality standards
Excellent oral and written communication skills, including presentation abilities

The Team
Information Technology Services (ITS) helps power Deloitte's success. ITS drives Deloitte, which serves many of the world's largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including: security, risk & compliance; technology support; infrastructure; applications; relationship management; strategy; deployment; PMO; financials; and communications.

Product Engineering (PxE)
The Product Engineering (PxE) team is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development.

Work Location: Hyderabad

Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 303079
Posted 2 months ago
8.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
We are seeking a Technical Lead with strong application development expertise on Google Cloud Platform (GCP). The successful candidate will provide technical leadership in designing and implementing robust, scalable cloud-based solutions. If you are an experienced professional passionate about GCP technologies and committed to staying abreast of emerging trends, apply today.

Responsibilities
Design, develop, and deploy cloud-based solutions using GCP, establishing and adhering to cloud architecture standards and best practices
Hands-on coding experience in building Java applications using GCP native services like GKE, Cloud Run, Cloud Functions, Firestore, Cloud SQL, Pub/Sub, etc.
Develop low-level application architecture designs based on enterprise standards
Choose appropriate GCP services that meet functional and non-functional requirements
Demonstrate comprehensive knowledge of GCP PaaS, serverless, and database services
Provide technical leadership to development and infrastructure teams, guiding them throughout the project lifecycle
Ensure all cloud-based solutions comply with security and regulatory standards
Enhance cloud-based solutions to optimize performance, cost, and scalability
Stay up to date with the latest cloud technologies and trends in the industry
Familiarity with GCP GenAI solutions and models, including Vertex AI, code-bison, and Gemini models, is preferred but not required
Hands-on experience with front-end technologies like Angular or React is an added advantage

Requirements
Bachelor's or Master's degree in Computer Science, Information Technology, or a similar field
Must have 8+ years of extensive experience in designing, implementing, and maintaining applications on GCP
Comprehensive expertise in GCP services such as GKE, Cloud Run, Cloud Functions, Cloud SQL, Firestore, Firebase, Apigee, App Engine, Gemini Code Assist, Vertex AI, Spanner, Memorystore, Service Mesh, and Cloud Monitoring
Solid understanding of cloud security best practices and experience in implementing security controls in GCP
Thorough understanding of cloud architecture principles and best practices
Experience with automation and configuration management tools like Terraform and a sound understanding of DevOps principles
Proven leadership skills and the ability to mentor and guide a technical team
Posted 2 months ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Who We Are
Samsara (NYSE: IOT) is the pioneer of the Connected Operations™ Cloud, a platform that enables organizations that depend on physical operations to harness Internet of Things (IoT) data to develop actionable insights and improve their operations. At Samsara, we are helping improve the safety, efficiency, and sustainability of the physical operations that power our global economy. Representing more than 40% of global GDP, these industries are the infrastructure of our planet, including agriculture, construction, field services, transportation, and manufacturing, and we are excited to help digitally transform their operations at scale.
Working at Samsara means you'll help define the future of physical operations and be on a team that's shaping an exciting array of product solutions, including Video-Based Safety, Vehicle Telematics, Apps and Driver Workflows, Equipment Monitoring, and Site Visibility. As part of a recently public company, you'll have the autonomy and support to make an impact as we build for the long term.

About the role
Samsara Technologies India Private Limited is looking for a Senior Manager, Business Applications Engineering - Corporate Systems. As a Senior Manager, Business Applications Engineering, you will be responsible for delivering transformational initiatives across both business and IT. You will work with multiple cross-functional teams on large, highly complex projects, in addition to mentoring our growing Automation Center of Excellence. As a member of the Samsara IT management team, the Senior Manager is an experienced thought leader who drives complex, transformational solution opportunities that will deliver value to our customers.
This is a hybrid position requiring 3 days per week in our Bengaluru office and 2 days working remotely. Relocation assistance will not be provided for this role.

You should apply if:
You want to impact the industries that run our world: Your efforts will result in real-world impact, helping to keep the lights on, get food into grocery stores, reduce emissions, and, most importantly, ensure workers return home safely.
You are the architect of your own career: If you put in the work, this role won't be your last at Samsara. We set up our employees for success and have built a culture that encourages rapid career development, with countless opportunities to experiment and master your craft in a hyper-growth environment.
You're energized by our opportunity: The vision we have to digitize large sectors of the global economy requires your full focus and best efforts to bring forth creative, ambitious ideas for our customers.
You want to be with the best: At Samsara, we win together, celebrate together, and support each other. You will be surrounded by a high-calibre team that will encourage you to do your best.

In this role, you will:
Manage the Business System Engineering (BSE) team, located in Bengaluru. You will build and deliver future-state system capabilities for the Finance, Accounting, and Supply Chain business organizations. Additionally, this role will lead our Automation Center of Excellence, responsible for GenAI, robotic process automation (bots), and workflow automation
Work across internal stakeholders at all levels to gain alignment and establish metrics across the team to measure initiative ROI
Support our transition to Agile as Business Systems move to a more product-based organization
Perform all people management functions: performance reviews, staff calls, 1:1s, recruiting, hiring, onboarding, career development, talent retention, motivation, etc.
Own requirements and solutioning as needed, and establish the funding and resource plan to execute
Partner with other IT and Engineering managers across the organization to ensure interdependencies between teams are closely managed and to leverage expertise and skills from other teams as needed
Communicate effectively with technical and non-technical audiences, and develop and deliver engaging presentations that influence and persuade leadership on recommendations
Review program requirements and deployment/launch orchestration plans and provide appropriate recommendations to improve quality, de-risk, or pre-empt issues
Continually question the current model and methods: track productivity and quality metrics, and come up with creative ideas to improve the efficiency and effectiveness of the team for continuous improvement
Champion, role model, and embed Samsara's cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices
Hire, develop, and lead an inclusive, engaged, and high-performing team

Minimum requirements for the role:
10+ years of relevant experience in software engineering and/or product development, or an equivalent combination of education and work experience
Experience in implementing NetSuite ERP applications that provide a great customer and user experience
5+ years of managerial/leadership experience
Experience supporting and driving operational excellence and run-the-business activities
Curiosity to learn how AI will drive change in Business Technology organizations, and the drive to lead that change
Ability to manage a multi-disciplinary team and succeed in partnering with and driving virtual (globally distributed) teams, with proven in-depth knowledge and experience leveraging agile development practices (Scrum, Pivotal Labs, SAFe, etc.) while assigning roles and coaching employees for optimal team performance and dynamics
Demonstrated accomplishments managing projects and software development delivery teams working with diverse technology stacks (custom development, package integration) in the delivery of large-scale and complex projects
Notable oral and written communication skills, and remarkable analytic, problem-solving, negotiation, and prioritization skills
Experience leading cross-functional agile teams using appropriate measures and KPIs to validate and report on software quality, and ensuring quality is driven into the heart of the development process from requirements definition through to delivery

An ideal candidate also has:
Experience driving teams that have implemented other financial systems or ERP tools; Xactly, Vertex, Workato, Stripe, and Zip are a plus!
Experience in using GenAI or other artificial intelligence tools to drive business efficiency

At Samsara, we welcome everyone regardless of their background. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, gender, gender identity, sexual orientation, protected veteran status, disability, age, and other characteristics protected by law. We depend on the unique approaches of our team members to help us solve complex problems and want to ensure that Samsara is a place where people from all backgrounds can make an impact.

Benefits
Full-time employees receive a competitive total compensation package along with employee-led remote and flexible working, health benefits, the Samsara for Good charity fund, and much, much more. Take a look at our Benefits site to learn more.

Accommodations
Samsara is an inclusive work environment, and we are committed to ensuring equal opportunity in employment for qualified persons with disabilities. Please email accessibleinterviewing@samsara.com or click here if you require any reasonable accommodations throughout the recruiting process.

Flexible Working
At Samsara, we embrace a flexible working model that caters to the diverse needs of our teams. Our offices are open for those who prefer to work in person, and we also support remote work where it aligns with our operational requirements. For certain positions, being close to one of our offices or within a specific geographic area is important to facilitate collaboration, access to resources, or alignment with our service regions. In these cases, the job description will clearly indicate any working-location requirements. Our goal is to ensure that all members of our team can contribute effectively, whether they are working on-site, in a hybrid model, or fully remotely. All offers of employment are contingent upon an individual's ability to secure and maintain the legal right to work at the company and in the specified work location, if applicable.

Fraudulent Employment Offers
Samsara is aware of scams involving fake job interviews and offers. Please know we do not charge fees to applicants at any stage of the hiring process. Official communication about your application will only come from emails ending in '@samsara.com' or '@us-greenhouse-mail.io'. For more information regarding fraudulent employment offers, please visit our blog post here.
Posted 2 months ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, with 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role
The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets, including threat events collected via telemetry data and associated metadata, along with IT asset information and contextual information about threat exposure based on additional processing. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near-real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more.
As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists and threat analysts to design, implement, and maintain scalable ML pipelines that will be used for data preparation, cataloging, feature engineering, model training, and model serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You'll Do
Help design, build, and facilitate adoption of a modern Data+ML platform
Modularize complex ML code into standardized and repeatable components
Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring
Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines (a minimal experiment-tracking sketch follows this listing)
Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines
Review code changes from data scientists and champion software development best practices
Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment

What You'll Need
B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience
3+ years of experience developing and deploying machine learning solutions to production
Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used
3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
Experience building data platform products or features with (one of) Apache Spark, Flink or comparable tools in GCP; experience with Iceberg is highly desirable
Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.)
Production experience with infrastructure-as-code tools such as Terraform and FluxCD
Expert-level experience with Python; Java/Scala exposure is recommended
Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools
Expert-level experience with CI/CD frameworks such as GitHub Actions
Expert-level experience with containerization frameworks
Strong analytical and problem-solving skills, capable of working in a dynamic environment
Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes

Experience With The Following Is Desirable
Go
Iceberg
Pinot or other time-series/OLAP-style databases
Jenkins
Parquet
Protocol Buffers/gRPC

Benefits Of Working At CrowdStrike
Remote-friendly and flexible work culture
Market leader in compensation and equity awards
Comprehensive physical and mental wellness programs
Competitive vacation and holidays for recharge
Paid parental and adoption leaves
Professional development opportunities for all employees regardless of level or role
Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
Vibrant office culture with world-class amenities
Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program.
CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
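The listing above names MLflow among its ML-platform tools. As a minimal sketch of the experiment-tracking pattern such a platform typically standardizes (assuming a local tracking server and a toy scikit-learn model, neither of which comes from the listing):

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder tracking server and experiment name.
mlflow.set_tracking_uri("http://localhost:5000")
mlflow.set_experiment("demo-experiment")

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    # Log parameters, metrics, and the fitted model so runs stay comparable.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")
```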
Posted 2 months ago
7.0 years
0 Lacs
Kerala, India
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper scale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. What You’ll Do Help design, build, and facilitate adoption of a modern Data+ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud first environment What You’ll Need B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7 + years related experience; or M.S. 
with 5+ years of experience; or Ph.D with 6+ years of experience. 3+ years experience developing and deploying machine learning solutions to production. Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised / unsupervised approaches: how, why, and when and labeled data is created and used 3+ years experience with ML Platform tools like Jupyter Notebooks, NVidia Workbench, MLFlow, Ray, Vertex AI etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.) Production experience with infrastructure-as-code tools such as Terraform, FluxCD Expert level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal Crowdstrike tools Expert level experience with CI/CD frameworks such as GitHub Actions Expert level experience with containerization frameworks Strong analytical and problem solving skills, capable of working in a dynamic environment Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. Experience With The Following Is Desirable Go Iceberg Pinot or other time-series/OLAP-style database Jenkins Parquet Protocol Buffers/GRPC VJ1 Benefits Of Working At CrowdStrike Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections Vibrant office culture with world class amenities Great Place to Work Certified™ across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. 
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
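The standardized Python interfaces this role describes typically take the form of a small component contract that data scientists implement without touching orchestration details. The following is a minimal sketch under assumptions: names such as PipelineStep, StepContext, and run_pipeline are illustrative placeholders, not CrowdStrike's internal tooling, and a real deployment would map each step onto an orchestrator task (Airflow, for example).

```python
# Minimal sketch of a standardized pipeline-component interface (illustrative;
# names such as PipelineStep and run_pipeline are assumptions, not internal APIs).
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class StepContext:
    """Configuration and shared state passed between steps."""
    params: Dict[str, Any] = field(default_factory=dict)
    artifacts: Dict[str, Any] = field(default_factory=dict)


class PipelineStep(ABC):
    """Contract each data-prep, feature, or training step implements."""

    name: str = "unnamed-step"

    @abstractmethod
    def run(self, ctx: StepContext) -> StepContext:
        """Consume the context, add artifacts, and return it for the next step."""


class LoadEvents(PipelineStep):
    name = "load-events"

    def run(self, ctx: StepContext) -> StepContext:
        # In practice this would read from blob storage or the data lakehouse.
        ctx.artifacts["events"] = [{"host": "h1", "score": 0.2}, {"host": "h2", "score": 0.9}]
        return ctx


class ScoreEvents(PipelineStep):
    name = "score-events"

    def run(self, ctx: StepContext) -> StepContext:
        threshold = ctx.params.get("threshold", 0.5)
        ctx.artifacts["alerts"] = [e for e in ctx.artifacts["events"] if e["score"] >= threshold]
        return ctx


def run_pipeline(steps: List[PipelineStep], ctx: StepContext) -> StepContext:
    """Execute steps in order; an orchestrator would map each step to a task."""
    for step in steps:
        ctx = step.run(ctx)
    return ctx


if __name__ == "__main__":
    result = run_pipeline([LoadEvents(), ScoreEvents()], StepContext(params={"threshold": 0.5}))
    print(result.artifacts["alerts"])
```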
Posted 2 months ago
0 years
0 Lacs
India
On-site
Job Summary: We are seeking an innovative AI Engineer to develop and integrate AI assistants and automation with auto-trading systems. The ideal candidate has strong expertise in Generative AI, LLMs, NLP, and conversational AI. Responsibilities include designing NLP-powered agents, building AI-driven trading automation, fine-tuning models, creating trade-execution scripts, ensuring system reliability, and collaborating with backend and quantitative teams for seamless trade execution. Key Accountabilities and Activities: Design and fine-tune generative AI models and LLMs for conversational agents and workflows. Develop and maintain intelligent financial chatbots with multi-turn, context-aware conversations. Optimize prompt engineering and dialogue management to improve output quality and safety. Build NLP pipelines for tasks like classification, entity extraction, and sentiment analysis. Create autonomous AI agents and microservices for automated trading workflows. Collaborate with quantitative analysts and backend teams to integrate AI signals with trade execution. Integrate AI features via APIs across various backend systems. Monitor, optimize, and implement fail-safes for AI models and trading automation. Perform rigorous testing and quality assurance, including rollback plans. Research and apply the latest AI and fintech innovations. Perform additional duties as assigned by management. Core Competencies Summary: Expertise in designing, fine-tuning, and deploying Large Language Models (e.g., GPT-3/4) and generative AI frameworks. Skilled in prompt engineering to optimize AI responses, especially for financial applications. Experience building conversational AI/chatbots with domain-specific knowledge. Proficient in NLP techniques: classification, entity extraction, sentiment analysis, intent detection. Familiar with AI agent frameworks and orchestration tools (LangChain, Rasa). Strong backend and API integration skills (REST, GraphQL) across Python, Node.js, Java. Experience with cloud AI platforms (OpenAI, Anthropic, AWS, Google Vertex AI). Knowledge of vector databases and retrieval-augmented generation (Pinecone, FAISS). Skilled in monitoring, performance optimization, and AI model testing/QA. Research-driven, staying updated on AI/NLP trends and innovations. Excellent communication and collaboration abilities. Advanced proficiency in English. Job Specifications: Industry/Domain: AI & Machine Learning in Capital Markets and Trading Automation Experience: Multilingual/low-resource NLP, DevOps tools, AI deployment, ethical AI, prompt security, data privacy Education: Bachelor’s or Master’s in Computer Science, AI, Machine Learning, Data Science, or related field
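For the retrieval-augmented generation work mentioned above, a common pattern is to embed reference documents, index them in a vector store such as FAISS, retrieve the nearest passages for a user query, and assemble them into a grounded prompt. The sketch below is illustrative only: the toy embed function stands in for a real embedding model, and the final LLM call is left as a comment rather than tied to any specific provider's API.

```python
# Minimal RAG sketch: toy embeddings + FAISS retrieval + prompt assembly.
# Assumptions: embed() is a stand-in for a real embedding model; the LLM call is a placeholder.
import numpy as np
import faiss

DIM = 64
rng = np.random.default_rng(0)
_projection = rng.normal(size=(256, DIM)).astype("float32")


def embed(text: str) -> np.ndarray:
    """Toy embedding: byte histogram projected to DIM dims (replace with a real model)."""
    counts = np.bincount(np.frombuffer(text.encode("utf-8"), dtype=np.uint8), minlength=256)
    vec = counts.astype("float32") @ _projection
    return (vec / (np.linalg.norm(vec) + 1e-9)).astype("float32")


documents = [
    "RSI above 70 is commonly treated as an overbought signal.",
    "A stop-loss order closes a position once price crosses a set level.",
    "Slippage is the gap between expected and executed trade price.",
]

index = faiss.IndexFlatL2(DIM)                      # exact L2 search over document vectors
index.add(np.stack([embed(d) for d in documents]))  # build the vector store

query = "What does an RSI over 70 usually indicate?"
_, ids = index.search(embed(query)[None, :], k=2)   # retrieve the 2 nearest passages
context = "\n".join(documents[i] for i in ids[0])

prompt = (
    "Answer using only the context below.\n"
    f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
)
print(prompt)  # in production this prompt would be sent to the chosen LLM API
```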
Posted 2 months ago
7.0 - 12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary Experience: 7-12 Years Location: Bangalore/Pune Job Description 7-12 years’ experience in implementing IT Operations Management, Discovery, Monitoring, and AIOps solutions. Experience implementing BMC Helix ITOM/ITSM for at least one customer. Design, engineering, and development experience on AIOps and analytics solutions. Strong understanding of and experience with machine learning algorithms for various AIOps use cases such as classification, clustering, and anomaly detection. Understanding of the ITOM strategic approach to generative AI, and knowledge of configuring and obtaining the appropriate license to use GenAI capabilities. Working experience with one or more hyperscaler AI/GenAI offerings such as Azure OpenAI, Google Vertex AI, or Llama. Understanding of GenAI implementations: foundation models, fine-tuning, RAG, and prompt engineering. Working knowledge of developing ML models based on a wide variety of ML algorithms, including classifiers (linear, logistic regression, decision tree, Naïve Bayes, support vector machines, KNN, random forest), clustering/outlier algorithms (K-means, mean-shift, DBSCAN, GMM, hierarchical clustering, etc.), and neural networks (ANN, RNN, etc.). Development experience with GenAI/ML/NLP for IT Operations use cases. Strong prior experience in the development and/or implementation of ITOM, ITSM, and APM tools such as OpenText OpsBridge, IBM Netcool, ScienceLogic, ServiceNow, BMC Remedy, OTRS, CA APM, AppDynamics, Dynatrace, ServiceNow Orchestrator, BMC Atrium Orchestrator, and OpenText Operations Orchestrator. Strong experience and background in IT Operations Management and IT Service Management processes. Strong understanding of IT Operations Automation concepts and the ability to leverage GenAI techniques to enable it. Strong understanding of cloud and infrastructure technologies such as virtualization, hyperconverged infrastructure, as-a-service models, software-defined networks, servers, and storage.
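As a concrete flavor of the AIOps use cases listed above, anomaly detection over operational metrics is often prototyped with an isolation forest before being wired into an ITOM platform. The sketch below uses scikit-learn on synthetic event-rate and latency samples; it is a generic illustration, not a BMC Helix or vendor-specific integration.

```python
# Minimal AIOps-style anomaly detection sketch using scikit-learn's IsolationForest.
# The metric samples are synthetic; a real pipeline would read them from a monitoring store.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Columns: events per minute, average latency in ms (normal operating range).
normal = np.column_stack([
    rng.normal(1000, 50, size=500),
    rng.normal(120, 10, size=500),
])
# A few incident-like samples: traffic spike with degraded latency.
incidents = np.array([[2500, 400], [2600, 450], [300, 900]])
samples = np.vstack([normal, incidents])

model = IsolationForest(contamination=0.01, random_state=0).fit(samples)
labels = model.predict(samples)         # -1 = anomaly, 1 = normal
scores = model.score_samples(samples)   # lower scores are more anomalous

for idx in np.where(labels == -1)[0]:
    events, latency = samples[idx]
    print(f"anomaly: {events:.0f} events/min, {latency:.0f} ms latency, score={scores[idx]:.3f}")
```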
Posted 2 months ago
5.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary Job Description 5-7 years’ experience in implementing IT Operations Management, Discovery, Monitoring, and AIOps solutions. Experience implementing BMC Helix ITOM/ITSM for at least one customer. Design, engineering, and development experience on AIOps and analytics solutions. Strong understanding of and experience with machine learning algorithms for various AIOps use cases such as classification, clustering, and anomaly detection. Understanding of the ITOM strategic approach to generative AI, and knowledge of configuring and obtaining the appropriate license to use GenAI capabilities. Working experience with one or more hyperscaler AI/GenAI offerings such as Azure OpenAI, Google Vertex AI, or Llama. Understanding of GenAI implementations: foundation models, fine-tuning, RAG, and prompt engineering. Working knowledge of developing ML models based on a wide variety of ML algorithms, including classifiers (linear, logistic regression, decision tree, Naïve Bayes, support vector machines, KNN, random forest), clustering/outlier algorithms (K-means, mean-shift, DBSCAN, GMM, hierarchical clustering, etc.), and neural networks (ANN, RNN, etc.). Development experience with GenAI/ML/NLP for IT Operations use cases. Strong prior experience in the development and/or implementation of ITOM, ITSM, and APM tools such as OpenText OpsBridge, IBM Netcool, ScienceLogic, ServiceNow, BMC Remedy, OTRS, CA APM, AppDynamics, Dynatrace, ServiceNow Orchestrator, BMC Atrium Orchestrator, and OpenText Operations Orchestrator. Strong experience and background in IT Operations Management and IT Service Management processes. Strong understanding of IT Operations Automation concepts and the ability to leverage GenAI techniques to enable it. Strong understanding of cloud and infrastructure technologies such as virtualization, hyperconverged infrastructure, as-a-service models, software-defined networks, servers, and storage.
Posted 2 months ago
10.0 - 13.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary DT-US Product Engineering - Engineering Manager We are seeking an exceptional Engineering Manager who combines strong technical leadership with a proven track record of delivering customer-centric solutions. This role requires demonstrated experience in leading engineering teams, fostering engineering excellence, and driving outcomes through incremental and iterative delivery approaches . Work you will do The Engineering Manager will be responsible for leading engineering teams to deliver high-quality solutions while ensuring proper planning, code integrity, and alignment with customer goals. This role requires extensive experience in modern software engineering practices and methodologies, with a focus on customer outcomes and business impact. Project Leadership and Management: Lead engineering teams to deliver solutions that solve complex problems with valuable, viable, feasible, and maintainable outcomes Establish and maintain coding standards, quality metrics, and technical debt management processes Design and implement evolutionary release plans including alpha, beta, and MVP stages Strategic Development: Be the technical advocate for engineering teams throughout the end-to-end lifecycle of product development Drive engineering process improvements and innovation initiatives Develop and implement strategies for continuous technical debt management Team Mentoring and Development: Lead and mentor engineering teams, fostering a culture of engineering excellence and continuous learning Actively contribute to team velocity through hands-on involvement in design, configuration, and coding Establish performance metrics and career development pathways for team members Drive knowledge sharing initiatives and best practices across the organization Provide technical guidance and code reviews to ensure high-quality deliverables Customer Engagement and Delivery: Lead customer engagement initiatives before, during, and after delivery Drive rapid, inexpensive experimentation to arrive at optimal solutions Implement incremental and iterative delivery approaches to navigate complexity Foster high levels of customer engagement throughout the development lifecycle Technical Implementation: Ensure proper implementation of DevSecOps practices and CI/CD pipelines Oversee deployment techniques including Blue-Green and Canary deployments Drive the adoption of modern software engineering practices and methodologies Maintain oversight of architecture designs and non-functional requirements Technical Expertise Requirements: Must Have: Modern Software Engineering: Advanced knowledge of Agile methodologies, DevSecOps, and CI/CD practices Technical Leadership: Proven experience in leading engineering teams and maintaining code quality Customer-Centric Development: Experience in delivering solutions through experimentation and iteration Architecture & Design: Strong understanding of software architecture principles and patterns Quality Assurance: Experience with code review processes and quality metrics Cloud Platforms: Strong experience with at least one major cloud platform (AWS/Azure/GCP) and their ML services (SageMaker/Azure ML/Vertex AI) Version Control & Collaboration: Strong proficiency with Git and collaborative development practices Deployment & Operations: Experience with modern deployment techniques and operational excellence AI/ML Engineering: Experience with machine learning frameworks (TensorFlow, PyTorch), MLOps practices, and AI model deployment Data Processing: Knowledge 
of data processing tools and pipelines for AI/ML applications Domain-Specific Knowledge and experience: Custom, Mobile, Data & Analytics, RPA, or Packages Good to Have: Cloud Platforms: Experience with major cloud providers and their services Package Implementations: Experience with enterprise software package configurations Test Automation: Knowledge of automated testing frameworks and practices Container Technologies: Experience with Docker and Kubernetes Infrastructure as Code: Knowledge of infrastructure automation tools Advanced AI/ML: Experience with large language models, deep learning architectures, and AI model optimization AI Platforms: Familiarity with enterprise AI platforms like Databricks, SageMaker, or Azure ML Education: Advanced degree in Computer Science, Software Engineering, or related field, or equivalent experience. Qualifications: 10-13 years of software engineering experience with at least 5 years in technical leadership roles Proven track record of leading and delivering large-scale software projects Strong experience in modern software development methodologies and practices Demonstrated ability to drive engineering excellence and team performance Experience in stakeholder management and cross-functional collaboration Expert-level proficiency in software development and technical leadership Strong track record of implementing engineering best practices and quality standards Excellent oral and written communication skills, including presentation abilities. The Team Information Technology Services (ITS) helps power Deloitte’s success. ITS drives Deloitte, which serves many of the world’s largest, most respected organizations. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. The ~3,000 professionals in ITS deliver services including: Security, risk & compliance Technology support Infrastructure Applications Relationship management Strategy Deployment PMO Financials Communications Product Engineering (PxE) Product Engineering (PxE) team is the internal software and applications development team responsible for delivering leading-edge technologies to Deloitte professionals. Their broad portfolio includes web and mobile productivity tools that empower our people to log expenses, enter timesheets, book travel and more, anywhere, anytime. PxE enables our client service professionals through a comprehensive suite of applications across the business lines. In addition to application delivery, PxE offers full-scale design services, a robust mobile portfolio, cutting-edge analytics, and innovative custom development. Work Location: Hyderabad Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. 
This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 303079 Show more Show less
Posted 2 months ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence. Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready. Key Responsibilities: Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments. Requirements: 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools. Scripting skills in Python, Bash, or similar for automation tasks. Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility. Preferred Qualifications: Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus. What We Offer: Work on cutting-edge AI platforms and infrastructure Cross-functional collaboration with top ML, research, and product teams Competitive compensation package – no constraints for the right candidate
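To ground the responsibility of deploying and monitoring ML models with tools like MLflow, the sketch below logs parameters, a metric, and a model artifact for a placeholder experiment. The experiment name, model, and data are assumptions, and a reachable MLflow tracking backend is taken as given.

```python
# Minimal MLflow tracking sketch: log params, metrics, and a model artifact.
# Experiment name, data, and model are placeholders; assumes a reachable MLflow tracking backend.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("demo-churn-model")  # illustrative experiment name

with mlflow.start_run():
    params = {"C": 0.5, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                  # hyperparameters
    mlflow.log_metric("accuracy", accuracy)    # evaluation metric
    mlflow.sklearn.log_model(model, "model")   # model artifact for later serving
    print(f"logged run with accuracy={accuracy:.3f}")
```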
Posted 2 months ago
5.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
The Data Scientist/AI-ML Engineer Senior Analyst plays a pivotal role in exploring, developing, and integrating Machine Learning (ML) and Artificial Intelligence (AI) capabilities into Internal Audit applications. This role requires a strong technical foundation and hands-on expertise with AI/ML concepts, agile methodologies, cloud computing, and AI agents. A critical component of this role is leveraging AI and Large Language Model (LLM) techniques, such as Retrieval-Augmented Generation (RAG), to build advanced models and applications. The successful candidate will have experience designing and refining prompts to optimize generative AI model outputs for technical applications, ensuring high-quality, contextually relevant content. Alongside technical expertise, a creative, forward-thinking, and inquisitive mindset is key to thriving in this position. Key Responsibilities: Design, test, deploy, and maintain ML models using cloud-based solutions and various ML platforms. Enhance existing AI/ML capabilities through exploration, testing, and implementation with tools like Azure and Google Vertex AI. Develop complex LLM applications by designing, testing, and refining prompts to achieve clear, accurate, and contextually relevant outputs. Collaborate with stakeholders to develop and deploy AI/ML use cases. Communicate project progress, timelines, and milestones effectively to diverse audiences. Use strong problem-solving skills to overcome challenges and meet project goals. Key Qualifications and Competences: 5-8 years of experience in Python coding, AI/ML model development, and deployment. Demonstrable expertise in generative AI models, including prompt engineering, performance optimization, and understanding of model limitations. Experience with advanced analytics, AI, ML, NLP, NLU, and LLM capabilities. Extensive experience with API integration and testing. Strong statistical foundation with expertise in supervised, unsupervised, semi-supervised, and reinforcement learning models, particularly for NLP/NLU applications. Strong interpersonal skills and cultural sensitivity to collaborate with diverse stakeholders. Detail-oriented and diligent in ensuring the accuracy and completeness of work. Self-driven, proactive, and capable of thriving in a fast-paced, results-oriented environment. Innovative and adaptable, with a focus on continuous improvement. Familiarity with the financial services industry and/or auditing processes is a plus. Positive, solution-oriented mindset with strong accountability for project outcomes. Education: Bachelor's degree in data science, Computer Science, MIS, or a related discipline. CISA, PMP, Agile/Scrum, Cloud Computing, or related certifications are a plus ------------------------------------------------------ Job Family Group: Decision Management ------------------------------------------------------ Job Family: Specialized Analytics (Data Science/Computational Statistics) ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. 
View Citi’s EEO Policy Statement and the Know Your Rights poster.
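For the prompt design and refinement work this role describes, teams often keep prompts as versioned, parameterized templates rather than ad-hoc strings so that outputs stay consistent and auditable. The sketch below is a generic illustration: the template wording, the few-shot example, and the call_llm placeholder are assumptions, not a Citi-specific or vendor-specific implementation.

```python
# Minimal prompt-template sketch for an audit-style summarization task.
# The wording, few-shot example, and call_llm placeholder are illustrative assumptions.
from dataclasses import dataclass
from string import Template


@dataclass(frozen=True)
class PromptTemplate:
    version: str
    template: Template

    def render(self, **kwargs: str) -> str:
        return self.template.substitute(**kwargs)


AUDIT_SUMMARY_V2 = PromptTemplate(
    version="audit-summary-v2",
    template=Template(
        "You are assisting an internal audit review.\n"
        "Summarize the finding in at most 3 bullet points, citing only the provided text.\n"
        "If information is missing, say 'not stated' instead of guessing.\n\n"
        "Example finding: 'Access reviews for system X were not completed in Q2.'\n"
        "Example summary: '- Q2 access reviews for system X were missed.'\n\n"
        "Finding: $finding\nSummary:"
    ),
)


def call_llm(prompt: str) -> str:
    """Placeholder for whichever LLM endpoint is in use (e.g. Azure or Vertex AI)."""
    return "- (model output would appear here)"


if __name__ == "__main__":
    prompt = AUDIT_SUMMARY_V2.render(finding="Two vendor contracts lacked documented risk assessments.")
    print(prompt)
    print(call_llm(prompt))
```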
Posted 2 months ago
3.0 years
0 Lacs
Gurgaon
Remote
Who we are We are Fluxon, a product development team founded by ex-Googlers and startup founders. We offer full-cycle software development: from ideation and design to build and go-to-market. We partner with visionary companies, ranging from fast-growing startups to tech leaders like Google and Stripe, to turn bold ideas into products with the power to transform the world. The role is open to candidates based in Gurgaon, India. About the role As a Senior Software Engineer at Fluxon, you'll have the opportunity to bring products to market while learning, contributing, and growing with our team. You'll be responsible for: Driving end-to-end implementations all the way to the user, collaborating with your team to build and iterate in a dynamic environment Engaging directly with clients to understand business goals, give demos, and debug production issues Informing product requirements, identifying appropriate technical designs in partnership with our Product and Design teams Proactively communicating progress and challenges in your work and seeking help when you need it Performing code reviews and cross-feature validations Providing mentorship in your areas of expertise You'll work with a diversity of technologies, including: Languages TypeScript/JavaScript, Java, .Net, Python, Golang, Rust, Ruby on Rails, Kotlin, Swift Frameworks Next.js, React, Angular, Spring, Expo, FastAPI, Django, SwiftUI Cloud Service Providers Google Cloud Platform, Amazon Web Services, Microsoft Azure Cloud Services Compute Engine, AWS Amplify, Fargate, Cloud Run Apache Kafka, SQS, GCP CMS S3, GCS Technologies AI/ML, LLMs, Crypto, SPA, Mobile apps, Architecture redesign Google Gemini, OpenAI ChatGPT, Vertex AI, Anthropic Claude, Huggingface Databases Firestore(Firebase), PostgreSQL, MariaDB, BigQuery, Supabase Redis, Memcache Qualifications 3+years of industry experience in software development Experienced with the full product lifecycle, including CI/CD, testing, release management, deployment, monitoring and incident response Fluent in software design patterns, scalable system architectures, tooling, fundamentals of data structures and algorithms What we offer Exposure to high-profile SV startups and enterprise companies Competitive salary Fully remote work with flexible hours Flexible paid time off Profit-sharing program Healthcare Parental leave, including adoption and fostering. Gym membership and tuition reimbursement. Hands-on career development.
Posted 2 months ago
7.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Who We Are: We are a Digital Customer Experience organization, with comprehensive coverage of IT Services from Traditional Services to Next Gen Digital Services. At TELUS International, we focus on lean, agile, human-centered design. We have been in the technology business since 2002, with HQs in California, USA. TELUS International also invests in R&D where innovators, researchers and visionaries collaborate to explore emerging customer experience tech to disrupt the future. We are about 70,000 employees working across 35 delivery centers in Asia, Europe, North America, and nearshore in Central America & Canada. We are focused on enabling Digital Transformation for our customers by driving Innovation & Automation through self-service options like AI Bots, Robotic Process Automation, etc., for hyper-personalized, secure, on-demand, and elastic solutions. Our workforce is connected to drive customer experience in the Media & Communications, Travel & Hospitality, eCommerce, Technology, Fintech & Financial Services, and Healthcare domains. How We Help You Grow: Our development programs are designed to promote technical growth and enhance leadership and relationship skills across individuals. To stimulate your career growth, we offer a vast array of in-house training programs (listed further below). Job Title: Sr. GCP Data Engineer (SAS to GCP Migration) Work Mode: Hybrid/Remote Years of Exp: 7-10 years Shift Time: 3 PM-12 AM Notice: Immediate to 15 days; immediate joiners preferred. As a Sr. Data Engineer with a focus on pipeline migration from SAS to Google Cloud Platform (GCP) technologies, you will tackle intricate problems and create value for our business by designing and deploying reliable, scalable solutions tailored to the company’s data landscape. You will be responsible for the development of custom-built data pipelines on the GCP stack, ensuring seamless migration of existing SAS pipelines. Responsibilities: Design, develop, and implement data pipelines on the GCP stack, with a focus on migrating existing pipelines from SAS to GCP technologies. Develop modular and reusable code to support complex ingestion frameworks, simplifying the process of loading data into data lakes or data warehouses from multiple sources. Collaborate with analysts and business process owners to translate business requirements into technical solutions. Utilize your coding expertise in scripting languages (Python, SQL, PySpark) to extract, manipulate, and process data effectively. Leverage your expertise in various GCP technologies, including BigQuery, Dataproc, GCP Workflows, Dataflow, Cloud Scheduler, Secret Manager, Batch, Cloud Logging, Cloud SDK, Google Cloud Storage, IAM, and Vertex AI, to enhance data warehousing solutions. Maintain high standards of development practices, including technical design, solution development, systems configuration, testing, documentation, issue identification, and resolution, writing clean, modular, and sustainable code. Understand and implement CI/CD processes using tools like Pulumi, GitHub, Cloud Build, Cloud SDK, and Docker. Participate in data quality and validation processes to ensure data integrity and reliability. Optimize performance of data pipelines and storage solutions, addressing bottlenecks. Collaborate with security teams to ensure compliance with industry standards for data security and governance. Communicate technical solutions to engineering teams and business stakeholders.
Required Skills & Qualifications: 7-10 years of experience in software development, data engineering, business intelligence, or a related field, with a proven track record in manipulating, processing, and extracting value from large datasets. Extensive experience with GCP technologies in the data warehousing space, including BigQuery, Dataproc, GCP Workflows, Dataflow, Cloud Scheduler, Secret Manager, Batch, Cloud Logging, Cloud SDK, Google Cloud Storage, IAM, and Vertex AI. Proficient in Python, SQL, and PySpark for data manipulation and pipeline creation. Experience with SAS, SQL Server, and SSIS is a significant advantage, particularly for transitioning legacy systems to modern GCP solutions. Ability to develop reusable, modular code for complex ingestion frameworks and multi-use pipelines. Understanding of CI/CD processes and tools, such as Pulumi, GitHub, Cloud Build, Cloud SDK, and Docker. Proven experience in migrating data pipelines from SAS to GCP technologies. Strong problem-solving abilities and a proactive approach to identifying and implementing solutions. Familiarity with industry best practices for data security, data governance, and compliance in cloud environments. Bachelor's degree in Computer Science, Information Technology, or a related technical field, or equivalent practical experience. GCP Certified Data Engineer (preferred). Excellent verbal and written communication skills, with the ability to advocate for technical solutions to a diverse audience including engineering teams, and business stakeholders. Willingness to work in the afternoon shift from 3 PM to 12 AM IST. How will this opportunity be a catalyst in your career graph? To stimulate your career growth, a vast array of in-house training programs which are listed below, but not limited to:- Trending technical skills Business domain & customer interaction Behavioral & effective communication Transparent work culture to lift your ideas & initiatives at enterprise level & investment to execute successfully. Equal Opportunity Employer: At TELUS International, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace. All aspects of employment, including the decision to hire and promote, are based on applicants’ qualifications, merits, competence, and performance without regard to any characteristic related to diversity. Show more Show less
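As a flavor of the pipeline work involved in a SAS-to-GCP migration, the sketch below loads a CSV extract (standing in for data exported from a legacy SAS job) into BigQuery with the google-cloud-bigquery client and runs a simple validation query. The project, dataset, table, and bucket names are placeholders, and credentials and IAM setup are assumed to be in place.

```python
# Minimal BigQuery ingestion sketch for a SAS-to-GCP style migration step.
# Project, dataset, table, and GCS URI are placeholders; assumes credentials and IAM are configured.
from google.cloud import bigquery

PROJECT = "my-project"                          # placeholder project ID
TABLE_ID = f"{PROJECT}.analytics.customer_txn"  # placeholder dataset.table
SOURCE_URI = "gs://my-bucket/exports/customer_txn.csv"  # CSV exported from the legacy SAS job

client = bigquery.Client(project=PROJECT)

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,          # header row
    autodetect=True,              # infer schema; a real migration would pin an explicit schema
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(SOURCE_URI, TABLE_ID, job_config=job_config)
load_job.result()  # block until the load finishes
print(f"loaded {client.get_table(TABLE_ID).num_rows} rows into {TABLE_ID}")

# Simple data-quality check: row count and null rate on a key column.
query = f"""
    SELECT COUNT(*) AS rows, COUNTIF(customer_id IS NULL) AS null_ids
    FROM `{TABLE_ID}`
"""
for row in client.query(query).result():
    print(f"rows={row.rows}, null_ids={row.null_ids}")
```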
Posted 2 months ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree in Electrical Engineering or equivalent practical experience. 4 years of experience with verification methodology such as Universal verification methodology (UVM). 2 years of experience in the verification of IP designs such as IP, SoC, vector CPUs, etc. Experience with SystemVerilog, SVA, and functional coverage. Preferred qualifications: Master's degree in Electrical Engineering or a related field. Experience with industry-standard simulators, revision control systems, and regression systems. Experience with the full verification life cycle. Experience in Artificial Intelligence/Machine Learning (AI/ML) Accelerators or vector processing units. Excellent problem solving and communication skills. About The Job In this role, you’ll work to shape the future of AI/ML hardware acceleration. You will have an opportunity to drive cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You’ll be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's TPU. You'll contribute to the innovation behind products loved by millions worldwide, and leverage your design and verification expertise to verify complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems. In this role, you will own the full verification life cycle from verification planning and test execution to coverage closure, with an emphasis on meeting stringent AI/ML performance and accuracy goals, build constrained-random verification environments capable of exposing corner-case bugs and ensuring the reliability of Artificial Intelligence/Machine Learning (AI/ML) workloads on Tensor Processing Unit (TPU) hardware. You will collaborate closely with design and verification engineers in active projects and perform verification. The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers. Responsibilities Plan the verification of digital design blocks and interact with design engineers to identify important verification scenarios. Identify and write all types of coverage measures for stimulus and corner-cases. Debug tests with design engineers to deliver functionally correct design blocks. Measure to identify verification holes and to show progress towards tape-out. Create a constrained-random verification environment using SystemVerilog and Universal verification methodology (UVM). Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. 
See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 2 months ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree in Electrical Engineering, Computer Engineering, Computer Science, a related field, or equivalent practical experience. 8 years of experience in Application-Specific Integrated Circuit (ASIC) development, with power optimization. Experience with low power schemes, power roll up, and power estimations. Experience in ASIC design verification, synthesis, timing analysis. Preferred qualifications: Experience with coding languages (e.g., Python or Perl). Experience in System on a Chip (SoC) designs and integration flows. Experience with power optimization and power modeling tools. Knowledge of high performance and low power design techniques. About The Job In this role, you’ll work to shape the future of AI/ML hardware acceleration. You will have an opportunity to drive cutting-edge TPU (Tensor Processing Unit) technology that powers Google's most demanding AI/ML applications. You’ll be part of a team that pushes boundaries, developing custom silicon solutions that power the future of Google's TPU. You'll contribute to the innovation behind products loved by millions worldwide, and leverage your design and verification expertise to verify complex digital designs, with a specific focus on TPU architecture and its integration within AI/ML-driven systems. In this role, you will be part of a team developing Application-specific integrated circuits (ASICs) used to accelerate machine learning computation in data centers. You will collaborate with members of architecture, verification, power and performance, physical design, etc. to specify and deliver quality designs for next generation data center accelerators. You will solve problems with micro-architecture and practical reasoning solutions, and evaluate design options with performance, power and area in mind. The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers. Responsibilities Participate in defining power management schemes and low power modes. Create power specifications and Unified Power Format (UPF) definition for System on a Chip (SoC) and subsystems. Estimate and track through all phases of the project. Run power optimization tools, suggest ways to improve power and drive convergence. Work with cross-functional teams for handoff of power intent and power projections. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 2 months ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree in Computer Science, Management Information Systems, or a related technical field, or equivalent practical experience. 8 years of experience in business application development or implementation. Experience in implementing, customizing or integrating third-party applications within business enterprise software. Preferred qualifications: Experience with programming languages Java and web services technology (e.g., REST, SOAP, etc.). Experience in Google Cloud Platform (GCP) or other public cloud platforms. Experience in multiple full system implementation life cycles (e.g., analyze, design, build, test, implement, support). Experience in Python programming language, with the knowledge of object-oriented principles. Knowledge of systems related to Supply Chain and Enterprise Resource Planning (ERP). About the job At Google, we work at lightning speed. So when things get in the way of progress, the Business Systems Integration team steps in to remove those roadblocks. The team identifies time-consuming internal processes and then builds solutions that are reliable and scalable enough to work within the size and scope of the company. You listen to and translate Googler needs into high-level technical specifications, design and develop recommended systems and consult with Google executives to ensure smooth implementation. Whether battling large system processes or leveraging our homegrown suite of Google products for Googlers themselves, you help Googlers work faster and more efficiently. The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers. Responsibilities Partner with internal teams to define and implement solutions that improve internal business processes. Identify, assess, estimate, and solve business issues, where analysis of situations/scenarios or data requires an evaluation of variable factors including security. Develop, build, and deploy applications using various platforms and technologies. Work with analysts to translate business requirements into technical solutions. Maintain the development practices including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, writing modular, and self-sustaining code. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form . Show more Show less
Posted 2 months ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Bachelor's degree or equivalent practical experience. 8 years of experience with software development in one or more programming languages (e.g., Python, C, C++, Java, JavaScript). 3 years of experience in a technical leadership role; overseeing projects, with 2 years of experience in a people management, supervision/team leadership role. 3 years of experience with full stack development, across back-end such as Java, Python, GO, or C++ codebases, and front-end experience including JavaScript or TypeScript, HTML, CSS or equivalent. Preferred qualifications: Experience in capacity planning and management. Experience leading and growing medium-to-large engineering teams. Experience with object-oriented programming languages, distributed systems, and reliable systems/cloud services. Passionate about delivering impact in an enterprise-facing and customer-facing environment. About The Job Like Google's own ambitions, the work of a Software Engineer goes beyond just Search. Software Engineering Managers have not only the technical expertise to take on and provide technical leadership to major projects, but also manage a team of Engineers. You not only optimize your own code but make sure Engineers are able to optimize theirs. As a Software Engineering Manager you manage your project goals, contribute to product strategy and help develop your team. Teams work all across the company, in areas such as information retrieval, artificial intelligence, natural language processing, distributed computing, large-scale system design, networking, security, data compression, user interface design; the list goes on and is growing every day. Operating with scale and speed, our exceptional software engineers are just getting started -- and as a manager, you guide the way. With technical and leadership expertise, you manage engineers across multiple teams and locations, a large product budget and oversee the deployment of large-scale projects across multiple sites internationally. The ML, Systems, & Cloud AI (MSCA) organization at Google designs, implements, and manages the hardware, software, machine learning, and systems infrastructure for all Google services (Search, YouTube, etc.) and Google Cloud. Our end users are Googlers, Cloud customers and the billions of people who use Google services around the world. We prioritize security, efficiency, and reliability across everything we do - from developing our latest TPUs to running a global network, while driving towards shaping the future of hyperscale computing. Our global impact spans software and hardware, including Google Cloud’s Vertex AI, the leading AI platform for bringing Gemini models to enterprise customers. Responsibilities Provide technical leadership across teams of engineers focused on building a new platform. Define, iterate, and communicate team goals, strategy, and roadmap. Collaborate and navigate through organizational complexity to solve problems and drive cross-functional execution. Coach, mentor, and grow engineers on the team. Provide technical expertise throughout the product lifecycle including design, implementation, and delivery of services and infrastructure. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. 
We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 2 months ago