5.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking a highly skilled and motivated Lead DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting, insights, and recommendations platform designed to measure and optimize online marketing campaigns. The ideal candidate has a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning, and is comfortable working across data engineering, the ML model lifecycle, and cloud-native technologies: building scalable data pipelines, developing ML models, and deploying solutions in production.

Job Description:

Key Responsibilities:

Data Engineering & Pipeline Development
- Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data.
- Ensure high data quality, integrity, and governance using orchestration tools such as Apache Airflow, Google Cloud Composer, or Prefect.
- Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads.
- Implement data modeling and feature engineering for ML use cases.

Machine Learning Model Development & Validation
- Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization.
- Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations.
- Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis.
- Optimize models for scalability, efficiency, and interpretability.

MLOps & Model Deployment
- Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving.
- Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining.
- Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms.

Cloud & Infrastructure Optimization
- Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow.
- Work on containerized deployment (Docker, Kubernetes) for scalable model inference.
- Implement cost-efficient, serverless data solutions where applicable.

Business Impact & Cross-functional Collaboration
- Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives.
- Translate complex model insights into actionable business recommendations.
- Present findings and performance metrics to both technical and non-technical stakeholders.

Qualifications & Skills:

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
- Certification in Google Cloud (Professional Data Engineer, ML Engineer) is a plus.

Must-Have Skills:
- Experience: 5-10 years of relevant hands-on experience with the skill set below.
- Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
- Experience with Graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
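The feature-engineering responsibility described above amounts to aggregating raw campaign events into model-ready features. A minimal, dependency-free sketch of that transform step (field names are hypothetical; a production pipeline would run this logic in BigQuery or Dataflow rather than plain Python):

```python
from collections import defaultdict

def build_campaign_features(events):
    """Aggregate raw ad events into per-campaign features (CTR, CPC)."""
    agg = defaultdict(lambda: {"impressions": 0, "clicks": 0, "spend": 0.0})
    for e in events:
        row = agg[e["campaign_id"]]
        row["impressions"] += e.get("impressions", 0)
        row["clicks"] += e.get("clicks", 0)
        row["spend"] += e.get("spend", 0.0)
    # Derive features: click-through rate and cost per click
    features = {}
    for cid, row in agg.items():
        ctr = row["clicks"] / row["impressions"] if row["impressions"] else 0.0
        cpc = row["spend"] / row["clicks"] if row["clicks"] else 0.0
        features[cid] = {**row, "ctr": ctr, "cpc": cpc}
    return features

events = [
    {"campaign_id": "c1", "impressions": 1000, "clicks": 50, "spend": 25.0},
    {"campaign_id": "c1", "impressions": 500, "clicks": 10, "spend": 5.0},
    {"campaign_id": "c2", "impressions": 200, "clicks": 4, "spend": 2.0},
]
feats = build_campaign_features(events)
print(feats["c1"]["ctr"])  # 60 clicks / 1500 impressions = 0.04
```

The same shape (group, aggregate, derive) is what an orchestrated ELT task would execute on each batch or window of campaign data.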
Posted 1 week ago
5.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Title: US Payroll & Taxation Specialist
Location: Ludhiana/Mohali
Timing: 4pm to 1am
Website: www.dnagrowth.com

Key Responsibilities

US Payroll Management
- End-to-end processing of bi-weekly/monthly payrolls using platforms like Gusto, ADP, Paychex, QuickBooks Payroll, or similar.
- Ensure compliance with multi-state wage and hour laws.
- Prepare and file Forms 941, 940, W-2s, and other IRS/state forms.
- Set up new hires and manage terminations, including final paychecks, severance, and benefits-related compliance.
- Handle garnishments, deductions, and reimbursements.
- Coordinate with HR and finance teams to ensure accurate payroll inputs and employee classification.

US Taxation (Federal & State)
- Prepare and review federal and state tax filings for S-corps, C-corps, and LLCs.
- Support clients during tax season with 1099/1096 preparation and filing.
- Reconcile year-end financials with tax filings.
- Liaise with CPAs and external advisors for strategic tax planning.
- Assist clients with IRS/state correspondence and audits.
- Proficient in managing tax scrutiny, assessments, and reassessments.

Client Relationship & Compliance
- Act as the primary point of contact for US clients for all payroll and tax-related queries.
- Keep clients informed on updates to tax laws and payroll regulations.
- Ensure timely responses and resolutions to client concerns.

Requirements
- Minimum 5 years of hands-on experience with US payroll processing and tax filing.
- Strong understanding of IRS rules, multi-state payroll, and small business tax structures (LLC, S-Corp, C-Corp).
- Experience with Gusto, ADP, Paychex, QuickBooks Online, Xero, or similar tools.
- Familiarity with tools like Avalara, TaxJar, or Vertex is a plus.
- Excellent written and verbal communication skills.
- Strong analytical and documentation skills.
- Detail-oriented with the ability to handle multiple clients simultaneously.

Preferred Qualifications
- EA (Enrolled Agent) license preferred but not mandatory.
- CPA or MBA Finance is a plus.
- Prior experience working in a US outsourcing firm.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
FW: FORD Requirement - Order Number: 34189 - Budget: 27 L PA - Chennai - Contract

Position Title: Architect Consultant
Original Duration: 365 Days
Notice Period: Immediate Joiners Only / Serving NP up to 40 Days
Standard Shift: DAY

Position Description:
Provide technical leadership and architectural strategy for enterprise-scale data, analytics, and cloud initiatives. This role partners with business and product teams to design scalable, secure, and high-performing solutions that align with enterprise architecture standards and business goals. The Solution Architect will help GDIA teams with the architecture of new and existing applications using cloud architecture patterns and processes. Works with product teams to define, assemble, and integrate components based on Ford standards and business requirements. Supports the product team in the development of the technical design and documentation. Participates in proofs of concept and supports the product solution evaluation process. Provides architecture guidance and technical design leadership. Demonstrated ability to work on multiple projects simultaneously.

Skills Required: GCP, Cloud Architecture, API, Enterprise Architecture, Solution Architecture, CI-CD, Data/Analytics
Skills Preferred: BigQuery, Java, React, Python, LLM, Angular, GCS, GCP Cloud Run, Vertex, Tekton, Terraform, Problem Solving

Experience Required:
- Proficiency and direct hands-on experience in Google Cloud Platform architecture
- Strong understanding of enterprise integration patterns, security architecture, and DevOps practices
- Demonstrated ability to lead complex technical initiatives and influence stakeholders across business and IT

Education Required: Bachelor's Degree

Additional Information:
- Define and evolve architecture for new and existing applications using enterprise standards and cloud-native patterns
- Collaborate with cross-functional teams to translate business strategies into executable technical solutions
- Lead architecture reviews, proof-of-concept efforts, and solution evaluations for GCP, AI/ML, and analytics platforms
- Ensure integration of business, data, application, and technology architectures across portfolios
- Maintain architectural documentation and contribute to enterprise knowledge bases
- Provide consulting and mentorship to development teams, ensuring alignment with architectural principles

Skills: cloud, LLM, Python, React, Terraform, GCS, Angular, GCP Cloud Run, Java, cloud architecture, enterprise architecture, Vertex, data/analytics, solution architecture, BigQuery, CI-CD, GCP, architecture, problem solving, Tekton, API
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
FORD Requirement - Order Number: 33929 - 26 L PA - Chennai - Contract - Non-HackerRank
Notice Period: Immediate Joiners / Serving up to 30 days

Position Title: Specialty Development Consultant
Duration: 658 days
Interview Required: N
Estimated Regular Hours: 40
Estimated Overtime Hours: 0
Division: Global Data Insight & Analytics

Position Description:
- Train, build, and deploy ML and DL models
- Software development using Python
- Work with Tech Anchors, Product Managers, and the team internally and across other teams
- Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
- Software development using a TDD approach
- Experience using GCP products and services
- Ability to adapt quickly to open-source products and tools that integrate with ML platforms

Skills Required:
- 3+ years of experience in Python software development
- 3+ years of experience in cloud technologies and services, preferably GCP
- 3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc.
- 3+ years of experience with Python, SQL, and BigQuery
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building, and deploying ML and DL models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.)
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI

Skills Preferred: Good communication, presentation, and collaboration skills
Experience Required: 2 to 5 yrs
Experience Preferred: GCP products & services
Education Required: BE, BTech, MCA, M.Sc, ME

Additional Information: HackerRank test on Python, Cloud, and Machine Learning is a must.

Skills: ML, CI/CD, BigQuery, PyTorch, Python, SonarQube, Kubernetes, GCP Looker, scikit-learn, Tekton, GCS, cloud, Vertex AI, GCP, TensorFlow, DataRobot, OpenShift, SQL, cloud technologies, Airflow, statistical methods, Terraform
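Among the statistical methods listed above, k-means clustering is the easiest to show in miniature. A bare-bones 1-D sketch with naive initialization and a fixed iteration count (illustrative only; real work would use scikit-learn's `KMeans`):

```python
def kmeans_1d(points, k, iters=10):
    """Naive 1-D k-means: assign points to nearest centroid, recompute means."""
    centroids = points[:k]  # seed with the first k points (simplistic init)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Keep the old centroid if a cluster ends up empty
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(kmeans_1d(data, 2))  # centroids settle near the two obvious groups
```

The assign/recompute loop here is the whole algorithm; library versions add smarter initialization (k-means++), convergence checks, and vectorization.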
Posted 1 week ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a seasoned and experienced Multi Modal AI Senior Developer with deep expertise in leveraging Generative AI for creative and content generation at scale. The ideal candidate has a deep understanding of Multi Modal AI, the ability to leverage Gen-AI models for all creative output and content types, and the ability to work across all content and interaction modalities, including text, visuals, audio/speech, and video. A strong foundation in Large Language Models (LLMs) and Vision Language Models (VLMs) is also highly desirable.

As a Multi Modal AI Senior Developer, you will play a key role in building cutting-edge products and innovative solutions, combining the full power of Creative AI workflows, Generative AI, LLMs, and Agentic AI. Your primary focus will be on building bespoke products and creative workflows leveraging Gen-AI models to help build out our creative product portfolio for some of our largest, most strategic enterprise product solutions.

The candidate should have a good technical background in product development, cloud-native app dev, and front-end and back-end web application development, with the ability to build these solutions in cloud environments such as Azure and AWS, integrating with the appropriate multi modal AI services. The candidate will also need strong expertise with cloud AI services such as Azure OpenAI, AWS Bedrock, and Google Gemini, and the foundation models hosted within those services. Additionally, hands-on experience with a variety of foundation models, Vision Language Models (VLMs), and Creative AI services and APIs, and the ability to seamlessly integrate all of these together in an automated workflow using APIs and AI Assistants, will be an essential skill set.

Job Description:

Key Skills Required:
- Generative AI, Multi Modal AI
- Creative AI solutions and workflows across all creative content types, including copy/text, imagery, key visuals, characters, avatars, audio, speech, and video AI
- Creative AI automation workflows with content creation and content editing at scale, using AI services and AI APIs
- Experience with multiple Multi Modal AI foundation models
- LLMs, LLM app dev
- AI Agents, Agentic AI workflows

Responsibilities:
- Design and build web apps and solutions that leverage Creative AI services, Multi Modal AI models, and Generative AI workflows
- Leverage multi modal AI capabilities supporting all content types and modalities, including text, imagery, audio, speech, and video
- Build creative automation workflows that help produce creative concepts, creative production deliverables, and integrated creative outputs, leveraging AI and Gen-AI models
- Integrate AI image generation and image editing models from key technology partners
- Integrate text/copy generation models from key LLM providers
- Integrate speech/audio generation and editing models for use cases such as transcription, translation, and AI-generated audio narration
- Integrate AI-enabled video generation and video editing models
- Fine-tune Multi Modal AI models for brand-specific usage and branded content generation
- Constantly research and explore emerging trends and techniques in the field of generative AI and LLMs to stay at the forefront of innovation
- Drive product development and delivery within tight timelines
- Collaborate with full-stack developers, engineers, and quality engineers to develop and integrate solutions into existing enterprise products
- Collaborate with technology leaders and cross-functional teams to develop and validate client requirements and rapidly translate them into working solutions
- Develop, implement, and optimize scalable AI-enabled products
- Integrate Gen-AI and Multi Modal AI solutions into cloud platforms, cloud-native apps, and custom web apps
- Execute implementation across all layers of the application stack, including front-end, back-end, APIs, data, and AI services
- Build enterprise products and full-stack applications on the MERN + Python stack, with a clear separation of concerns across layers

Skills and Competencies:
- Deep hands-on experience with multi modal AI models and tools
- Hands-on experience with API integration with AI services

Multi Modal AI competencies:
- Hands-on experience with intelligent document processing, document indexing, and document content extraction and querying, using multi modal AI models
- Hands-on experience using multi modal AI models and solutions for imagery and visual creative, including text-to-image, image-to-image, image composition, image variations, etc.
- Hands-on experience with popular AI image composition and editing models from providers such as Adobe Firefly, Getty Images, Shutterstock, Flux and Flux Pro, and Stable Diffusion, and the ability to integrate them programmatically over API calls and workflows
- Hands-on experience with computer vision and image processing using multi-modal AI, for use cases such as object detection, automated captioning, automated masking, and image segmentation, all done programmatically over API calls and workflows
- Hands-on experience using multi modal AI for speech, including text to speech, speech to text, and use of pre-built vs. custom voices
- Hands-on experience building voice-enabled and voice-activated experiences, using Speech AI and Voice AI solutions
- Hands-on experience with AI character and AI avatar development, using a variety of different tools and platforms
- Fine-tuning creative AI content models for custom styles, custom characters, and custom brand-specific imagery
- Fine-tuning speech models for custom voices
- Good understanding of advanced fine-tuning techniques such as LoRA
- Ability to execute and run fine-tuning workflows end-to-end, in particular for image generation and image editing models
- Hands-on experience leveraging APIs and building workflows that orchestrate across Multi Modal AI models
- Good experience using AI Assistants to drive natural language interactions and orchestration with Multi Modal AI models
- Good experience with AI Agents and Agentic AI workflows to drive dynamic orchestration across Multi Modal AI services and models

Programming Skills:
- Good expertise in the MERN stack (JavaScript), including client-side and server-side JavaScript
- Good expertise in Python-based development, including Python app dev for Multi Modal AI integration
- Well-rounded in both programming languages
- Strong experience with client-side JavaScript apps and building both static and dynamic web apps in JavaScript
- Hands-on experience in front-end and back-end development
- Minimum 2+ years of hands-on experience working with full-stack MERN apps, using both client-side and server-side JavaScript
- Minimum 2 years of hands-on experience in Python development
- Minimum 2 years of hands-on experience working with LLMs and LLM models, using Python

LLM Dev Skills:
- Solid hands-on experience building end-to-end RAG pipelines and custom AI indexing solutions to ground LLMs and enhance LLM output
- Good experience building AI- and LLM-enabled workflows
- Hands-on experience integrating LLMs with external tools such as web search
- Ability to leverage advanced concepts such as tool calling and function calling with LLM models
- Hands-on experience with conversational AI solutions and chat-driven experiences
- Experience with multiple LLMs and models, primarily GPT-4o, GPT o1, and o3-mini, and preferably also Gemini, Claude Sonnet, etc.
- Experience and expertise in cloud Gen-AI platforms, services, and APIs, primarily Azure OpenAI, and preferably also AWS Bedrock and/or GCP Vertex AI
- Hands-on experience with Assistants and their use in orchestrating with LLMs
- Hands-on experience working with AI Agents and Agent services

Nice-to-Have Capabilities (not essential):
- Hands-on experience building Agentic AI workflows that enable iterative improvement of output
- Hands-on experience with both single-agent and multi-agent orchestration solutions and frameworks
- Hands-on experience with different agent communication and chaining patterns
- Ability to leverage LLMs for reasoning and planning workflows that enable higher-order "goals" and automated orchestration across multiple apps and tools
- Ability to leverage graph databases and "knowledge graphs" as an alternative to vector databases, enabling more relevant semantic querying and outputs via LLM models
- Good background in machine learning solutions
- Good foundational understanding of Transformer models and diffusion models
- Some experience with custom ML model development and deployment is desirable
- Proficiency in deep learning frameworks such as PyTorch or Keras
- Experience with cloud ML platforms such as Azure ML Service, AWS SageMaker, and NVIDIA AI Foundry

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Permanent
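The RAG pipelines called out above reduce to three steps: index documents, retrieve the most relevant ones for a query, and prepend them to the LLM prompt as grounding context. The retrieval step can be sketched without any LLM or vector-store dependency, using bag-of-words cosine similarity (documents and query here are hypothetical; a real pipeline would use learned embeddings and a vector database):

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "image generation with diffusion models",
    "text to speech with custom voices",
    "campaign reporting dashboards",
]
context = retrieve("how do I clone a custom voice for speech", docs, k=1)
print(context[0])
# A full RAG step would then build a grounded prompt, e.g.:
# prompt = f"Context: {context[0]}\n\nQuestion: ..."
```

Swapping the similarity function for embedding-based similarity and the list for an index is what turns this toy into the "custom AI indexing" the posting describes.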
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
FORD Requirement - Order Number: 34170 - 23 L PA - Chennai - Contract

Position Title: Architect Senior
Target Start Date: 01-JUL-2025
Original Duration: 334 Days
Notice Period: Immediate Joiners / Serving up to 30 days
Work Hours: 02:00 PM to 11:30 PM
Standard Shift: Night
Travel Required? N
Travel %: 0

Position Description:
Materials Management Platform (MMP) is a multi-year transformation initiative aimed at transforming Ford's Materials Requirement Planning & Inventory Management capabilities, as part of a larger Industrial Systems IT Transformation effort. This position is responsible for designing and deploying a data-centric architecture in GCP for the Materials Management Platform, which exchanges data with multiple modern and legacy applications across Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration.

Skills Required: GCP, Data Architecture
Skills Preferred: Cloud Architecture

Experience Required: 8 to 12 years
Experience Preferred:
- Requires a bachelor's or foreign equivalent degree in computer science, information technology, or a technology-related field
- 8 years of professional experience in:
  o Data engineering, data product development, and software product launches
  o At least three of the following languages: Java, Python, Spark, Scala, SQL, with performance-tuning experience
- 4 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
  o Data warehouses like Google BigQuery
  o Workflow orchestration tools like Airflow
  o Relational database management systems like MySQL, PostgreSQL, and SQL Server
  o Real-time data streaming platforms like Apache Kafka and GCP Pub/Sub
  o Microservices architecture to deliver large-scale real-time data processing applications
  o REST APIs for compute, storage, operations, and security
  o DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker
  o Project management tools like Atlassian JIRA
- Automotive experience is preferred
- Support in an onshore/offshore model is preferred
- Excellent at problem solving and prevention
- Knowledge and practical experience of agile delivery

Education Required: Bachelor's Degree
Education Preferred: Certification Program

Additional Information:
- Design and implement data-centric solutions on Google Cloud Platform (GCP) using GCP tools such as BigQuery, Google Cloud Storage, Cloud SQL, Memorystore, Dataflow, Dataproc, Artifact Registry, Cloud Build, Cloud Run, Vertex AI, Pub/Sub, and GCP APIs
- Build ETL pipelines to ingest data from heterogeneous sources into the system
- Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data
- Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets
- Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project requirements
- Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure
- Implement version control and CI/CD practices for data engineering workflows to ensure reliable and efficient deployments
- Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures
- Troubleshoot and resolve issues related to data processing, storage, and retrieval
- Promptly address code quality issues using SonarQube, Checkmarx, FOSSA, and Cycode throughout the development lifecycle
- Implement security measures and data governance policies to ensure the integrity and confidentiality of data
- Collaborate with stakeholders to gather and define data requirements, ensuring alignment with business objectives
- Develop and maintain documentation for data engineering processes, ensuring knowledge transfer and ease of system maintenance
- Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems
- Provide mentorship and guidance to junior team members, fostering a collaborative and knowledge-sharing environment

Skills: Airflow, data warehouses, cloud architecture, RDBMS, Spark, PostgreSQL, real-time data streaming, microservices architecture, GCP, Python, Tekton, cloud, Java, data architecture, Terraform, REST APIs, Git, Docker, GitHub Actions, SQL, Google BigQuery, Atlassian JIRA, SQL Server, workflow orchestration, Scala, DevOps tools, GitHub, Apache Kafka, MySQL, GCP Pub/Sub
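The streaming-pipeline experience described above (Kafka/Pub/Sub into Dataflow-style processing) usually centers on windowed aggregation: bucketing timestamped events into fixed windows before counting or summing. A cloud-free sketch of that core operation, with hypothetical part-number keys and epoch-second timestamps:

```python
from collections import defaultdict

def window_counts(events, window_secs=60):
    """Fixed-window aggregation: count events per (key, window start).

    Mimics the shape of a streaming stage (e.g. Dataflow fixed windows)
    without any cloud dependency. Each event is (timestamp, key).
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)  # floor to window boundary
        counts[(key, window_start)] += 1
    return dict(counts)

# Events at seconds 3, 59 fall in window [0, 60); second 61 in [60, 120)
events = [(3, "partA"), (59, "partA"), (61, "partA"), (10, "partB")]
print(window_counts(events))
# {('partA', 0): 2, ('partA', 60): 1, ('partB', 0): 1}
```

Real pipelines add watermarks and late-data handling on top of this floor-to-boundary grouping, but the grouping itself is the same.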
Posted 1 week ago
12.0 years
0 Lacs
Mysuru, Karnataka, India
On-site
About ISOCRATES Since 2015, iSOCRATES advises on, builds and manages mission-critical Marketing, Advertising and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists who save partners money, and time and achieve transparent, accountable, performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training. About MADTECH.AI MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you’re a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth - so you can capture sharper insights that drive better marketing decisions faster and more affordable than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI. Job Description We are seeking a highly skilled, results-oriented Product Manager - AI & BI to lead the growth and development of iSOCRATES' MADTECH.AI™ platform. As a core member of the product team, you will play an instrumental role in shaping the future of our AI-powered Marketing, Advertising, and Data Decision Intelligence solutions. 
Your focus will be on driving innovation in AI and BI capabilities, ensuring that our product meets the evolving needs of our B2B customers and enhances their marketing and data analytics capabilities. Key Responsibilities Product Strategy & Roadmap Development: Lead the creation and execution of the MADTECH.AI™ product roadmap, with a focus on incorporating AI and BI technologies to deliver value for B2B customers. Collaborate with internal stakeholders to define product features, prioritize enhancements, and ensure alignment with iSOCRATES’ long-term business objectives. AI & BI Product Development: Spearhead the design and development of innovative AI and BI features to enhance the MADTECH.AI™ platform’s scalability, functionality, and user experience. Leverage cutting-edge technologies such as machine learning, predictive analytics, natural language processing (NLP), data visualization, reinforcement learning, and other advanced AI techniques to deliver powerful marketing, advertising, and data decision intelligence solutions. Cross-Functional Collaboration: Collaborate with cross-functional teams, including engineering, design, marketing, sales, and customer success, to ensure seamless product development and delivery. Facilitate communication between technical and business teams to ensure product features align with customer needs and market trends. Customer & Market Insights: Engage with customers and other stakeholders to gather feedback, identify pain points, and stay on top of market trends. Use this data to shape product development and enhance MADTECH.AI™ capabilities, ensuring they are well-positioned in the evolving market landscape. Product Lifecycle Management: Oversee the complete product lifecycle from ideation through launch and beyond. Manage ongoing iterations of the product based on customer feedback and performance metrics to ensure that MADTECH.AI™ remains competitive and meets user expectations. 
Data-Driven Decision Making: Use customer analytics, usage patterns, and performance data to inform key product decisions. Define success metrics, monitor product performance, and make adjustments as needed to drive product success. AI/BI Thought Leadership: Stay current on the latest trends in AI, BI, MarTech, and AdTech. Act as a thought leader both internally and externally to position iSOCRATES as an innovator in the MADTECH.AI space. Promote best practices and contribute to the company’s overall strategy for AI and BI product development. Qualifications & Skills Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Business, or a related field. At least 12 years of experience in product management, with a minimum of 7 years focused on B2B SaaS solutions and strong expertise in AI and BI technologies. Prior experience in marketing, advertising, or data analytics platforms is highly preferred. AI & BI Expertise: Deep understanding of Artificial Intelligence, Machine Learning, Natural Language Processing (NLP), Predictive Analytics, Data Visualization, Business Intelligence tools (e.g., Tableau, Power BI, Qlik), and their application in SaaS products, especially within the context of MarTech, AdTech, or DataTech. 
AI Tools and Technologies : Hands-on experience with AI and BI tools such as: Data Science Libraries: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost, LightGBM, Hugging Face Transformers, CatBoost, H2O.ai BI Platforms: Tableau, Power BI, Qlik, Looker, Domo, Sisense, MicroStrategy Machine Learning Tools: Azure ML, Google AI Platform, AWS Sagemaker, Databricks, H2O.ai, Vertex AI Data Analytics Tools: Apache Hadoop, Apache Spark, Apache Flink, SQL-based tools, dbt, Snowflake Data Visualization Tools: D3.js, Plotly, Matplotlib, Seaborn, Chart.js, Superset Cloud-Based AI Services: Google AI, AWS AI/ML services, IBM Watson, Microsoft Azure Cognitive Services, Oracle Cloud AI Emerging Tools: AutoML platforms, MLOps tools, Explainable AI (XAI) tools Product Development: Proven experience in leading AI/BI-driven product development within SaaS platforms, including managing the full product lifecycle, from ideation to launch and post-launch iterations. Agile Methodology: Experience working in Agile product development environments, with the ability to prioritize and manage multiple initiatives and product features simultaneously. Analytical & Data-Driven: Strong analytical skills with a focus on leveraging data, performance metrics, and customer feedback to inform product decisions. Ability to translate complex data into actionable insights. Customer-Centric: Experience in working directly with customers to understand their needs, pain points, and feedback. A customer-first mindset with a focus on building products that provide measurable value. Excellent Communication Skills: Exceptional communication, presentation, and interpersonal skills, with the ability to engage and influence both technical teams and business stakeholders across different geographies. Industry Knowledge: Familiarity with MADTECH.AI platforms and technologies. Understanding of customer journey analytics, predictive analytics, and decision intelligence platforms like MADTECH.AI™ is a plus. 
Cloud & SaaS Architecture: Familiarity with cloud-based solutions and large-scale SaaS architecture. Understanding of how AI and BI features integrate with cloud infrastructure is beneficial.
Experience with AI-powered decision intelligence platforms like MADTECH.AI™ or similar MarTech, AdTech, or DataTech tools.
In-depth knowledge of cloud technologies, including AWS, Azure, or Google Cloud, and their integration with SaaS platforms.
Exposure to customer journey analytics, predictive analytics, and other advanced AI/BI tools.
Willingness to work from Mysore/Bangalore or travel to Mysore as per business requirements.
Posted 1 week ago
6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Summary
We are seeking an experienced AI Solution Architect to lead the design and implementation of scalable AI/ML systems and solutions. The ideal candidate will bridge the gap between business needs and technical execution, translating complex problems into AI-powered solutions that drive strategic value.
⸻
Key Responsibilities
Solution Design & Architecture: Design end-to-end AI/ML architectures, integrating with existing enterprise systems and cloud infrastructure. Lead technical planning, proof-of-concepts (POCs), and solution prototyping for AI use cases.
AI/ML System Development: Collaborate with data scientists, engineers, and stakeholders to define solution requirements. Guide the selection and use of machine learning frameworks (e.g., TensorFlow, PyTorch, Hugging Face). Evaluate and incorporate LLMs, computer vision, NLP, and generative AI models as appropriate.
Technical Leadership: Provide architectural oversight and code reviews to ensure high-quality delivery. Establish best practices for AI model lifecycle management (training, deployment, monitoring). Advocate for ethical and responsible AI practices, including bias mitigation and model transparency.
Stakeholder Engagement: Partner with business units to understand goals and align AI solutions accordingly. Communicate complex AI concepts to non-technical audiences and influence executive stakeholders.
Innovation & Strategy: Stay updated on emerging AI technologies, trends, and industry standards. Drive innovation initiatives to create competitive advantage through AI.
⸻
Required Qualifications
Bachelor’s or Master’s in Computer Science, Engineering, Data Science, or a related field.
6 years of experience in software development or data engineering roles.
3+ years of experience designing and delivering AI/ML solutions.
Proficiency in cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker).
Hands-on experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI).
Strong programming skills in Python and familiarity with CI/CD pipelines.
⸻
Preferred Qualifications
Experience with generative AI, foundation models, or large language models (LLMs).
Knowledge of data privacy regulations (e.g., GDPR, HIPAA) and AI compliance frameworks.
Certifications in cloud architecture or AI (e.g., AWS Certified Machine Learning, Google Professional ML Engineer).
Strong analytical, communication, and project management skills.
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Vertex AI Developer
Experience: 3 - 5 Years
Location: Chennai / Hyderabad
Notice Period: Immediate Joiners Preferred
Employment Type: Full-Time
Job Description
We are looking for a passionate and skilled Vertex AI Developer with hands-on experience in Google Cloud’s Vertex AI, Python, Machine Learning (ML), and Generative AI. The ideal candidate will play a key role in designing, developing, and deploying scalable ML/GenAI models and workflows using GCP Vertex AI services.
Key Responsibilities
Develop, deploy, and manage ML/GenAI models using Vertex AI on Google Cloud Platform (GCP).
Work with structured and unstructured data to create and train predictive and generative models.
Integrate AI models into scalable applications using Python APIs and GCP components.
Collaborate with data scientists, ML engineers, and DevOps teams to implement end-to-end ML pipelines.
Monitor model performance and iterate on improvements as necessary.
Document solutions, best practices, and technical decisions.
Mandatory Skills
3 to 5 years of experience in Machine Learning/AI development.
Strong proficiency in Python and ML libraries such as TensorFlow, PyTorch, and Scikit-learn.
Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Deployment, and Monitoring.
Experience with GenAI frameworks and tooling (e.g., PaLM, LangChain, LLMOps).
Proficiency in using Google Cloud Platform tools and services.
Strong understanding of MLOps, CI/CD, and model lifecycle management.
Preferred Skills
Experience with containerization tools like Docker and orchestration tools like Kubernetes.
Exposure to Natural Language Processing (NLP) and LLMs.
Familiarity with data engineering concepts (BigQuery, Dataflow).
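Monitoring a deployed model, as described above, often starts with a drift statistic comparing live score or feature distributions against a training-time baseline. Below is a minimal, pure-Python sketch of one common choice, the population stability index (PSI); the binning scheme and the ~0.2 rule of thumb are illustrative assumptions, not something specified in this posting.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample of model scores.
    Values above ~0.2 are commonly read as significant drift (rule of thumb)."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor each proportion at a tiny value so the log terms stay finite.
        return [max(c / len(values), 1e-6) for c in counts]

    p, q = proportions(expected), proportions(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

baseline = [i / 100 for i in range(100)]   # e.g. last month's score sample
current = [v + 0.3 for v in baseline]      # this month's sample, shifted upward
drift = population_stability_index(baseline, current)
```

In practice a managed service such as Vertex AI Model Monitoring computes comparable distribution-distance metrics for you; the sketch only shows the underlying idea.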
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Key Responsibilities
Platform Development and Evangelism: Build scalable, customer-facing AI platforms. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs.
Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.
LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms, with deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models, and proficiency in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.
Model Fine-Tuning and Optimization: Demonstrate proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results, and to reduce training and resource requirements for fine-tuning LLM and LVM models.
LLM Models and Use Cases: Have extensive knowledge of different LLM models and provide insights on the applicability of each model based on use cases.
Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases.
DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.
Skill Matrix
LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
Databases/Data Warehouses: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
Cloud Knowledge: AWS/Azure/GCP
DevOps (Knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus
Cloud Certifications (Bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
Proficient in Python, SQL, and JavaScript
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more.
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
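The model A/B testing mentioned above is commonly implemented as a deterministic, hash-based traffic split, so a given user always hits the same model variant across requests. A minimal sketch under that assumption (the function and variant names are hypothetical, not from the posting):

```python
import hashlib

def assign_variant(user_id: str, treatment_share: float = 0.1) -> str:
    """Deterministically route a user to model A or B by hashing their ID."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # pseudo-uniform value in [0, 1]
    return "model_b" if bucket < treatment_share else "model_a"

# Over many users the realized split converges to the requested share,
# while any single user sees a stable assignment.
assignments = [assign_variant(f"user-{i}", treatment_share=0.2) for i in range(10_000)]
share_b = assignments.count("model_b") / len(assignments)
```

The determinism matters: because routing depends only on the user ID, no assignment state has to be stored, and per-user metrics can be joined back to variants after the fact.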
Posted 1 week ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
Your Role And Responsibilities
As a Package Consultant at IBM, get ready to tackle numerous mission-critical company directives. Our team takes on the challenge of designing, developing, and re-engineering highly complex application components and integrating software packages using various tools. You will use a mix of consultative skills, business knowledge, and technical expertise to effectively integrate packaged technology into our clients' business environment and achieve business results. You will assist clients in the selection, implementation, and support of SAP Sales and Distribution (SD).
Lead projects of various sizes, as a team member or lead, to implement new functionality and improve existing functionality, including articulating and analyzing requirements and translating them into effective solutions. Prepare and conduct unit testing and user acceptance testing.
Preferred Education: Master's Degree
Required Technical And Professional Expertise
Perform necessary SAP configurations, write detailed specifications for development of custom programs, test, coordinate transports to production, and provide post-go-live support.
Able to create requirement specifications based on architecture, design, and detailing of processes.
Experience with a minimum of two end-to-end SAP S/4HANA Sales & Service implementations and core S/4HANA Sales and Service processes and functionalities.
Knowledge of special business functions such as Intercompany Sales, Third-Party Sales, Consignments, Service and Maintenance, Advanced Variant Configuration, International Trade, etc.
Deep understanding of business process integration of SAP Sales and Service with other modules, e.g., FI, MM, PP, PS, CO, etc.
Preferred Technical And Professional Experience
Knowledge of Fiori apps from an SAP Order-to-Cash perspective, and of SAP's implementation methodologies.
Able to create functional specifications to bridge any gap in the solution design using custom development, for example enhancements, interfaces, reports, data conversion, and forms.
Hands-on experience designing and supporting integration with other SAP and non-SAP systems, e.g., GTS, BW, WMS, TM, EDI, Vertex, etc.
Posted 1 week ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
Years of Experience: Candidates with 12+ years of hands-on experience
Position: Senior Manager
Required Skills
Successful candidates will have demonstrated the following skills and characteristics:
Must Have
Deep expertise in AI/ML solution design, including supervised and unsupervised learning, deep learning, NLP, and optimization.
Strong hands-on experience with ML/DL frameworks like TensorFlow, PyTorch, scikit-learn, H2O, and XGBoost.
Solid programming skills in Python, PySpark, and SQL, with a strong foundation in software engineering principles.
Proven track record of building end-to-end AI pipelines, including data ingestion, model training, testing, and production deployment.
Experience with MLOps tools such as MLflow, Airflow, DVC, and Kubeflow for model tracking, versioning, and monitoring.
Understanding of big data technologies like Apache Spark, Hive, and Delta Lake for scalable model development.
Expertise in AI solution deployment across cloud platforms like GCP, AWS, and Azure using services like Vertex AI, SageMaker, and Azure ML.
Experience in REST API development, NoSQL database design, and RDBMS design and optimization.
Familiarity with API-based AI integration and containerization technologies like Docker and Kubernetes.
Proficiency in data storytelling and visualization tools such as Tableau, Power BI, Looker, and Streamlit.
Programming skills in Python and either Scala or R, with experience using Flask and FastAPI.
Experience with software engineering practices, including use of GitHub, CI/CD, code testing, and analysis.
Proficient in using AI/ML frameworks such as TensorFlow, PyTorch, and scikit-learn.
Skilled in using Apache Spark, including PySpark and Databricks, for big data processing.
Strong understanding of foundational data science concepts, including statistics, linear algebra, and machine learning principles.
Knowledgeable in integrating DevOps, MLOps, and DataOps practices to enhance operational efficiency and model deployment.
Experience with cloud infrastructure services like Azure and GCP.
Proficiency in containerization technologies such as Docker and Kubernetes.
Familiarity with observability and monitoring tools like Prometheus and the ELK stack, adhering to SRE principles and techniques.
Cloud or Data Engineering certifications or specialization certifications (e.g., Google Professional Machine Learning Engineer, Microsoft Certified: Azure AI Engineer Associate (Exam AI-102), AWS Certified Machine Learning – Specialty (MLS-C01), Databricks Certified Machine Learning)
Nice To Have
Experience implementing generative AI, LLMs, or advanced NLP use cases
Exposure to real-time AI systems, edge deployment, or federated learning
Strong executive presence and experience communicating with senior leadership or CXO-level clients
Roles And Responsibilities
Lead and oversee complex AI/ML programs, ensuring alignment with business strategy and delivering measurable outcomes.
Serve as a strategic advisor to clients on AI adoption, architecture decisions, and responsible AI practices.
Design and review scalable AI architectures, ensuring performance, security, and compliance.
Supervise the development of machine learning pipelines, enabling model training, retraining, monitoring, and automation.
Present technical solutions and business value to executive stakeholders through impactful storytelling and data visualization.
Build, mentor, and lead high-performing teams of data scientists, ML engineers, and analysts.
Drive innovation and capability development in areas such as generative AI, optimization, and real-time analytics.
Contribute to business development efforts, including proposal creation, thought leadership, and client engagements.
Partner effectively with cross-functional teams to develop, operationalize, integrate, and scale new algorithmic products.
Develop code, CI/CD, and MLOps pipelines, including automated tests, and deploy models to cloud compute endpoints.
Manage cloud resources and build accelerators to enable other engineers, with experience working across two hyperscale clouds.
Demonstrate effective communication skills, coaching and leading junior engineers, with a successful track record of building production-grade AI products for large organizations.
Professional And Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.
Years of Experience: Candidates with 4+ years of hands-on experience
Position: Senior Associate
Industry: Supply Chain/Forecasting/Financial Analytics
Required Skills
Successful candidates will have demonstrated the following skills and characteristics:
Must Have
Strong supply chain domain knowledge (inventory planning, demand forecasting, logistics)
Well versed in optimization methods such as linear programming, mixed integer programming, and scheduling optimization, with hands-on experience applying them
An understanding of third-party optimization solvers like Gurobi will be an added advantage
Proficiency in forecasting techniques (e.g., Holt-Winters, ARIMA, ARIMAX, SARIMA, SARIMAX, FBProphet, N-BEATS) and machine learning techniques (supervised and unsupervised)
Experience using at least one major cloud platform (AWS, Azure, GCP), such as:
AWS: Experience with AWS SageMaker, Redshift, Glue, Lambda, QuickSight
Azure: Experience with Azure ML Studio, Synapse Analytics, Data Factory, Power BI
GCP: Experience with BigQuery, Vertex AI, Dataflow, Cloud Composer, Looker
Experience developing, deploying, and monitoring ML models on cloud infrastructure
Expertise in Python, SQL, data orchestration, and cloud-native data tools
Hands-on experience with cloud-native data lakes and lakehouses (e.g., Delta Lake, BigLake)
Familiarity with infrastructure-as-code (Terraform/CDK) for cloud provisioning
Knowledge of visualization tools (Power BI, Tableau, Looker) integrated with cloud backends
Strong command of statistical modeling, testing, and inference
Advanced capabilities in data wrangling, transformation, and feature engineering
Familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools (e.g., Airflow)
Strong communication and stakeholder engagement skills at the executive level
Roles And Responsibilities
Assist analytics projects within the supply chain domain, driving design, development, and delivery of data science solutions
Develop and execute project and analysis plans under the guidance of the Project Manager
Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
Drive and conduct analysis using advanced analytics tools and coach junior team members
Implement the necessary quality control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability, for deployments
Validate analysis outcomes and recommendations with all stakeholders, including the client team
Build storylines and make presentations to the client team and/or the PwC project leadership team
Contribute to knowledge- and firm-building activities
Professional And Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
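Of the forecasting techniques listed above, Holt-Winters is the most compact to sketch from scratch. Below is a minimal additive implementation (level + trend + seasonality); the smoothing parameters and initialization are illustrative defaults, and a real engagement would typically use a library implementation such as statsmodels' `ExponentialSmoothing` rather than hand-rolled code.

```python
def holt_winters_additive(series, m, alpha=0.3, beta=0.1, gamma=0.2, h=4):
    """Additive Holt-Winters (level + trend + seasonality); returns h-step forecasts.
    Requires at least two full seasons of data (len(series) >= 2 * m)."""
    # Initial level: mean of the first season.
    level = sum(series[:m]) / m
    # Initial trend: average per-step change between the first two seasons.
    trend = sum((series[m + i] - series[i]) / m for i in range(m)) / m
    # Initial seasonal indices: deviation of each point in season one from its mean.
    seasonal = [series[i] - level for i in range(m)]

    for t, y in enumerate(series):
        s = seasonal[t % m]
        last_level = level
        level = alpha * (y - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        seasonal[t % m] = gamma * (y - level) + (1 - gamma) * s

    n = len(series)
    return [level + (k + 1) * trend + seasonal[(n + k) % m] for k in range(h)]

# Synthetic quarterly series: upward trend plus a repeating seasonal pattern.
series = [10 + 0.5 * t + [5, -2, -5, 2][t % 4] for t in range(24)]
forecast = holt_winters_additive(series, m=4, h=4)
```

On this clean synthetic series the forecasts reproduce the trend-plus-seasonality shape; real demand data would also need the parameters tuned, e.g. by minimizing one-step-ahead error on a holdout.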
Posted 1 week ago
8.0 - 10.0 years
15 - 25 Lacs
Pune, Gurugram, Chennai
Work from Office
Duration: 12 months
Timings: Full Time (as per company timings)
Notice Period: Immediate joiners only
Job Description: We are looking for a very strong, hands-on Java expert who can lead and interact with the client directly, understand the requirements, and contribute to technical discussions in addition to development work. Should have very good communication skills.
Must-Have Skills: Java, Spring Boot, Microservices, Reactive programming, Vert.x
Qualification: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical discipline.
Ability to work independently and to adapt to a fast-changing environment.
Creative, self-disciplined, and capable of identifying and completing critical tasks independently and with a sense of urgency.
Driving Results: A good individual contributor and a good team player. Flexible attitude towards work, as per the needs. Proactively identifies and communicates issues and risks.
Other Personal Characteristics: Dynamic, engaging, self-reliant developer. Ability to deal with ambiguity. Maintains a collaborative and analytical approach. Self-confident and humble. Open to continuous learning. Intelligent, rigorous thinker who can operate successfully amongst bright people.
Location: Pune, Gurugram, Chennai, Bangalore (preferred), Bhubaneshwar
Posted 1 week ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Solutions Architect / Technical Lead - AI & Automation
Key Responsibilities
Solution Architecture & Development: Design end-to-end solutions using Node.JS (backend) and Vue.JS (frontend) for custom portals and administration interfaces. Integrate Azure AI services, Google OCR, and Azure OCR into client workflows.
AI/ML Engineering: Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python. Implement NLP pipelines for document extraction, classification, and data enrichment.
Cloud & Database Management: Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance. Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments.
Technical Leadership: Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery. Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration.
Client Enablement: Advise clients on technical best practices for scaling AI solutions. Document architectures, conduct knowledge transfers, and mentor junior engineers.
Required Technical Expertise
Frontend/Portal: Vue.JS (advanced components, state management), Node.JS (Express, REST/GraphQL APIs).
AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization).
Layout Engineering: LP/LI/LT for complex documents (invoices, contracts).
OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer.
Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS). Infrastructure-as-Code (Terraform/Bicep), CI/CD pipelines (Azure DevOps).
Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred).
Preferred Qualifications
Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML.
Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker).
Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations.
Familiarity with Kubernetes, Docker, and MLOps practices.
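As a toy illustration of the document-extraction step in the NLP pipelines described above, a rule-based pass over raw OCR text might look like the following. The field names and regex patterns here are hypothetical; production systems combine such rules with layout-aware models and the OCR engines named in the posting.

```python
import re

# Hypothetical patterns for a simple invoice; real pipelines would pair these
# with layout-aware models rather than relying on regexes alone.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*(?:No\.?|Number)[:\s]+([A-Z0-9-]+)", re.I),
    "total": re.compile(r"Total\s*(?:Due|Amount)?[:\s]+\$?([\d,]+\.\d{2})", re.I),
    "date": re.compile(r"Date[:\s]+(\d{4}-\d{2}-\d{2})", re.I),
}

def extract_fields(ocr_text: str) -> dict:
    """Pull structured fields out of raw OCR text; missing fields map to None."""
    out = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(ocr_text)
        out[name] = match.group(1) if match else None
    return out

sample = "ACME Corp\nInvoice No: INV-2024-001\nDate: 2024-05-01\nTotal Due: $1,234.56"
fields = extract_fields(sample)
```

Returning `None` for absent fields (rather than raising) keeps the extractor composable: downstream enrichment or classification stages can decide how to handle partial documents.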
Posted 1 week ago
10.0 years
2 - 7 Lacs
Hyderābād
On-site
Key Responsibilities
Solution Architecture & Development: Design end-to-end solutions using Node.JS (backend) and Vue.JS (frontend) for custom portals and administration interfaces. Integrate Azure AI services, Google OCR, and Azure OCR into client workflows.
AI/ML Engineering: Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python. Implement NLP pipelines for document extraction, classification, and data enrichment.
Cloud & Database Management: Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance. Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments.
Technical Leadership: Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery. Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration.
Client Enablement: Advise clients on technical best practices for scaling AI solutions. Document architectures, conduct knowledge transfers, and mentor junior engineers.
Required Technical Expertise
Frontend/Portal: Vue.JS (advanced components, state management), Node.JS (Express, REST/GraphQL APIs).
AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization).
Layout Engineering: LP/LI/LT for complex documents (invoices, contracts).
OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer.
Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS). Infrastructure-as-Code (Terraform/Bicep), CI/CD pipelines (Azure DevOps).
Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred).
Preferred Qualifications
Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML.
Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker).
Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations.
Familiarity with Kubernetes, Docker, and MLOps practices.
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.
Position Summary: We are seeking a highly skilled and detail-oriented individual with indirect tax compliance experience to join our Finance Operations Shared Service Center in Hyderabad. In this role, this individual will be responsible for understanding the data required for global tax compliance processes, ensuring timely tax filings, managing the team, performing value-add analysis, and enabling process optimization. This role involves close collaboration with the business, global Tax teams, and external providers to support efficient and accurate tax activities.
Who we’re looking for:
A tax professional with 5-8 years of experience in indirect tax compliance and reporting, preferably in a shared service center or multinational organization.
Strong leadership and team management skills, including proven experience in managing teams and delivering results through others.
In-depth knowledge of global indirect tax regulations and compliance requirements.
A high degree of business acumen, understanding end-to-end finance and indirect tax reporting processes and related dependencies.
Strong analytical and strategic thinking skills, with a proven ability to oversee detailed tax computations and compliance activities.
Excellent interpersonal and communication skills, capable of building strong relationships with internal and external stakeholders.
Experience working with third-party outsourced providers and familiarity with financial systems such as Oracle Cloud and Vertex.
Strong project management skills, with the ability to prioritize tasks, meet deadlines, and manage multiple activities simultaneously.
Bachelor’s degree in Accounting, Finance, Taxation, or a related field; CPA, CMA, or other relevant professional certifications preferred.
Key Responsibilities:
Team Leadership and Management: Lead and mentor a team of direct, indirect, and transfer pricing tax professionals, ensuring clarity in roles, expectations, and career development opportunities. Oversee daily operations of direct and indirect tax resources, including providing guidance, resolving escalations as needed, and providing regular updates to the Global Tax Process Lead (GPL). Foster a collaborative and high-performing team environment that aligns with McDonald’s values and objectives.
Tax Compliance Oversight: Manage the activities performed by Tax members in the Hyderabad office on behalf of the Global Tax team, and gather regular feedback from the GPL and market teams. Monitor and validate data prepared by staff for compliance purposes, including data extraction from Oracle Cloud and Vertex. Work closely with external providers, including managing timely deliverables to and from the provider. Stay updated on changes in tax regulations and communicate their implications to the team and other stakeholders.
Process Improvement: Regularly review master data to identify changes to incorporate into downstream indirect tax reporting processes. Coordinate with resources in the Global Taxation team to support the Finance Functional Solutions Tax Lead in updating and maintaining Vertex based on regulatory or business changes. Work with the Global Tax Process Lead (GPL) to conduct regular reviews and audits to identify potential risks and areas for process improvement.
Implement standardized procedures and controls to enhance the efficiency and accuracy of the indirect tax compliance process.
Performance & Metrics: Adhere to Service Level Agreements (SLAs) and Key Performance Indicators (KPIs) related to indirect tax compliance and reporting. Monitor performance metrics to identify areas for improvement and take proactive measures to meet or exceed targets. Collaborate with team members to optimize workflow processes and ensure timely delivery of services.
Communication and Collaboration: Act as the primary point of contact within the shared service center for internal stakeholders on tax matters, including audits, inquiries, and planning. Coordinate with market tax teams to support indirect audit requests and ensure compliance. Collaborate with external providers, managing the day-to-day relationship to ensure timely and effective outcomes. Build and maintain strong relationships with global Tax leaders, Finance, Legal, and IT teams.
Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid
Additional Information: McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hello, Truecaller is calling you from Bangalore, India! Ready to pick up? Our goal is to make communication smarter, safer, and more efficient, all while building trust everywhere. We're all about bringing you smart services with a big social impact, keeping you safe from fraud, harassment, and scam calls or messages, so you can focus on the conversations that matter. Truecaller is one of the top 20 most downloaded apps globally and the world’s #1 caller ID and spam-blocking service for Android and iOS, with extensive AI capabilities and more than 450 million active users per month. Founded in 2009, Truecaller is listed on Nasdaq OMX Stockholm and is categorized as a Large Cap. Our focus on innovation, operational excellence, sustainable growth, and collaboration has resulted in consistently high profitability and strong EBITDA margins. We are a team of 400 people from ~35 different nationalities spread across our headquarters in Stockholm and offices in Bangalore, Mumbai, Gurgaon and Tel Aviv, with high ambitions.

The Insights Team is responsible for SMS categorization, fraud detection, and other Smart SMS features within the Truecaller app. OTP and bank notifications and bill and travel reminder alerts are some examples of the Smart SMS features. The team has developed a patented offline text parser that powers all these features, and it is also exploring cutting-edge technologies like LLMs to enhance the Smart SMS features. The team’s mission is to become the world’s most loved and trusted SMS app, which is aligned with Truecaller’s vision to make communication safe and efficient. Smart SMS is used by over 90M users every day.

As an ML Engineer, you will be responsible for collecting, organizing, analyzing, and interpreting Truecaller data with a focus on NLP. In this role, you will work hands-on to make the training and deployment of ML models quick and cost-efficient. You will also be pivotal in advancing our work with large language models and on-device models across diverse regions.
Your expertise will enhance our natural language processing, machine learning, and predictive analytics capabilities.

What You Bring In
- 3+ years in machine learning engineering, with hands-on involvement in feature engineering, model development, and deployment.
- Experience in Natural Language Processing (NLP), with a deep understanding of text processing, model development, and deployment challenges in the domain.
- Proven ability to develop, deploy, and maintain machine learning models in production environments, ensuring scalability, reliability, and performance.
- Strong familiarity with ML frameworks such as TensorFlow, PyTorch, and ONNX, and experience with a tech stack such as Kubernetes, Docker, APIs, Vertex AI, and GCP.
- Experience deploying models across backend and mobile platforms.
- Experience fine-tuning and optimizing LLM prompts for domain-specific applications.
- Ability to optimize feature engineering, model training, and deployment strategies for performance and efficiency.
- Strong SQL and statistical skills.
- Programming knowledge in at least one language such as Python or R, preferably Python.
- Knowledge of machine learning algorithms.
- Excellent teamwork and communication skills, with the ability to work cross-functionally with product, engineering, and data science teams.
- Good to have: knowledge of retrieval-based pipelines to enhance LLM performance.

The Impact You Will Create
- Collaborate with Product and Engineering to scope, design, and implement systems that solve complex business problems, ensuring they are delivered on time and within scope.
- Design, develop, and deploy state-of-the-art NLP models, contributing directly to message classification and fraud detection at scale for millions of users.
- Leverage cutting-edge NLP techniques to enhance message understanding, spam filtering, and fraud detection, ensuring a safer and more efficient messaging experience.
- Build and optimize ML models that can efficiently handle large-scale data processing while maintaining accuracy and performance.
- Work closely with data scientists and data engineers to enable rapid experimentation, development, and productionization of models in a cost-effective manner.
- Streamline the ML lifecycle, from training to deployment, by implementing automated workflows, CI/CD pipelines, and monitoring tools for model health and performance.
- Stay ahead of advancements in ML and NLP, proactively identifying opportunities to enhance model performance, reduce latency, and improve user experience.

Your work will directly impact millions of users, improving message classification, fraud detection, and the overall security of messaging platforms.

It Would Be Great If You Also Have
- Understanding of conversational AI
- Experience deploying NLP models in production
- Working knowledge of GCP components
- Cloud-based LLM inference with Ray, Kubernetes, and serverless architectures

Life at Truecaller - Behind the code: https://www.instagram.com/lifeattruecaller/

Sounds like your dream job? We will fill the position as soon as we find the right candidate, so please send your application as soon as possible. As part of the recruitment process, we will conduct a background check. This position is based in Bangalore, India. We only accept applications in English.

What We Offer
- A smart, talented and agile team: an international team where ~35 nationalities are working together in several locations and time zones with a learning, sharing and fun environment.
- A great compensation package: competitive salary, 30 days of paid vacation, flexible working hours, private health insurance, parental leave, telephone bill reimbursement, Udemy membership to keep learning and improving, and a wellness allowance.
- Great tech tools: pick the computer and phone that you fancy the most within our budget ranges.
- Office life: We strongly believe in in-person collaboration and follow an office-first approach while offering some flexibility. Enjoy your days with great colleagues with loads of good stuff to learn from, daily lunch and breakfast, and a wide range of healthy snacks and beverages. In addition, every now and then check out the playroom for a fun break, or join our exciting parties or team activities such as Lab days, sports meetups, etc. There’s something for everyone!
- Come as you are: Truecaller is diverse, equal and inclusive. We need a wide variety of backgrounds, perspectives, beliefs and experiences in order to keep building our great products. No matter where you are based, which language you speak, your accent, race, religion, color, nationality, gender, sexual orientation, age, marital status, etc. All those things make you who you are, and that’s why we would love to meet you.

Job info: Location: Bengaluru, Karnataka, India | Category: Data Science | Team: Insights | Posted 15 days ago
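As a concrete illustration of the message classification and spam filtering this role describes, here is a minimal, hypothetical sketch: the messages, labels, and the TF-IDF plus logistic-regression pipeline are toy assumptions for illustration only, not Truecaller's patented parser or production models.

```python
# Illustrative sketch only: a toy SMS classifier in the spirit of the
# message-categorization work described above. All data and model choices
# here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set: OTP vs. spam messages.
messages = [
    "Your OTP is 482913. Do not share it with anyone.",
    "Use code 771204 to verify your login.",
    "Congratulations! You won a free cruise, click now!",
    "Lowest loan rates ever, reply YES to claim.",
]
labels = ["otp", "otp", "spam", "spam"]

# Character n-gram TF-IDF features feed a linear classifier; character
# n-grams tend to cope well with short, noisy SMS text.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
clf.fit(messages, labels)

print(clf.predict(["Your verification code is 123456"])[0])
```

In a production setting like the one described, such a model would additionally be compressed and exported (for example to ONNX) for on-device inference.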
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You Will Be Doing...

The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects within three core pillars: Customer Experience, Pricing & Monetization, and Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions that help drive actionable business decisions, and you will apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges.
- Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights.
- Envision and test for corner cases.
- Build analytical solutions and models by manipulating large data sets and integrating diverse data sources.
- Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders.
- Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis.
- Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions.
- Assist in building data views from disparate data sources which power insights and business cases.
- Apply statistical modeling and ML techniques to data and perform root cause analysis and forecasting.
- Develop and implement rigorous frameworks for effective base management.
- Collaborate with cross-functional teams to discover the most appropriate data sources and fields which cater to the business need.
- Design modular, reusable Python scripts to automate data processing.
- Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders.

What We’re Looking For...

You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data-science-driven business solutions.
You Will Need To Have
- Bachelor’s degree in computer science or another technical field, or four or more years of work experience.
- Four or more years of relevant work experience.
- Proficiency in SQL, including writing queries for reporting, analysis, and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.).
- Curiosity to dive deep into data inconsistencies and perform root cause analysis.
- Programming experience in Python (Pandas, NumPy, SciPy, and scikit-learn).
- Experience with visualization tools such as matplotlib, seaborn, Tableau, Grafana, etc.
- A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning.
- Understanding of time series modeling and forecasting techniques.

Even better if you have one or more of the following:
- Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI.
- Experience in applying statistical ideas and methods to data sets to answer business problems.
- Ability to collaborate effectively across teams for data discovery and validation.
- Experience in deep learning, recommendation systems, conversational systems, information retrieval, or computer vision.
- Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference.
- Excellent interpersonal, verbal, and written communication skills.

Where you’ll be working

In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity

Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
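A minimal sketch of the statistical modeling and forecasting workflow in the Pandas/NumPy stack this posting names; the monthly metric, its column names, and its values are hypothetical illustrations, not Verizon data.

```python
# Illustrative sketch only: a minimal trend fit and one-step-ahead forecast
# using Pandas and NumPy. The dataset below is hypothetical.
import numpy as np
import pandas as pd

# Hypothetical monthly metric, e.g. support-call volume per month.
df = pd.DataFrame({
    "month": pd.period_range("2024-01", periods=6, freq="M"),
    "calls": [100, 110, 121, 133, 146, 160],
})

# Fit a linear trend: calls ~ slope * t + intercept.
t = np.arange(len(df))
slope, intercept = np.polyfit(t, df["calls"], deg=1)

# One-step-ahead forecast for month 7.
forecast = slope * len(df) + intercept
print(round(slope, 2), round(forecast, 1))  # → 12.0 170.3
```

In practice a root-cause or forecasting analysis would compare such a simple baseline against richer time-series models before presenting results to stakeholders.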
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You Will Be Doing...

The Commercial Data & Analytics - Impact Analytics team is part of the Verizon Global Services (VGS) organization. The Impact Analytics team addresses high-impact, analytically driven projects within three core pillars: Customer Experience, Pricing & Monetization, and Network & Sustainability. In this role, you will analyze large data sets to draw insights and solutions that help drive actionable business decisions, and you will apply advanced analytical techniques and algorithms to help us solve some of Verizon’s most pressing challenges.
- Use your analysis of large structured and unstructured datasets to draw meaningful and actionable insights.
- Envision and test for corner cases.
- Build analytical solutions and models by manipulating large data sets and integrating diverse data sources.
- Present the results and recommendations of statistical modeling and data analysis to management and other stakeholders.
- Identify data sources and apply your knowledge of data structures, organization, transformation, and aggregation techniques to prepare data for in-depth analysis.
- Deeply understand business requirements and translate them into well-defined analytical problems, identifying the most appropriate statistical techniques to deliver impactful solutions.
- Assist in building data views from disparate data sources which power insights and business cases.
- Apply statistical modeling and ML techniques to data and perform root cause analysis and forecasting.
- Develop and implement rigorous frameworks for effective base management.
- Collaborate with cross-functional teams to discover the most appropriate data sources and fields which cater to the business need.
- Design modular, reusable Python scripts to automate data processing.
- Clearly and effectively communicate complex statistical concepts and model results to both technical and non-technical audiences, translating your findings into actionable insights for stakeholders.

What We’re Looking For...

You have strong analytical skills and are eager to work in a collaborative environment with global teams to drive ML applications in business problems, develop end-to-end analytical solutions, and communicate insights and findings to leadership. You work independently and are always willing to learn new technologies. You thrive in a dynamic environment and are able to interact with various partners and cross-functional teams to implement data-science-driven business solutions.
You Will Need To Have
- Bachelor’s degree in computer science or another technical field, or four or more years of work experience.
- Four or more years of relevant work experience.
- Proficiency in SQL, including writing queries for reporting, analysis, and extraction of data from big data systems (Google Cloud Platform, Teradata, Spark, Splunk, etc.).
- Curiosity to dive deep into data inconsistencies and perform root cause analysis.
- Programming experience in Python (Pandas, NumPy, SciPy, and scikit-learn).
- Experience with visualization tools such as matplotlib, seaborn, Tableau, Grafana, etc.
- A deep understanding of various machine learning algorithms and techniques, including supervised and unsupervised learning.
- Understanding of time series modeling and forecasting techniques.

Even better if you have one or more of the following:
- Experience with cloud computing platforms (e.g., AWS, Azure, GCP) and deploying machine learning models at scale using platforms like Domino Data Lab or Vertex AI.
- Experience in applying statistical ideas and methods to data sets to answer business problems.
- Ability to collaborate effectively across teams for data discovery and validation.
- Experience in deep learning, recommendation systems, conversational systems, information retrieval, or computer vision.
- Expertise in advanced statistical modeling techniques, such as Bayesian inference or causal inference.
- Excellent interpersonal, verbal, and written communication skills.

Where you’ll be working

In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity

Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
Posted 1 week ago
7.0 years
0 Lacs
Greater Kolkata Area
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn’t changed — we’re here to stop breaches, and we’ve redefined modern security with the world’s most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day, and we have 3.44 PB of RAM deployed across our fleet of C* servers - and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We’re also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We’re always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.

About The Role

The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets, including threat events collected via telemetry data, associated metadata, IT asset information, and contextual information about threat exposure based on additional processing. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team.
The ingestion mechanisms include both batch and near-real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. As an engineer on this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform software engineers, data scientists, and threat analysts to design, implement, and maintain scalable ML pipelines (covering data preparation, cataloging, feature engineering, model training, and model serving) that influence critical business decisions. You’ll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets.

What You’ll Do
- Help design, build, and facilitate adoption of a modern Data+ML platform.
- Modularize complex ML code into standardized and repeatable components.
- Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring.
- Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines.
- Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines.
- Review code changes from data scientists and champion software development best practices.
- Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment.

What You’ll Need
- B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 7+ years of related experience; or M.S. with 5+ years of experience; or Ph.D. with 6+ years of experience.
- 3+ years of experience developing and deploying machine learning solutions to production.
- Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used.
- 3+ years of experience with ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc.
- Experience building data platform products or features with Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable.
- Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.).
- Production experience with infrastructure-as-code tools such as Terraform and FluxCD.
- Expert-level experience with Python; Java/Scala exposure is recommended.
- Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools.
- Expert-level experience with CI/CD frameworks such as GitHub Actions.
- Expert-level experience with containerization frameworks.
- Strong analytical and problem-solving skills, capable of working in a dynamic environment.
- Exceptional interpersonal and communication skills; able to work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes.
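One way to read the "standardized Python interfaces" and "modularize complex ML code into repeatable components" requirements above is sketched below; the class names, the toy severity field, and the pipeline shape are hypothetical illustrations, not CrowdStrike internals.

```python
# Illustrative sketch only: a standardized interface for repeatable pipeline
# steps. Concrete steps hide tool-specific details behind one `run` method.
from abc import ABC, abstractmethod
from typing import Any, Dict, List


class PipelineStep(ABC):
    """A repeatable unit of work in a data/ML pipeline."""

    @abstractmethod
    def run(self, records: List[Dict[str, Any]]) -> List[Dict[str, Any]]: ...


class DropMissing(PipelineStep):
    """Remove records missing a required field."""

    def __init__(self, field: str) -> None:
        self.field = field

    def run(self, records):
        return [r for r in records if r.get(self.field) is not None]


class AddSeverityFlag(PipelineStep):
    """Derive a boolean feature from a numeric severity score."""

    def run(self, records):
        return [{**r, "high_severity": r["severity"] >= 7} for r in records]


def run_pipeline(steps: List[PipelineStep], records):
    # Each step consumes the previous step's output, so steps compose freely.
    for step in steps:
        records = step.run(records)
    return records


events = [{"severity": 9}, {"severity": 3}, {"severity": None}]
out = run_pipeline([DropMissing("severity"), AddSeverityFlag()], events)
print(out)  # two records remain; the first is flagged high_severity
```

The design choice here is that data scientists only implement `run`, while orchestration, monitoring, and deployment concerns stay in the shared `run_pipeline` layer.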
Experience With The Following Is Desirable
- Go
- Iceberg
- Pinot or other time-series/OLAP-style databases
- Jenkins
- Parquet
- Protocol Buffers/gRPC

Benefits Of Working At CrowdStrike
- Remote-friendly and flexible work culture
- Market leader in compensation and equity awards
- Comprehensive physical and mental wellness programs
- Competitive vacation and holidays for recharge
- Paid parental and adoption leaves
- Professional development opportunities for all employees regardless of level or role
- Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
- Vibrant office culture with world class amenities
- Great Place to Work Certified™ across the globe

CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions (including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs) on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
Posted 1 week ago
8.0 - 12.0 years
14 - 24 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Duration: 12 months
Timings: Full time (as per company timings)
Notice Period: Immediate joiners only

Job Description: We are looking for a very strong, hands-on Java expert who can lead and interact with the client directly, understand the requirements, and contribute to technical discussions in addition to development work. The candidate should have very good communication skills.

Must-Have Skills: Java, Spring Boot, Microservices, Reactive programming, Vertex

Qualification: Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related technical discipline. Ability to work independently and to adapt to a fast-changing environment. Creative, self-disciplined, and capable of identifying and completing critical tasks independently and with a sense of urgency.

Driving Results: A good individual contributor and a good team player. Flexible attitude towards work, as per the needs. Proactively identifies and communicates issues and risks.

Other Personal Characteristics: Dynamic, engaging, self-reliant developer. Ability to deal with ambiguity. Takes a collaborative and analytical approach. Self-confident and humble. Open to continuous learning. An intelligent, rigorous thinker who can operate successfully amongst bright people.

Location: Bhubaneshwar, Bengaluru, Pune, Gurugram, Chennai
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Experience: 3 to 5 years
Location: Chennai, Bangalore

Must-have skills:
- 3+ years of experience in full stack software development
- + years’ experience in Cloud technologies & services, preferably GCP
- 3+ years of experience practicing statistical methods such as ANOVA and principal component analysis
- 3+ years of experience with Python, SQL, BQ
- Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience in training, building, and deploying ML and DL models
- Experience with Hugging Face, Chainlit, Streamlit, React
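The ANOVA experience asked for above can be illustrated with a short SciPy sketch; the three groups of measurements are hypothetical.

```python
# Illustrative sketch only: one-way ANOVA with SciPy on hypothetical
# measurements from three groups.
from scipy import stats

group_a = [20.1, 19.8, 20.5, 20.0]
group_b = [22.4, 22.9, 22.1, 22.6]
group_c = [20.2, 19.9, 20.4, 20.1]

# H0: all three group means are equal.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
if p_value < 0.05:
    print("At least one group mean differs")
```

A significant result like this one (group B's mean is clearly higher) would typically be followed by a post-hoc test to identify which group differs.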
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Role: AI Engineer
Experience: 3 to 5 years
Location: Client office, Pune, India
Job Type: Full-time
Department: Artificial Intelligence / Engineering
Work Mode: On-site at client location

About the Role

We are seeking a highly skilled and versatile Senior AI Engineer with 3 to 5 years of hands-on experience to join our client’s team in Pune. This role focuses on designing, developing, and deploying cutting-edge AI and machine learning solutions for high-scale, high-concurrency applications where security, scalability, and performance are paramount. You will work closely with cross-functional teams, including data scientists, DevOps engineers, security specialists, and business stakeholders, to deliver robust AI solutions that drive measurable business impact in dynamic, large-scale environments.

Job Summary: We are seeking a passionate and experienced Node.js Developer to join our backend engineering team. As a key contributor, you will be responsible for building scalable, high-performance APIs, microservices, and backend systems that power our products and services. You will leverage modern technologies and best practices to design and implement robust, maintainable, and efficient solutions. You should have a deep understanding of Node.js, NestJS, and Express.js, along with hands-on experience designing and building complex backend systems.

Key Responsibilities
- Architect, develop, and deploy advanced machine learning and deep learning models across domains like NLP, computer vision, predictive analytics, or reinforcement learning, ensuring scalability and performance under high-traffic conditions.
- Preprocess, clean, and analyze large-scale structured and unstructured datasets using advanced statistical, ML, and big data techniques.
- Collaborate with data engineering and DevOps teams to integrate AI/ML models into production-grade pipelines, ensuring seamless operation under high concurrency.
- Optimize models for latency, throughput, accuracy, and resource efficiency, leveraging distributed computing and parallel processing where necessary.
- Implement robust security measures, including data encryption, secure model deployment, and adherence to compliance standards (e.g., GDPR, CCPA).
- Partner with client-side technical teams to translate complex business requirements into scalable, secure AI-driven solutions.
- Stay at the forefront of AI/ML advancements, experimenting with emerging tools, frameworks, and techniques (e.g., generative AI, federated learning, or AutoML).
- Write clean, modular, and maintainable code, along with comprehensive documentation and reports for model explainability, reproducibility, and auditability.
- Proactively monitor and maintain deployed models, ensuring reliability and performance in production environments with millions of concurrent users.

Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related technical field.
- 5+ years of experience building and deploying AI/ML models in production environments with high-scale traffic and concurrency.
- Advanced proficiency in Python and modern AI/ML frameworks, including TensorFlow, PyTorch, scikit-learn, and JAX.
- Hands-on expertise in at least two of the following domains: NLP, computer vision, time-series forecasting, or generative AI.
- Deep understanding of the end-to-end ML lifecycle, including data preprocessing, feature engineering, hyperparameter tuning, model evaluation, and deployment.
- Proven experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, or Azure ML).
- Strong knowledge of containerization (Docker, Kubernetes) and RESTful API development for secure and scalable model deployment.
- Familiarity with secure coding practices, data privacy regulations, and techniques for safeguarding AI systems against adversarial attacks.
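A minimal sketch of the hyperparameter tuning and held-out model evaluation steps from the ML lifecycle listed above, using scikit-learn's bundled digits dataset; the parameter grid and model choice are arbitrary assumptions for illustration.

```python
# Illustrative sketch only: cross-validated hyperparameter search followed by
# a held-out evaluation, two steps of the ML lifecycle described above.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Cross-validated search over the regularization strength.
search = GridSearchCV(
    LogisticRegression(max_iter=2000),
    param_grid={"C": [0.01, 0.1, 1.0]},
    cv=3,
)
search.fit(X_train, y_train)

# Held-out evaluation before any deployment decision.
test_acc = search.score(X_test, y_test)
print(search.best_params_, round(test_acc, 3))
```

Only after this held-out check would a model move on to packaging (e.g., in a container) and production monitoring.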
Preferred Skills
- Expertise in MLOps frameworks and tools such as MLflow, Kubeflow, or SageMaker for streamlined model lifecycle management.
- Hands-on experience with large language models (LLMs) or generative AI frameworks (e.g., Hugging Face Transformers, LangChain, or Llama).
- Proficiency in big data technologies and orchestration tools (e.g., Apache Spark, Airflow, or Kafka) for handling massive datasets and real-time pipelines.
- Experience with distributed training techniques (e.g., Horovod, Ray, or TensorFlow Distributed) for large-scale model development.
- Knowledge of CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, Ansible) for scalable and automated deployments.
- Familiarity with security frameworks and tools for AI systems, such as model hardening, differential privacy, or encrypted computation.
- Proven ability to work in global, client-facing roles, with strong communication skills to bridge technical and business teams.

Share your CV at hr.mobilefirst@gmail.com or call 6355560672.
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Vertex Next is an integral part of the globally renowned Vertex Group, celebrated for its innovative approach to the events and exhibitions sector. As one of the fastest-growing event organizers, Vertex Next excels in curating world-class events that foster collaboration, learning, and industry advancement. Operating across key markets in the United States, Canada, Africa, and the Middle East, the company drives impactful solutions tailored to diverse business needs. With a strong emphasis on delivering value, it creates platforms for thought leadership, networking, and business development across various sectors.
We have an exciting opportunity for young and dynamic professionals with hands-on experience as a conference producer and excellent communication skills to join our team. Below is the detailed job description:
This is a full-time on-site role for a Conference Producer at Vertex Group in Gurugram. The Conference Producer will be responsible for:
1. Market Research and Analysis
o Conduct in-depth research to identify industry trends, target audience needs, and emerging topics.
o Analyze competitor events and identify gaps in the market.
2. Content Development
o Create a comprehensive agenda, including session topics, formats, and flow.
o Identify and secure high-profile speakers, panelists, and moderators.
o Collaborate with experts to ensure the content is insightful, relevant, and engaging.
3. Project Management
o Develop and manage project timelines, budgets, and deliverables.
o Coordinate with internal teams (marketing, sales, operations) to ensure seamless execution.
4. Speaker and Stakeholder Engagement
o Build relationships with speakers, sponsors, and industry professionals.
o Handle speaker logistics, such as travel, accommodation, and presentation requirements.
5. Event Execution
o Oversee on-site or virtual event operations to ensure the program runs smoothly.
o Address last-minute changes or challenges effectively.
6. Post-Event Analysis
o Gather attendee feedback and measure the event's success against key performance indicators.
o Prepare post-event reports with actionable insights for future improvements.
Key Skills Required
• Research and Analytical Skills: Ability to conduct thorough research and derive actionable insights.
• Communication: Excellent verbal and written communication skills for liaising with stakeholders and promoting the event.
• Interpersonal Skills: Ability to build and maintain relationships with speakers, attendees, and sponsors.
• Problem-Solving: Quick decision-making skills to manage unexpected challenges.
• Creativity: Innovative ideas for content formats and audience engagement.
Posted 1 week ago
India has seen a rise in demand for professionals with expertise in Vertex, a cloud-based tax technology solution. Companies across various industries are actively seeking individuals with skills in Vertex to manage their tax compliance processes efficiently. If you are a job seeker looking to explore opportunities in this field, read on to learn more about the Vertex job market in India.
The salary range for Vertex professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years in the industry can earn upwards of INR 12-15 lakhs per annum.
In the Vertex domain, a typical career progression path may include roles such as Tax Analyst, Tax Consultant, Tax Manager, and Tax Director. Professionals may advance from Junior Tax Analyst to Senior Tax Analyst, and eventually take on leadership roles as Tax Managers or Directors.
Alongside expertise in Vertex, professionals in this field are often expected to have skills in tax compliance, tax regulations, accounting principles, and data analysis. Knowledge of ERP systems and experience in tax software implementation can also be beneficial.
As you explore job opportunities in the Vertex domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare thoroughly for technical questions and demonstrate your understanding of tax compliance processes. With dedication and continuous learning, you can build a successful career in Vertex roles. Good luck!