3.0 years
1 - 4 Lacs
Indore
On-site
Job Title: AI/ML Engineer (Python + AWS + REST APIs)
Department: Web
Location: Indore
Job Type: Full-time
Experience: 3-5 years
Notice Period: 0-15 days (immediate joiners preferred)
Work Arrangement: On-site (Work from Office)

Overview:
Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs.

Key Responsibilities:
AI/ML Development
- Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees.
- Build synthetic datasets using DCGANs for balancing.
- Fine-tune pre-trained models for customized encryption logic.
- Implement explainable classification logic for model outputs.
- Validate model performance using custom metrics and datasets.
API Development
- Design and develop Python RESTful APIs using FastAPI or Flask for image upload and classification, model inference endpoints, and encryption trigger calls (a minimal sketch follows this posting).
- Integrate APIs with AWS Lambda and Amazon API Gateway.
AWS Integration
- Deploy and manage AI models on Amazon SageMaker for training and real-time inference.
- Use AWS Lambda for serverless backend compute.
- Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL).
- Use AWS Cognito for secure user authentication and KMS for key management.
- Monitor job status via CloudWatch and enable secure, scalable API access.

Required Skills & Experience:
Must-Have
- 3–5 years of experience in AI/ML (especially vision-based systems).
- Strong experience with PyTorch or TensorFlow for model development.
- Proficient in Python with experience building RESTful APIs.
- Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3.
- Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts.
- Understanding of model deployment, serialization, and performance tuning.
Nice-to-Have
- Experience with CLIP model fine-tuning.
- Familiarity with Docker, GitHub Actions, or CI/CD pipelines.
- Experience in data classification under compliance regimes (e.g., GDPR, HIPAA).
- Familiarity with multi-tenant SaaS design patterns.

Tools & Technologies: Python, PyTorch, TensorFlow; FastAPI, Flask; AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS; Git, Docker, Postgres, OpenCV, OpenSSL

If interested, please share your resume to hr@advantal.ne
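For illustration only, a minimal sketch of the kind of FastAPI inference endpoint this posting describes; the route name and the `classify_image` wrapper are hypothetical placeholders, not Advantal's actual design:

```python
# Minimal image-classification endpoint sketch (illustrative only).
import io

from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()


def classify_image(image: Image.Image) -> dict:
    # Placeholder for real inference (e.g., a fine-tuned CLIP head or a
    # SageMaker endpoint call); returns a fixed answer for illustration.
    return {"label": "non-sensitive", "score": 0.98}


@app.post("/classify")
async def classify(file: UploadFile = File(...)) -> dict:
    data = await file.read()                            # uploaded bytes
    image = Image.open(io.BytesIO(data)).convert("RGB")  # decode to RGB
    result = classify_image(image)
    # A real service might trigger an encryption call here when the
    # label marks the image as sensitive.
    return result
```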
Posted 1 week ago
6.0 years
0 Lacs
Visakhapatnam
On-site
Job Title: Machine Learning Engineer – 3D Graphics
Location: Visakhapatnam, India
Experience: 6+ years
Job Type: Full-Time

Role:
We are seeking a highly skilled and innovative Machine Learning Engineer with 3D graphics expertise. In this role, you will be responsible for developing and optimizing 3D mannequin models using machine learning algorithms, computer vision techniques, and 3D rendering tools. You will collaborate with backend developers, data scientists, and UI/UX designers to create realistic, scalable, and interactive 3D visualization modules that enhance the user experience.

Key Responsibilities:
3D Mannequin Model Development:
- Design and develop 3D mannequin models using ML-based body shape estimation.
- Implement pose estimation, texture mapping, and deformation models.
- Use ML algorithms to adjust measurements for accurate sizing and fit (a toy sketch follows this posting).
Machine Learning & Computer Vision:
- Develop and fine-tune ML models for body shape recognition, segmentation, and fitting.
- Implement pose detection algorithms using TensorFlow, PyTorch, or OpenCV.
- Use GANs or CNNs for realistic 3D texture generation.
3D Graphics & Visualization:
- Create interactive 3D rendering pipelines using Three.js, Babylon.js, or Unity.
- Optimize mesh processing, lighting, and shading for real-time rendering.
- Use GPU-accelerated techniques for rendering efficiency.
Model Optimization & Performance:
- Optimize inference pipelines for faster real-time rendering.
- Implement multi-threading and parallel processing for high performance.
- Utilize cloud infrastructure (AWS/GCP) for distributed model training and inference.
Collaboration & Documentation:
- Collaborate with UI/UX designers for seamless integration of 3D models into web and mobile apps.
- Maintain detailed documentation for model architecture, training processes, and rendering techniques.

Key Skills & Qualifications:
Experience: 5+ years in Machine Learning, Computer Vision, and 3D Graphics Development.
Technical Skills:
- Proficiency in Django, Python, TensorFlow, PyTorch, and OpenCV.
- Strong expertise in 3D rendering frameworks: Three.js, Babylon.js, or Unity.
- Experience with 3D model formats (GLTF, OBJ, FBX).
- Familiarity with Mesh Recovery, PyMAF, and SMPL models.
ML & Data Skills:
- Hands-on experience with GANs, CNNs, and RNNs for texture and pattern generation.
- Experience with 3D pose estimation and body measurement algorithms.
Cloud & Infrastructure:
- Experience with AWS (SageMaker, Lambda) or GCP (Vertex AI, Cloud Run).
- Knowledge of Docker and Kubernetes for model deployment.
Graphics & Visualization:
- Knowledge of 3D rendering engines with shader programming.
- Experience in optimization techniques for rendering large 3D models.
Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent collaboration and communication skills.

Interested candidates can send their updated resume to: careers@onliestworld.com
Job Type: Full-time
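As a toy illustration of the body-measurement side of this work (not the employer's actual pipeline), a minimal sketch that rescales a mannequin vertex array to a user-supplied height, assuming a Y-up coordinate system and a fabricated four-vertex "mesh":

```python
import numpy as np


def scale_mesh_to_height(vertices: np.ndarray, target_height_cm: float) -> np.ndarray:
    """Uniformly rescale an (N, 3) vertex array so the mannequin's
    bounding-box height matches a measured body height."""
    height = vertices[:, 1].max() - vertices[:, 1].min()  # assumes Y-up
    return vertices * (target_height_cm / height)


# Toy usage: a fake 4-vertex "mesh" rescaled to 170 cm.
verts = np.array([[0, 0, 0], [0, 100, 0], [10, 50, 0], [-10, 50, 0]], dtype=float)
scaled = scale_mesh_to_height(verts, 170.0)
print(scaled[:, 1].max())  # -> 170.0
```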
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Firstsource
Firstsource is a specialized global business process management partner. We provide transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, and the Philippines, we act as a trusted growth partner for leading global brands, including several Fortune 500 and FTSE 100 companies.

Key Responsibilities
- Perform data analysis to uncover patterns, trends, and insights to support decision-making.
- Build, validate, and optimize machine learning models for business use cases in EdTech, Healthcare, BFS, and Media (a minimal validation sketch follows this posting).
- Develop scalable ETL pipelines to preprocess and manage large datasets.
- Communicate actionable insights through visualizations and reports to stakeholders.
- Collaborate with engineering teams to implement and deploy models in production (good to have).

Core Skills
- Data Analysis: Expert in Python (Pandas, NumPy), SQL, R, and exploratory data analysis (EDA).
- Machine Learning: Skilled in Scikit-learn, TensorFlow, PyTorch, and XGBoost for predictive modeling.
- Statistics: Strong understanding of regression, classification, hypothesis testing, and time-series analysis.
- Visualization: Proficient in Tableau, Power BI, Matplotlib, and Seaborn.
- ML Engineering (Good to Have): Experience with model deployment using AWS SageMaker, GCP AI, or Docker.
- Big Data (Good to Have): Familiarity with Spark, Hadoop, and distributed computing frameworks.

⚠️ Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.
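By way of illustration only (not Firstsource tooling), a minimal scikit-learn sketch of the build-and-validate loop such a role involves, using a synthetic dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic binary-classification data stands in for a real business dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

clf = GradientBoostingClassifier(random_state=0)
# 5-fold cross-validated AUC: a quick check before deeper tuning.
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```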
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Title: Cloud Solutions Practice Head
Location: Hyderabad, India (Travel as Needed)
Reports To: CEO / Executive Leadership Team
Employment Type: Full-Time | Senior Leadership Role
Industry: Information Technology & Services | Cloud Solutions | AI & Digital Transformation

Join the Future of Enterprise Cloud
At BPMLinks, we are building a cloud-first future for enterprise clients across the globe. As our Cloud Solutions Practice Head, you won’t just lead a team, you’ll shape a legacy.

Position Overview:
BPMLinks LLC is seeking an accomplished and visionary Cloud Solutions Practice Head to establish and lead our newly launched Cloud Solutions Practice, aligning cloud innovation with business value creation. This is a pivotal leadership role that will oversee the full spectrum of cloud consulting, engineering, cost optimization, migration, and AI/ML-enabled services across our global delivery portfolio. The ideal candidate is a cloud thought leader with deep expertise across AWS, Azure, GCP, and modern data platforms (e.g., Snowflake, Databricks, Azure Data Factory, Oracle). You will play a key role in scaling multi-cloud capabilities, building high-performing teams, and partnering with clients to drive cost efficiency, performance, security, and digital innovation.

Key Responsibilities:
🔹 Practice Strategy & Leadership
- Define and execute the vision, roadmap, and service catalog for the Cloud Solutions Practice.
- Build a world-class delivery team of cloud architects, engineers, DevOps professionals, and data specialists.
- Align the practice’s capabilities with BPMLinks’ broader business transformation initiatives.
🔹 Cloud & Data Architecture Oversight
- Lead the design and deployment of scalable, secure, cost-optimized cloud solutions on AWS, Azure, and GCP.
- Direct complex cloud and data migration programs, including: transitioning from legacy systems to Snowflake, Databricks, and BigQuery; data pipeline orchestration using Azure Data Factory, Airflow, and Informatica; and modernization of Oracle and SQL Server environments.
- Guide hybrid cloud and multi-cloud strategies across IaaS, PaaS, SaaS, and serverless architectures.
🔹 Cloud Cost Optimization & FinOps Leadership
- Architect and institutionalize cloud cost governance frameworks and FinOps best practices.
- Leverage tools like AWS Cost Explorer, Azure Cost Management, and third-party FinOps platforms (a small Cost Explorer sketch follows this posting).
- Drive resource rightsizing, workload scheduling, RIs/SPs adoption, and continuous spend monitoring.
🔹 Client Engagement & Solution Delivery
- Act as executive sponsor for strategic accounts, engaging CXOs and technology leaders.
- Lead cloud readiness assessments, transformation workshops, and solution design sessions.
- Ensure delivery excellence through agile governance, quality frameworks, and continuous improvement.
🔹 Cross-Functional Collaboration & Talent Development
- Partner with sales, marketing, and pre-sales teams to define go-to-market strategies and win pursuits.
- Foster a culture of knowledge sharing, upskilling, certification, and technical excellence.
- Mentor emerging cloud leaders and architects across geographies.
Cloud Services Portfolio You Will Lead:
- Cloud Consulting & Advisory: cloud readiness assessments, cloud strategy and TCO analysis; multi-cloud and hybrid cloud governance, regulatory advisory (HIPAA, PCI, SOC2)
- Infrastructure, Platform & Application Services: virtual machines, networking, containers, Kubernetes, serverless computing; app hosting, API gateways, orchestration, cloud-native replatforming
- Cloud Migration & Modernization: lift-and-shift, refactoring, legacy app migration; zero-downtime migrations and DR strategies
- Data Engineering & Modern Data Platforms: Snowflake, Databricks, BigQuery, Redshift; Azure Data Factory, Oracle Cloud, Informatica, ETL/ELT pipelines
- DevOps & Automation: CI/CD, Infrastructure-as-Code (Terraform, CloudFormation, ARM); release orchestration and intelligent environment management
- Cloud Security & Compliance: IAM, encryption, CSPM, SIEM/SOAR, compliance audits and policies
- Cost Optimization & FinOps: reserved instances, spot instances, scheduling automation; multi-cloud FinOps dashboards, showback/chargeback enablement
- AI/ML & Analytics on Cloud: model hosting (SageMaker, Vertex AI, Azure ML), RAG systems, semantic vector search; real-time analytics with Power BI, Looker, Kinesis
- Managed Cloud Services: 24/7 monitoring (NOC/SOC), SLA-driven support, patching, DR management
- Training & Enablement: certification workshops, cloud engineering training, CoE development

Required Qualifications:
- 15+ years of experience in enterprise IT and cloud solutions, with 5+ years in senior leadership roles
- Expertise in AWS, Azure, GCP (certifications preferred)
- Proven success in scaling cloud practices or large delivery units
- Hands-on experience with data platforms: Snowflake, Databricks, Azure Data Factory, Oracle
- In-depth understanding of FinOps principles, cost governance, and cloud performance tuning
- Excellent executive-level communication, strategic thinking, and client-facing presence

Preferred Qualifications:
- Experience serving clients in regulated industries (healthcare, finance, public sector)
- Strong commercial acumen with experience in pre-sales, solutioning, and deal structuring
- MBA or advanced degree in Computer Science, Engineering, or Technology Management

What We Offer:
- Opportunity to define and scale a global Cloud Practice from the ground up
- Direct influence on innovation, customer impact, and company growth
- Collaboration with a forward-thinking executive team and top-tier AI engineers
- Competitive compensation, performance-linked incentives, and potential equity
- Culture of ownership, agility, and continuous learning
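For a flavor of the FinOps tooling the responsibilities mention, a small boto3 sketch that pulls one month of spend by service from AWS Cost Explorer; the dates, the $100 threshold, and configured AWS credentials are assumptions for illustration:

```python
import boto3

# Illustrative FinOps check: month of spend, grouped by service.
ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2025-06-01", "End": "2025-07-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# Flag services whose monthly cost crosses an arbitrary sketch threshold.
for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if cost > 100:
        print(f"{service}: ${cost:,.2f}")
```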
Posted 1 week ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
Remote
AI/ML Expert – PHP Integration (Remote / India Preferred)
Experience: 2–5 years in AI/ML with PHP integration

About Us:
We’re the team behind Wiser – AI-Powered Product Recommendations for Shopify, helping over 5,000+ merchants increase AOV and conversions through personalized upsell and cross-sell experiences. We’re now scaling our recommendation engine further and are looking for an AI/ML expert who can help us take Wiser to the next level with smarter, faster, and more contextual product recommendations.

Role Overview:
As an AI/ML Engineer, you will:
- Develop and optimize product recommendation algorithms based on customer behavior, sales data, and store context.
- Train models using behavioral and transactional data across multiple Shopify stores.
- Build and test ML pipelines that can scale across thousands of stores.
- Integrate AI outputs into our PHP-based system (Laravel/Symfony preferred).
- Work closely with product and backend teams to improve real-time recommendations, ranking logic, and personalization scores.

Responsibilities:
- Analyze large datasets from Shopify stores (products, orders, sessions)
- Build models for product similarity, user-based and item-based collaborative filtering, and popularity-based + contextual hybrid models (a toy item-based sketch follows this posting)
- Improve existing recommendation logic (e.g., Frequently Bought Together, Complete the Look)
- Implement real-time or near real-time prediction logic
- Ensure AI output integrates smoothly into PHP backend APIs
- Document logic and performance of models for internal review

Requirements:
- 2–5 years of experience in machine learning, AI, or data science
- Strong Python skills (scikit-learn, TensorFlow, PyTorch, Pandas, NumPy)
- Experience building recommendation systems or working with eCommerce data
- Experience integrating AI models with PHP/Laravel applications
- Familiarity with the Shopify ecosystem and personalization is a bonus
- Ability to explain ML logic to non-technical teams
- Bonus: Experience with AWS, S3, SageMaker, or model hosting APIs

What You’ll Get:
- Opportunity to shape AI in one of the fastest-growing Shopify apps
- Work on a product used by 4,500+ stores globally
- Direct collaboration with founders & product team
- Competitive pay + growth opportunities
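A toy sketch of item-based collaborative filtering of the kind this posting lists; the purchase matrix is fabricated and the method is a generic baseline, not Wiser's actual algorithm:

```python
import numpy as np

# Toy user-item purchase matrix (rows = users, columns = products).
R = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# Item-item cosine similarity from co-purchase counts.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / (np.outer(norms, norms) + 1e-9)
np.fill_diagonal(sim, 0.0)  # an item should not recommend itself

# "Customers who bought product 0 also bought..." -> top co-purchased items.
top = np.argsort(sim[0])[::-1][:2]
print(top)  # indices of the two most similar products
```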
Posted 1 week ago
2.0 - 3.0 years
3 - 15 Lacs
Mohali, Punjab
On-site
Job Title: AI/ML Engineer

Job Summary
We are seeking a talented and passionate AI/ML Engineer with at least 3 years of experience to join our growing data science and machine learning team. The ideal candidate will have hands-on experience in building and deploying machine learning models, data preprocessing, and working with real-world datasets. You will collaborate with cross-functional teams to develop intelligent systems that drive business value.

Key Responsibilities
● Design, develop, and deploy machine learning models for various business use cases.
● Analyze large and complex datasets to extract meaningful insights.
● Implement data preprocessing, feature engineering, and model evaluation pipelines.
● Work with product and engineering teams to integrate ML models into production environments.
● Conduct research to stay up to date with the latest ML and AI trends and technologies.
● Monitor and improve model performance over time.

Required Qualifications
● Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
● Minimum 3 years of hands-on experience in building and deploying machine learning models.
● Strong proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, and XGBoost.
● Experience with training, fine-tuning, and evaluating ML models in real-world applications.
● Proficiency in Large Language Models (LLMs) – including experience using or fine-tuning models like BERT, GPT, LLaMA, or open-source transformers.
● Experience with model deployment, serving ML models via REST APIs or microservices using frameworks like FastAPI, Flask, or TorchServe.
● Familiarity with model lifecycle management tools such as MLflow, Weights & Biases, or Kubeflow (an MLflow sketch follows this posting).
● Understanding of cloud-based ML infrastructure (AWS SageMaker, Google Vertex AI, Azure ML, etc.).
● Ability to work with large-scale datasets, perform feature engineering, and optimize model performance.
● Strong communication skills and the ability to work collaboratively in cross-functional teams.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,500,000.00 per year
Benefits: Flexible schedule, paid sick time, paid time off
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred)
Experience: AI/ML: 2 years (Preferred)
Work Location: In person
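As an illustrative sketch of the model-lifecycle tooling listed above, a minimal MLflow tracking run on a toy dataset; it assumes a default local MLflow setup and is not this employer's stack:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    # Log the hyperparameter, the metric, and the model artifact so the
    # run is reproducible and comparable in the MLflow UI.
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```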
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Requirements
- Minimum of 5-7 years of experience in a data science role, with a focus on building and deploying models in production
- Advanced skills in feature engineering and selection (an illustrative sketch follows this posting)
- Proficiency in Python and SQL for data analysis and modelling
- Strong understanding of machine learning algorithms and statistical modelling techniques
- Experience with AWS cloud services, particularly Redshift for data storage and analysis, and SageMaker for model deployment
- Excellent communication and leadership skills
- Strong problem-solving and critical thinking abilities with a keen attention to detail
- Proven track record of successfully leading and delivering data science projects within a fast-paced environment is a must
- Proactive mindset and strong coding skills

Nice-to-Have
- Experience with big data technologies and frameworks (e.g., Hadoop, Spark)
- Familiarity with deep learning frameworks (e.g., TensorFlow, PyTorch)
- Knowledge of data visualization tools and techniques for communicating insights to non-technical stakeholders

Primary Skills: Data Science, Python, AWS Cloud Services, SQL
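An illustrative pandas sketch of the feature-engineering work this role calls for; the transactions table and the specific features (lags, rolling means, recency) are made up:

```python
import pandas as pd

# Hypothetical transactions table; column names are illustrative only.
df = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "order_date": pd.to_datetime(
        ["2025-01-05", "2025-02-10", "2025-03-01", "2025-01-20", "2025-03-15"]),
    "amount": [120.0, 80.0, 200.0, 50.0, 75.0],
})
df = df.sort_values(["customer_id", "order_date"])

# Lag, rolling, and recency features per customer -- common inputs to
# churn or lifetime-value models.
df["prev_amount"] = df.groupby("customer_id")["amount"].shift(1)
df["rolling_mean"] = (
    df.groupby("customer_id")["amount"]
      .transform(lambda s: s.rolling(2, min_periods=1).mean())
)
df["days_since_prev"] = df.groupby("customer_id")["order_date"].diff().dt.days
print(df)
```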
Posted 1 week ago
8.0 years
36 - 60 Lacs
Pune, Maharashtra, India
On-site
About Velsera
Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights.

Velsera provides software and professional services for:
- AI-powered multimodal data harmonization and analytics for drug discovery and development
- IVD development, validation, and regulatory approval
- Clinical NGS interpretation, reporting, and adoption

With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
- Lead and participate in collaborative solutioning sessions with business stakeholders, translating business requirements and challenges into well-defined machine learning/data science use cases and comprehensive AI solution specifications.
- Architect robust and scalable AI solutions that enable data-driven decision-making, leveraging a deep understanding of statistical modeling, machine learning, and deep learning techniques to forecast business outcomes and optimize performance.
- Design and implement data integration strategies to unify and streamline diverse data sources, creating a consistent and cohesive data landscape for AI model development.
- Develop efficient and programmatic methods for synthesizing large volumes of data, extracting relevant features, and preparing data for AI model training and validation.
- Leverage advanced feature engineering techniques and quantitative methods, including statistical modeling, machine learning, deep learning, and generative AI, to implement, validate, and optimize AI models for accuracy, reliability, and performance.
- Simplify data presentation to help stakeholders easily grasp insights and make informed decisions.
- Maintain a deep understanding of the latest advancements in AI and generative AI, including various model architectures, training methodologies, and evaluation metrics.
- Identify opportunities to leverage generative AI to securely and ethically address business needs, optimize existing processes, and drive innovation.
- Contribute to project management processes, providing regular status updates and ensuring the timely delivery of high-quality AI solutions.
- Primarily responsible for contributing to project delivery and maximizing business impact through effective AI solution architecture and implementation.
- Occasionally contribute technical expertise during pre-sales engagements and support internal operational improvements as needed.

Requirements
What do you bring to the table?
- A bachelor's or master's degree in a quantitative field (e.g., Computer Science, Statistics, Mathematics, Engineering) is required.
- A strong background in designing and implementing end-to-end AI/ML pipelines, including feature engineering, model training, and inference. Experience with generative AI pipelines is needed (an illustrative retrieval sketch follows this posting).
- 8+ years of experience in AI/ML development, with at least 3+ years in an AI architecture role.
- Fluency in Python, SQL, and NoSQL is essential. Experience with common data science libraries such as pandas and scikit-learn, as well as deep learning frameworks like PyTorch and TensorFlow, is required.
- Hands-on experience with cloud-based AI/ML platforms and tools, such as AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, or OpenAI, is a must. This includes experience with deploying and managing models in the cloud.
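Purely as a sketch of the retrieval step in a generative-AI (RAG-style) pipeline, with a fake hash-based embedder standing in for a real embedding model and fabricated documents:

```python
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    # Hash-seeded placeholder for a real embedding model (e.g., one hosted
    # on SageMaker or Vertex AI); returns a unit vector per text.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)


docs = [
    "Variant classification guidelines for clinical NGS reporting.",
    "Batch-effect correction in multimodal drug-discovery datasets.",
    "IVD validation requirements for regulatory approval.",
]
doc_vecs = np.stack([embed(d) for d in docs])


def retrieve(query: str, k: int = 2) -> list:
    # Cosine similarity reduces to a dot product on unit vectors.
    scores = doc_vecs @ embed(query)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]


print(retrieve("How do I validate an IVD assay?"))
```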
Benefits
- Flexible Work & Time Off: Embrace hybrid work models and enjoy the freedom of unlimited paid time off to support work-life balance
- Health & Well-being: Access comprehensive group medical and life insurance coverage, along with a 24/7 Employee Assistance Program (EAP) for mental health and wellness support
- Growth & Learning: Fuel your professional journey with continuous learning and development programs designed to help you upskill and grow
- Recognition & Rewards: Get recognized for your contributions through structured reward programs and campaigns
- Engaging & Fun Work Culture: Experience a vibrant workplace with team events, celebrations, and engaging activities that make every workday enjoyable, and many more
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka
On-site
At Takeda, we are guided by our purpose of creating better health for people and a brighter future for the world. Every corporate function plays a role in making sure we — as a Takeda team — can discover and deliver life-transforming treatments, guided by our commitment to patients, our people and the planet. People join Takeda because they share in our purpose. And they stay because we’re committed to an inclusive, safe and empowering work environment that offers exceptional experiences and opportunities for everyone to pursue their own ambitions.

Job ID: R0158759
Date posted: 07/24/2025
Location: Bengaluru, Karnataka

I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description
The Future Begins Here
At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC We Unite in Diversity
Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators' journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

The Opportunity:
The Data Engineer will work directly with architects and product owners on the delivery of data pipelines and platforms for structured and unstructured data as part of a transformational data program. This data program will include an integrated data flow with end-to-end control of data, internalization of numerous systems and processes, broad enablement of automation and near-time data access, efficient data review and query, and enablement of disruptive technologies for next-generation trial designs and insight derivation. We are primarily looking for people who love taking complex data and making it easy to use.

As a Data Engineer you will provide leadership to develop and execute highly complex and large-scale data structures and pipelines to organize, collect, and standardize data to generate insights and address reporting needs, and you will interpret and integrate advanced techniques to ingest structured and unstructured data across a complex ecosystem.

Delivery & Business Accountabilities
- Build and maintain technical solutions required for optimal ingestion, transformation, and loading of data from a wide variety of data sources and large, complex data sets, with a focus on clinical and operational data
- Develop data profiling and data quality methodologies and embed them into the processes involved in transforming data across the systems.
- Manage and influence the data pipeline and analysis approaches; use different technologies, big-data preparation, programming, and loading, as well as initial exploration, in the process of searching for and finding data patterns.
- Use data science input and requests, translating these from data exploration - large-record (billions) and unstructured data sets - to mathematical algorithms, and use various tooling from programming languages to new tools (artificial intelligence and machine learning) to find data patterns, build and optimize models.
- Lead and implement ongoing tests in the search for solutions in data modelling; collect and prepare the training data, tune the data, and optimize algorithm implementations to test, scale, and deploy future models.
- Conduct and facilitate analytical assessments, conceptualizing business needs and translating them into analytical opportunities.
- Lead the development of technical roadmaps and approaches for data analyses to find patterns, design data models, and scale models to a managed production environment within the current or a developing technical landscape.
- Influence and manage data exploration from analysis to scalable models; work independently and decide quickly on transfers in complex data analysis and modelling.

Skills and Qualifications:
- Bachelor’s degree or higher in a quantitative discipline such as Statistics, Mathematics, Engineering, Computer Science, Econometrics, or information sciences such as business analytics or informatics
- 5+ years of experience working in a data engineering role in an enterprise environment
- Strong experience with ETL/ELT design and implementations in the context of large, disparate, and complex datasets (an illustrative PySpark sketch follows this posting)
- Demonstrated experience with a variety of relational database and data warehousing technologies such as AWS Redshift, Athena, RDS, BigQuery
- Demonstrated experience with big data processing systems and distributed computing technologies such as Databricks, Spark, SageMaker, Kafka, Tidal/Airflow, etc.
- Demonstrated experience with DevOps tools such as GitLab, Terraform, Ansible, Chef, etc.
- Experience with developing solutions on cloud computing services and infrastructure in the data and analytics space
- Solution-oriented enabler mindset
- Prior experience with data engineering projects and teams at an enterprise level

Preferred:
- Understanding or application of machine learning and/or deep learning
- Significant experience in an analytical role in the healthcare industry

WHAT TAKEDA ICC INDIA CAN OFFER YOU:
Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:
It is our priority to provide competitive compensation and a benefit package that bridges your personal life with your professional career. Amongst our benefits are:
- Competitive Salary + Performance Annual Bonus
- Flexible work environment, including hybrid working
- Comprehensive Healthcare Insurance Plans for self, spouse, and children
- Group Term Life Insurance and Group Accident Insurance programs
- Employee Assistance Program
- Broad variety of learning platforms
- Diversity, Equity, and Inclusion Programs
- Reimbursements – Home Internet & Mobile Phone
- Employee Referral Program
- Leaves – Paternity Leave (4 weeks), Maternity Leave (up to 26 weeks), Bereavement Leave (5 days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

#Li-Hybrid
Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
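Illustrative only (not Takeda's pipeline): a minimal PySpark sketch of the ingest-profile-load ETL pattern the qualifications above describe; the paths, columns, and quality rules are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clinical-etl-sketch").getOrCreate()

# Hypothetical raw input; header CSV with subject visits.
raw = spark.read.option("header", True).csv("s3://bucket/raw/visits.csv")

clean = (
    raw.withColumn("visit_date", F.to_date("visit_date", "yyyy-MM-dd"))
       .filter(F.col("subject_id").isNotNull())          # basic data quality
       .dropDuplicates(["subject_id", "visit_date"])     # dedupe re-ingests
)

# Simple profiling output before loading downstream.
clean.groupBy("site_id").count().show()

clean.write.mode("overwrite").parquet("s3://bucket/curated/visits/")
```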
Posted 1 week ago
0.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
About Us
Observe.AI is transforming customer service with AI agents that speak, think, and act like your best human agents—helping enterprises automate routine customer calls and workflows, support agents in real time, and uncover powerful insights from every interaction. With Observe.AI, businesses boost automation, deliver faster, more consistent 24/7 service, and build stronger customer loyalty. Trusted by brands like Accolade, Prudential, Concentrix, Cox Automotive, and Included Health, Observe.AI is redefining how businesses connect with customers—driving better experiences and lasting relationships at every touchpoint.

The Opportunity
We are looking for a Senior Data Engineer with strong hands-on experience in building scalable data pipelines and real-time processing systems. You will be part of a high-impact team focused on modernizing our data architecture, enabling self-serve analytics, and delivering high-quality data products. This role is ideal for engineers who love solving complex data challenges, have a growth mindset, and are excited to work on both batch and streaming systems.

What you’ll be doing:
- Build and maintain real-time and batch data pipelines using tools like Kafka, Spark, and Airflow (an illustrative streaming sketch follows this posting).
- Contribute to the development of a scalable LakeHouse architecture using modern data formats such as Delta Lake, Hudi, or Iceberg.
- Optimize data ingestion and transformation workflows across cloud platforms (AWS, GCP, or Azure).
- Collaborate with Analytics and Product teams to deliver data models, marts, and dashboards that drive business insights.
- Support data quality, lineage, and observability using modern practices and tools.
- Participate in Agile processes (Sprint Planning, Reviews) and contribute to team knowledge sharing and documentation.
- Contribute to building data products for inbound (ingestion) and outbound (consumption) use cases across the organization.

Who you are:
- 5-8 years of experience in data engineering or backend systems with a focus on large-scale data pipelines.
- Hands-on experience with streaming platforms (e.g., Kafka) and distributed processing tools (e.g., Spark or Flink).
- Working knowledge of LakeHouse formats (Delta/Hudi/Iceberg) and columnar storage like Parquet.
- Proficient in building pipelines on AWS, GCP, or Azure using managed services and cloud-native tools.
- Experience in Airflow or similar orchestration platforms.
- Strong in data modeling and optimizing data warehouses like Redshift, BigQuery, or Snowflake.
- Exposure to real-time OLAP tools like ClickHouse, Druid, or Pinot.
- Familiarity with observability tools such as Grafana, Prometheus, or Loki.
- Some experience integrating data with MLOps tools like MLflow, SageMaker, or Kubeflow.
- Ability to work with Agile practices using JIRA, Confluence, and participating in engineering ceremonies.

Compensation, Benefits and Perks
- Excellent medical insurance options and free online doctor consultations
- Yearly privilege and sick leaves as per Karnataka S&E Act
- Generous holidays (national and festive), recognition, and parental leave policies
- Learning & Development fund to support your continuous learning journey and professional development
- Fun events to build culture across the organization
- Flexible benefit plans for tax exemptions (i.e., meal card, PF, etc.)

Our Commitment to Inclusion and Belonging
Observe.AI is an Equal Employment Opportunity employer that proudly pursues and hires a diverse workforce.
Observe.AI does not make hiring or employment decisions on the basis of race, color, religion or religious belief, ethnic or national origin, nationality, sex, gender, gender identity, sexual orientation, disability, age, military or veteran status, or any other basis protected by applicable local, state, or federal laws or prohibited by Company policy. Observe.AI also strives for a healthy and safe workplace and strictly prohibits harassment of any kind.

We welcome all people. We celebrate diversity of all kinds and are committed to creating an inclusive culture built on a foundation of respect for all individuals. We seek to hire, develop, and retain talented people from all backgrounds. Individuals from non-traditional backgrounds, historically marginalized or underrepresented groups are strongly encouraged to apply. If you are ambitious, make an impact wherever you go, and you're ready to shape the future of Observe.AI, we encourage you to apply.
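As an illustration of the Kafka-to-Spark streaming pattern this posting describes (broker address, topic name, and window sizes are placeholders, not Observe.AI's configuration):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

# Read a Kafka topic as a stream.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "interactions")
         .load()
)

# Kafka values arrive as bytes; cast, watermark late data, and count
# events per one-minute window.
counts = (
    events.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
          .withWatermark("timestamp", "5 minutes")
          .groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```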
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a high-impact AI/ML Engineer, you will lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You will be an integral part of a fast-paced, outcome-oriented AI & Analytics team, collaborating with data scientists, engineers, and product leaders to translate business use cases into real-time, scalable AI systems.

Your responsibilities in this role will include architecting, developing, and deploying ML models for multimodal problems encompassing vision, audio, and NLP tasks. You will be responsible for the complete ML lifecycle, from data ingestion to model development, experimentation, evaluation, deployment, and monitoring. Leveraging transfer learning and self-supervised approaches where appropriate, you will design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow (a transfer-learning sketch follows this posting). Collaborating with MLOps, data engineering, and DevOps teams, you will operationalize models using technologies such as Docker, Kubernetes, or serverless infrastructure. Continuously monitoring model performance and implementing retraining workflows to ensure sustained accuracy over time will be a key aspect of your role. You will stay informed about cutting-edge AI research and incorporate innovations such as generative AI, video understanding, and audio embeddings into production systems. Writing clean, well-documented, and reusable code to support agile experimentation and long-term platform development is an essential part of this position.

To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field, with a minimum of 5-8 years of experience in AI/ML engineering, including at least 3 years in applied deep learning.

In terms of technical skills, you should be proficient in Python, with knowledge of R or Java being a plus. You should have expertise in ML/DL frameworks like PyTorch, TensorFlow, and scikit-learn, as well as experience in computer vision tasks such as image classification, object detection, OCR, segmentation, and tracking. Familiarity with audio AI tasks like speech recognition, sound classification, and audio embedding models is also desirable. Strong capabilities in data engineering using tools like Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data are required. Knowledge of NLP/LLMs, cloud and MLOps services, deployment and infrastructure technologies, and CI/CD and version-control tools is also beneficial.

Soft skills and competencies that will be valuable in this role include strong analytical and systems thinking, effective communication skills to convey models and results to non-technical stakeholders, the ability to work cross-functionally with various teams, and a demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
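A minimal PyTorch transfer-learning sketch of the pattern mentioned above (freeze a pretrained backbone, train a new head); the class count and the random data are placeholders, and the weights enum assumes torchvision 0.13 or newer:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a pretrained backbone and replace the classification head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                   # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, 5)     # new 5-class head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random data, just to show the loop shape.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 5, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```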
Posted 1 week ago
0.0 years
0 Lacs
Pune, Maharashtra, India
On-site
- Independently design, develop, and implement machine learning and NLP models.
- Build and fine-tune LLM-based solutions (prompt engineering, few-shot prompting, chain-of-thought prompting).
- Develop robust, production-quality code for AI/ML applications using Python.
- Build, deploy, and monitor models using AWS services (SageMaker, Bedrock, Lambda, etc.).
- Conduct data cleaning, feature engineering, and model evaluation on large datasets.
- Experiment with new GenAI tools, LLM architectures, and APIs (HuggingFace, LangChain, OpenAI, etc.).
- Collaborate with senior data scientists for reviews but own end-to-end solutioning tasks.
- Document models, pipelines, experiments, and results clearly and systematically.
- Stay updated with the latest in AI/ML, GenAI, and cloud technologies.
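A small sketch of the few-shot prompting mentioned above: the examples and labels are fabricated, and the downstream LLM call (e.g., to Bedrock or a HuggingFace model) is deliberately omitted:

```python
# Few-shot prompt construction sketch; ticket examples are made up.
EXAMPLES = [
    ("The card was charged twice.", "billing"),
    ("I cannot log in to my account.", "authentication"),
]


def build_prompt(query: str) -> str:
    # Render each example as a demonstration, then append the new query.
    shots = "\n".join(
        f"Ticket: {text}\nCategory: {label}" for text, label in EXAMPLES
    )
    return (
        "Classify each support ticket into a category.\n\n"
        f"{shots}\n"
        f"Ticket: {query}\nCategory:"
    )


print(build_prompt("My invoice total looks wrong."))
```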
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Manager

Job Description & Summary
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a SAP consulting generalist at PwC, you will focus on providing consulting services across various SAP applications to clients, analysing their needs, implementing software solutions, and offering training and support for effective utilisation of SAP applications. Your versatile knowledge will allow you to assist clients in optimising operational efficiency and achieving their strategic objectives.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within….

Responsibilities:
AI Architecture & Development
· Design and implement generative AI models (e.g., Transformers, GANs, VAEs, Diffusion Models) (a small generative sketch follows this posting).
· Architect Retrieval-Augmented Generation (RAG) systems and multi-agent frameworks.
· Fine-tune pre-trained models for domain-specific tasks (e.g., NLP, vision, genomics).
· Ensure model scalability, performance, and interpretability.
System Integration & Deployment
· Integrate AI models into full-stack applications using modern frameworks (React, Node.js, Django).
· Deploy models using cloud platforms (AWS SageMaker, Azure ML, GCP Vertex AI).
· Implement CI/CD pipelines and containerization (Docker, Kubernetes).
Collaboration & Leadership
· Work with data scientists, engineers, and domain experts to translate business/scientific needs into AI solutions.
· Lead architectural decisions across the model lifecycle: training, deployment, monitoring, and versioning.
· Provide technical mentorship and guidance to junior team members.
Compliance & Documentation
· Ensure compliance with data privacy standards (HIPAA, GDPR).
· Maintain comprehensive documentation for models, systems, and workflows.

Required Qualifications:
· Bachelor’s or Master’s in Computer Science, Engineering, Data Science, or a related field.
· 5+ years in AI/ML development; 3+ years in architecture or technical leadership roles.
· Proficiency in Python, JavaScript, and frameworks like TensorFlow and PyTorch.
· Experience with cloud platforms (AWS, Azure, GCP) and DevOps tools.
· Strong understanding of NLP, computer vision, or life sciences applications.

Preferred Qualifications:
· Experience in domains like marketing, capital markets, or life sciences (e.g., drug discovery, genomics).
· Familiarity with Salesforce Einstein and other enterprise AI tools.
· Knowledge of regulatory standards (FDA, EMA) and ethical AI practices.
· Experience with multimodal data (text, image, genomic, clinical).

Mandatory skill sets: Gen AI Architect
Preferred skill sets: Gen AI
Years of experience required: 10+ yrs
Education qualification: B.Tech, MBA, MCA, M.Tech

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: AI Architecture
Optional Skills: Generative AI
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
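For illustration, a tiny Hugging Face pipeline sketch standing in for the generative-AI work this role architects; gpt2 is used only as a small, freely available stand-in model, not PwC's production stack:

```python
from transformers import pipeline

# Load a small text-generation model; a production system would swap in
# a fine-tuned or hosted LLM behind the same interface.
generator = pipeline("text-generation", model="gpt2")

out = generator(
    "Generative AI in drug discovery can",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```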
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Ciklum is looking for a Data Engineer to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
As a Data Engineer, become a part of a cross-functional development team working with GenAI solutions for digital transformation across Enterprise Products. The prospective team you will be working with is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services. The team focuses on the ability to deliver comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team with professionals of varying backgrounds from tax technical, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation.

Responsibilities:
- Build, deploy, and maintain mission-critical analytics solutions that process terabytes of data quickly at big-data scales
- Contribute design, code, and configurations; manage data ingestion, real-time streaming, batch processing, and ETL across multiple data storages
- Performance-tune complicated SQL queries and data flows

Requirements:
- Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design
- Hands-on implementation experience with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, SQL and NoSQL data warehouses
- Experience with the Azure cloud data platform
- Experience working with vector databases (Milvus, Postgres, etc.)
- Knowledge of embedding models and retrieval-augmented generation (RAG) architectures
- Understanding of LLM pipelines, including data preprocessing for GenAI models (a chunking sketch follows this posting)
- Experience deploying data pipelines for AI/ML workloads, ensuring scalability and efficiency
- Familiarity with model monitoring, feature stores (Feast, Vertex AI Feature Store), and data versioning
- Experience with CI/CD for ML pipelines (Kubeflow, MLflow, Airflow, SageMaker Pipelines)
- Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming)
- Knowledge of data warehousing design, implementation, and optimization
- Knowledge of data quality testing, automation, and results visualization
- Knowledge of BI report and dashboard design and implementation (Power BI)
- Experience supporting data scientists and complex statistical use cases highly desirable

What's in it for you?
- Strong community: Work alongside top professionals in a friendly, open-door environment
- Growth focus: Take on large-scale projects with a global impact and expand your expertise
- Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications
- Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies
- Care: We’ve got you covered with company-paid medical insurance, mental health support, and financial & legal consultations

About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress.

India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.

Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum!

Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
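A minimal sketch of the GenAI data-preprocessing step this posting's requirements mention: splitting documents into overlapping chunks before embedding them into a vector store. The sizes are arbitrary and the downstream embedding call is omitted:

```python
def chunk(text: str, size: int = 500, overlap: int = 100) -> list:
    """Split text into overlapping chunks for a RAG ingestion pipeline.
    Overlap preserves context that would otherwise be cut at boundaries."""
    chunks, start = [], 0
    step = size - overlap
    while start < len(text):
        chunks.append(text[start:start + size])
        start += step
    return chunks


doc = "lorem ipsum " * 200  # placeholder document body
pieces = chunk(doc, size=500, overlap=100)
print(len(pieces), len(pieces[0]))  # number of chunks, size of the first
```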
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
We are looking for an enthusiastic AI/ML Developer with 3-5 years of experience to design, develop, and deploy AI/ML solutions. The ideal candidate is passionate about AI, skilled in machine learning, deep learning, and MLOps, and eager to work on cutting-edge projects.

Key Skills & Experience:
- Programming: Python (TensorFlow, PyTorch, Scikit-learn, Pandas).
- Machine Learning: Supervised, Unsupervised, Deep Learning, NLP, Computer Vision.
- Model Deployment: Flask, FastAPI, AWS SageMaker, Google Vertex AI, Azure ML.
- MLOps & Cloud: Docker, Kubernetes, MLflow, Kubeflow, CI/CD pipelines.
- Big Data & Databases: Spark, Dask, SQL, NoSQL (PostgreSQL, MongoDB).

Soft Skills:
- Strong analytical and problem-solving mindset.
- Passion for AI innovation and continuous learning.
- Excellent teamwork and communication abilities.

Qualifications:
- Bachelor’s/Master’s in Computer Science, AI, Data Science, or related fields.
- AI/ML certifications are a plus.

Career Level - IC4

Diversity & Inclusion:
An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate, while blending work life in. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. In order to nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes certain core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, so they can perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before.
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Ciklum is looking for a Data Engineer to join our team full-time in India.

We are a custom product engineering company that supports both multinational organizations and scaling startups to solve their most complex business challenges. With a global team of over 4,000 highly skilled developers, consultants, analysts and product owners, we engineer technology that redefines industries and shapes the way people live.

About the role:
As a Data Engineer, become a part of a cross-functional development team working with GenAI solutions for digital transformation across Enterprise Products. The prospective team you will be working with is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes to support the delivery of tax services. The team focuses on the ability to deliver comprehensive, value-added, and efficient tax services to our clients. It is a dynamic team with professionals of varying backgrounds from tax technical, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, tool design, and implementation.

Responsibilities:
- Build, deploy, and maintain mission-critical analytics solutions that process terabytes of data quickly at big-data scales
- Contribute design, code, and configurations; manage data ingestion, real-time streaming, batch processing, and ETL across multiple data storages
- Performance-tune complicated SQL queries and data flows

Requirements:
- Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design
- Hands-on implementation experience with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, SQL and NoSQL data warehouses
- Experience with the Azure cloud data platform
- Experience working with vector databases (Milvus, Postgres, etc.)
- Knowledge of embedding models and retrieval-augmented generation (RAG) architectures
- Understanding of LLM pipelines, including data preprocessing for GenAI models
- Experience deploying data pipelines for AI/ML workloads, ensuring scalability and efficiency
- Familiarity with model monitoring, feature stores (Feast, Vertex AI Feature Store), and data versioning
- Experience with CI/CD for ML pipelines (Kubeflow, MLflow, Airflow, SageMaker Pipelines)
- Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming)
- Knowledge of data warehousing design, implementation, and optimization
- Knowledge of data quality testing, automation, and results visualization
- Knowledge of BI report and dashboard design and implementation (Power BI)
- Experience supporting data scientists and complex statistical use cases highly desirable

What's in it for you?
- Strong community: Work alongside top professionals in a friendly, open-door environment
- Growth focus: Take on large-scale projects with a global impact and expand your expertise
- Tailored learning: Boost your skills with internal events (meetups, conferences, workshops), Udemy access, language courses, and company-paid certifications
- Endless opportunities: Explore diverse domains through internal mobility, finding the best fit to gain hands-on experience with cutting-edge technologies
- Care: We’ve got you covered with company-paid medical insurance, mental health support, and financial & legal consultations

About us:
At Ciklum, we are always exploring innovations, empowering each other to achieve more, and engineering solutions that matter. With us, you’ll work with cutting-edge technologies, contribute to impactful projects, and be part of a One Team culture that values collaboration and progress.

India is a strategic innovation hub for Ciklum, with growing teams in Chennai and Pune leading advancements in EdgeTech, AR/VR, IoT, and beyond. Join us to collaborate on game-changing solutions and take your career to the next level.

Want to learn more about us? Follow us on Instagram, Facebook, LinkedIn.

Explore, empower, engineer with Ciklum!

Interested already? We would love to get to know you! Submit your application. We can’t wait to see you at Ciklum.
Posted 1 week ago
7.0 years
35 - 40 Lacs
India
Remote
Job Title: Azure DevOps Engineer (MLOps) - Lead
Location: Remote (initial 2-3 months of travel to the Abu Dhabi, UAE office is a MUST; you can then continue remotely from India)
Employment Type: Full-time
About The Role:
Our client, a leading AWS Premier Partner, is seeking a highly skilled Lead DevOps/MLOps Engineer (Azure, Terraform) to join their growing cloud and AI engineering team. This role is ideal for candidates with a strong foundation in cloud DevOps practices and a passion for implementing MLOps solutions at scale.
Key Responsibilities:
Design, implement, and manage CI/CD pipelines using tools such as Jenkins, GitHub Actions, or Azure DevOps
Develop and maintain Infrastructure-as-Code using Terraform
Manage container orchestration environments using Kubernetes
Ensure cloud infrastructure is optimized, secure, and monitored effectively
Collaborate with data science teams to support ML model deployment and operationalization
Implement MLOps best practices, including model versioning, deployment strategies (e.g., blue-green), monitoring (data drift, concept drift), and experiment tracking (e.g., MLflow; see the sketch after this posting)
Build and maintain automated ML pipelines to streamline model lifecycle management
Required Skills:
7+ years of experience in DevOps and/or MLOps roles
Proficient in CI/CD tools: Jenkins, GitHub Actions, Azure DevOps
Strong expertise in Terraform and cloud-native infrastructure (AWS preferred)
Hands-on experience with Kubernetes, Docker, and microservices
Solid understanding of cloud networking, security, and monitoring
Scripting proficiency in Bash and Python
Preferred Skills:
Experience with MLflow, TFX, Kubeflow, or SageMaker Pipelines
Knowledge of model performance monitoring and ML system reliability
Familiarity with the AWS MLOps stack or equivalent tools on Azure/GCP
Skills: DevOps, Bash, Kubeflow, SageMaker Pipelines, security, Terraform, Python, microservices, monitoring, TFX, Kubernetes, Jenkins, GitHub Actions, Azure, CI/CD tools, cloud networking, Azure DevOps, MLflow, Docker
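As a concrete illustration of the experiment-tracking practice this role names, here is a minimal MLflow sketch. It assumes a local tracking setup; the experiment name, model, and hyperparameter are hypothetical.

```python
# Minimal sketch of MLflow experiment tracking: log a parameter, a metric,
# and the trained model for a run. Experiment name and model are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("fraud-model")  # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(C=1.0, max_iter=200).fit(X_tr, y_tr)
    mlflow.log_param("C", 1.0)
    mlflow.log_metric("accuracy", accuracy_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")  # stored as a run artifact
```

Runs logged this way can then be compared in the MLflow UI, which is what makes blue-green style promotion decisions auditable.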
Posted 1 week ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hiring: Python Developers (6–9 Yrs) | Chennai & Hyderabad | F2F Interview – 26th July
Are you a Python developer with strong cloud experience? Join us at Virtusa, a global leader in digital engineering. We are conducting a Face-to-Face Interview Drive for multiple roles in Chennai and Hyderabad on Saturday, 26th July.
Position Details:
Role: Python Developer
Experience: 6–9 Years
Job Type: Full-Time
No. of Positions: 20
Work Location: Chennai – Navallur Office / Hyderabad – Campus Office
Interview Date: Saturday, 26th July
Interview Mode: Face-to-Face ONLY
Key Responsibilities:
Design, develop, and maintain scalable applications using Python
Build and manage cloud-native applications across AWS, Azure, or GCP
Implement and manage API integrations, including authentication mechanisms (e.g., OAuth, API keys; a brief sketch follows this posting)
Set up and manage Python development environments using pip, Conda, and virtual environments
Write clean, efficient, and testable code following best practices
Collaborate with cross-functional teams (DevOps, Data Engineering, Product)
Troubleshoot application issues and ensure optimal performance
Must-Have Skills:
Strong programming expertise in Python
Experience in Python environment setup and dependency management (pip, conda)
Hands-on knowledge of API integration techniques (OAuth, API keys, RESTful APIs)
Cloud Expertise (must have any one combination):
Python + AWS (e.g., S3, Lambda, SageMaker AI, EC2)
Python + Azure (e.g., Azure ML, Functions, Blob Storage)
Python + GCP (e.g., Vertex AI, GCS, Cloud Functions)
Preferred Qualifications:
Experience working in Agile environments
Familiarity with CI/CD pipelines
Exposure to cloud security and cost optimization practices
Knowledge of containerization (Docker, Kubernetes) is a plus
How to Apply:
📨 Send your updated resume to shalini.v@saranshinc.com
Take advantage of this great opportunity to accelerate your career and join us at the upcoming interview drive!
#PythonJobs #CloudCareers #AWS #Azure #GCP #HiringNow #InterviewDrive #ChennaiJobs #HyderabadJobs #TechHiring #PythonDevelopers
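A minimal sketch of the OAuth 2.0 client-credentials pattern this posting lists, using the requests library. The token URL, client ID/secret, and API endpoint are all hypothetical placeholders.

```python
# Minimal sketch: OAuth 2.0 client-credentials flow with requests.
# All URLs and credentials below are hypothetical placeholders.
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"  # hypothetical
API_URL = "https://api.example.com/v1/reports"      # hypothetical

# Step 1: exchange client credentials for a bearer token.
token_resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "client_credentials",
        "client_id": "my-client-id",          # hypothetical
        "client_secret": "my-client-secret",  # hypothetical; load from a vault, not code
    },
    timeout=10,
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]

# Step 2: call the protected API with the bearer token.
reports = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
reports.raise_for_status()
print(reports.json())
```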
Posted 1 week ago
7.0 years
0 Lacs
Chandigarh
On-site
Job Summary:
We are seeking an experienced and driven Senior AI Engineer to lead advanced AI initiatives and drive real-world impact through cutting-edge machine learning solutions. The ideal candidate will have 7+ years of hands-on experience in building and deploying models across NLP, computer vision, and deep learning, along with a proven track record of leading teams and managing technical deliverables.
Key Responsibilities:
Design, develop, and deploy robust machine learning models for real-world applications
Lead and mentor a team of junior engineers and researchers, ensuring timely delivery and code quality
Train and fine-tune models using large, diverse, and complex datasets (a brief fine-tuning sketch follows this posting)
Apply AI techniques in natural language processing, computer vision, and deep learning
Collaborate cross-functionally with product, data, and engineering teams to align models with business goals
Utilize cloud-based AI platforms (e.g., AWS SageMaker, Azure ML, Google AI Platform) for scalable deployment
Monitor model performance in production, implement improvements, and ensure robustness
Report on project status, milestones, and team performance to senior stakeholders
Required Skills & Qualifications:
7+ years of experience in Python and machine learning frameworks such as TensorFlow or PyTorch
Extensive experience with data science libraries including NumPy, Pandas, and Scikit-learn
Solid understanding of supervised, unsupervised, and deep learning techniques
Strong leadership skills with experience managing or mentoring technical teams
Familiarity with cloud AI infrastructure and scalable model deployment
Excellent problem-solving abilities and the capability to thrive in fast-paced environments
Preferred Qualifications:
Experience in model performance tuning, monitoring, and retraining pipelines
Exposure to MLOps practices and CI/CD workflows for the ML model lifecycle
Knowledge of model explainability, bias mitigation, and ethical AI frameworks
Experience working in agile environments and reporting to cross-functional leadership
Why Join Us:
Build with Purpose: Work on impactful, high-scale products that solve real problems using cutting-edge technologies.
Tech-First Culture: Join a team where engineering is at the core; we prioritize clean code, scalability, automation, and continuous learning.
Freedom to Innovate: You'll have ownership from day one, with room to experiment, influence architecture, and bring your ideas to life.
Collaborate with the Best: Work alongside passionate engineers, product thinkers, and designers who value clarity, speed, and technical excellence.
Paladin Tech is an equal opportunity employer. We are committed to creating an inclusive and diverse workplace and welcome candidates of all backgrounds and identities.
Job Types: Full-time, Permanent
Benefits: Food provided
Work Location: In person
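A minimal sketch of the kind of supervised training/fine-tuning loop this role references, in PyTorch. The model, data, and hyperparameters are illustrative stand-ins, not a production recipe.

```python
# Minimal sketch of a supervised training loop in PyTorch.
# Model architecture, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 20)         # stand-in features
y = torch.randint(0, 2, (256,))  # stand-in binary labels

for epoch in range(5):
    optimizer.zero_grad()        # clear gradients from the previous step
    loss = loss_fn(model(X), y)  # forward pass + loss
    loss.backward()              # backpropagate
    optimizer.step()             # update weights
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```

Fine-tuning a pre-trained model follows the same loop, typically with a lower learning rate and some layers frozen.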
Posted 1 week ago
4.0 years
2 - 6 Lacs
Hyderabad
On-site
Job Requirements
Phenom Introduction:
Our purpose is to help a billion people find the right work! Phenom is an AI-powered talent experience platform that is redefining the HR tech space. We have grown into a global organization with offices in 6 countries and over 1,500 employees. As an HR tech unicorn organization, innovation and creativity are within our DNA. Come help us make every talent moment Phenomenal!
Core Description Parameters:
We are seeking a highly skilled and motivated Data Analyst to join our AI & Ethics Team. In this role, you will work directly with our Data Science, Product and Engineering teams to analyze and improve our Phenomenal products for our Phenomenal clients. Have you ever collected your own dataset because you were so curious about some problem? Do you have experience asking "why" and "how" to get to the heart of a dataset? Have you ever helped an executive make a million-dollar decision? Would you like to?
What You'll Do:
Use SQL to analyze, hypothesize, and solve complex business challenges, as well as to identify and visualize novel opportunities for growth
Work with BI tools like Looker and Superset
Perform data cleaning, transformation and validation using Excel, Python or R
Partner with stakeholders to define key metrics and deliver insights on performance
Design and execute A/B tests and evaluate their outcomes statistically
Communicate findings clearly through visualizations and presentations to technical and non-technical audiences
Extract and report data using database queries, with a preference for cloud experience, pipeline automation, and scalable analytics
Ensure data integrity and accuracy by adhering to best practices in data governance and documentation
Use statistical modeling and prediction techniques to drive actionable insights
Demonstrate curiosity and a problem-solving, research mindset
Work independently with minimal supervision and communicate your findings to technical and non-technical audiences
What You've Done:
Data Manipulation: Efficiently manipulate and analyze large datasets. Strong experience in extracting and reporting data using complex queries.
Statistical Analysis: Proficiency with statistical concepts and techniques, including exploratory data analysis, hypothesis testing, confidence intervals, and experiment design. Proficiency in using statistical methods to derive meaningful conclusions from data.
Data Visualization: Proficiency in using visualization libraries like Matplotlib or Seaborn to create clear, concise, and intuitive visualizations that communicate complex data insights to stakeholders.
Model Evaluation and Validation: Proficiency in assessing model performance using metrics such as accuracy, precision, recall, F1 score, and area under the curve (AUC). Proficiency in techniques like cross-validation, train-test splits, and hyperparameter tuning to ensure robust model performance (a brief evaluation sketch follows this posting).
Communication: Strong written, verbal, and visual communication skills.
Desired/Good-To-Have Skills:
Cloud Platforms: Familiarity with cloud platforms for data analysis and orchestration, such as Snowflake, AWS SageMaker, AWS Lambda, Airflow, Jenkins, Postman
Machine Learning: Proficiency with fundamental machine learning algorithms such as linear regression, logistic regression, decision trees, random forests, and neural networks. Ability to use ML and statistical modeling to solve business problems.
Feature Selection and Dimensionality Reduction: Familiarity with techniques like feature selection, feature engineering, regularization, and dimensionality reduction (e.g., PCA) to improve model efficiency, reduce overfitting, and enhance interpretability.
Model Deployment: Experience in deploying machine learning models in production environments, utilizing frameworks like Flask or Django. Understanding of model serving, API development, and cloud deployment to enable real-time predictions.
Dashboarding: Familiarity with BI and dashboarding software such as Superset, Looker and Tableau.
Candidates with experience in the credit card, banking, or healthcare domains and/or strong mathematical and statistical backgrounds will be preferred.
Education and Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field
Experience: 4+ years of experience as a Data Analyst
Proficiency in writing complex SQL queries
Preference for candidates based in Hyderabad
Benefits:
We want you to be your best self and to pursue your passions!
Health and wellness benefits/programs to support holistic employee health
Flexible hours and working schedules, as well as parental leave for new parents
Growing organization with career pathing and development opportunities
Tons of perks and extras in every location for all Phenoms!
Diversity, Equity, & Inclusion:
Our commitment to diversity runs deep! Diversity is essential to building phenomenal teams, products, and customer experiences. Phenom is proud to be an equal opportunity employer taking collective action to build a more inclusive environment where every candidate and employee feels welcomed. We recognize there is more to be done. Our teams are committed to continuous improvement until these powerful ideas are ingrained in our culture for Phenom and employers everywhere!
#LI-JG1
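A minimal sketch of the evaluation workflow described above: a train-test split, 5-fold cross-validation, and the accuracy/precision/recall/F1/AUC metrics, run on a synthetic dataset so it stays self-contained.

```python
# Minimal sketch of model evaluation: train/test split, cross-validation,
# and the standard classification metrics, on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(random_state=42).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]  # positive-class probabilities for AUC

print("5-fold CV accuracy:", cross_val_score(clf, X_tr, y_tr, cv=5).mean())
print("accuracy:", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred))
print("recall:", recall_score(y_te, pred))
print("f1:", f1_score(y_te, pred))
print("auc:", roc_auc_score(y_te, proba))
```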
Posted 1 week ago
1.0 - 8.0 years
5 - 25 Lacs
Gurgaon
On-site
Lead Assistant Manager
EXL/LAM/1429335 | Services | Gurgaon
Posted On: 22 Jul 2025
End Date: 05 Sep 2025
Required Experience: 1 - 8 Years
Basic Section:
Number Of Positions: 1
Band: B2
Band Name: Lead Assistant Manager
Cost Code: D010803
Campus/Non Campus: NON CAMPUS
Employment Type: Permanent
Requisition Type: New
Max CTC: 500000.0000 - 2500000.0000
Complexity Level: Not Applicable
Work Type: Hybrid – Working Partly From Home And Partly From Office
Organisational:
Group: Analytics
Sub Group: Banking & Financial Services
Organization: Services
LOB: Banking & Financial Services
SBU: Analytics
Country: India
City: Gurgaon
Center: Gurgaon-SEZ BPO Solutions
Skills: FRAUD AND RISK MANAGEMENT, SQL, PYTHON
Minimum Qualification: B TECH
Certification: No data available
Job Description:
The Senior Statistical Data Analyst is responsible for designing unique analytic approaches to detect, assess, and recommend the optimal customer treatment to reduce friction and enhance experience while properly managing fraud risk with data-driven and statistical methods. You will analyze large amounts of account and transaction data to build customer-level insights and derive recommendations, methods, and models that reduce friction and enhance experience on fund availability, transaction/fund hold times, and more, while managing the customer experience. This role requires critical thinking and analytical savviness to work in a fast-paced environment, but it can be a rewarding opportunity to help bring a great banking experience and empower customers to achieve their financial goals.
Responsibilities:
Analyze large amounts of data/transactions to derive business insights and create innovative solutions/models/strategies
Aggregate and analyze internal and external risk datasets to understand the performance of fraud risk at the customer level
Analyze customers' banking/transaction behaviors and build predictive models (simple ones like logistic regression and linear regression) to predict churn or negative outcomes, or run correlation analyses to understand relationships
Develop personalized segmentations and micro-segmentations to identify customers based on their fraud risk, banking behaviors, and value (a brief segmentation sketch follows this posting)
Conduct analyses for data-driven recommendations, with reporting dashboards, to optimize customer treatment regarding friction reduction and fund availability across the entire banking journey
Skillset:
Analytics professional, preferably with experience in fraud analytics
Minimum 2 years of experience in the relevant domain - data analysis and building models/strategies
Strong knowledge and working experience in SQL and Python is a must
Experience analyzing data with statistical approaches in Python (e.g., in a Jupyter notebook): for example, clustering analysis, decision trees, linear regression, logistic regression, correlation analysis
Knowledge of Tableau and BI tools
Hands-on use of AWS (e.g., S3, EC2, EMR, Athena, SageMaker and more) is a plus
Strong communication and interpersonal skills
Strong knowledge of financial products, including debit cards, credit cards, lending products, and deposit accounts, is a plus
Experience working at a FinTech or start-up is a plus
Notice period: Max 60 days; immediate joiners preferred
Education:
Bachelors or Masters in a quantitative field such as Economics, Statistics, Mathematics
BTech/MTech/MBA from Tier 1 colleges (IIT, NIT, IIM)
Workflow Type: L&S-DA-Consulting
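A minimal sketch of the micro-segmentation idea above, clustering synthetic behavioral features with k-means. The feature set and number of segments are hypothetical.

```python
# Minimal sketch of customer segmentation with k-means on synthetic
# behavioral features. Features and k are hypothetical choices.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-in behavioral features per customer:
# monthly spend, transaction count, days since last transaction.
X = rng.normal(size=(500, 3))

# Scale features so no single feature dominates the distance metric.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)

for k in range(4):
    print(f"segment {k}: {int((labels == k).sum())} customers")
```

In practice, each segment would then be profiled against fraud-risk and value metrics to decide which treatments (e.g., fund-hold policies) apply.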
Posted 1 week ago
3.0 years
3 - 12 Lacs
Mohali
On-site
We are seeking a skilled and experienced DevOps Engineer with expertise in architecting, implementing, and managing hybrid cloud infrastructure to enable seamless deployment and scaling of high-performance applications and machine learning workloads. Proven experience in cloud services, on-premises systems, container orchestration, automation, and multi-database management is required.
Key Responsibilities & Experience:
- Design, implement, and manage scalable AWS infrastructure leveraging services such as EC2, ECS, Lambda, S3, DynamoDB, Cognito, SageMaker, Amazon ECR, SES, Route 53, VPC Peering, and Site-to-Site VPN to support secure, high-performance, and resilient cloud environments.
- Apply best practices in network security, including firewall configuration and IAM policy management.
- Architect and maintain large-scale, multi-database systems integrating PostgreSQL, MongoDB, DynamoDB, and Elasticsearch to support millions of records, low-latency search, and real-time analytics.
- Build and maintain CI/CD pipelines using GitHub Actions and Jenkins, enabling automated testing, Docker builds, and seamless deployments to production.
- Manage containerized deployments using Docker, and orchestrate services using Amazon ECS for scalable and resilient application environments.
- Implement and maintain IaC frameworks using Terraform, AWS CloudFormation, and Ansible to ensure consistent, repeatable, and scalable infrastructure deployments.
- Develop Ansible playbooks to automate system provisioning, OS-level configurations, and application deployments across hybrid environments.
- Configure Amazon CloudWatch and Zabbix for proactive monitoring, health checks, and custom alerts to maintain system reliability and uptime (a brief CloudWatch sketch follows this posting).
- Administer Linux-based servers, apply system-hardening techniques, and maintain OS-level and network security best practices.
- Manage SSL/TLS certificates, configure DNS records, and integrate email services using Amazon SES and SMTP tools.
- Deploy and manage infrastructure for ML workloads using AWS SageMaker, optimizing model training, hosting, and resource utilization for cost-effective performance.
Preferred Qualifications:
- 3+ years of experience in DevOps and cloud infrastructure
- Bachelor's degree in Computer Science or Engineering
- Experience deploying and managing machine learning models
- Hands-on experience managing multi-node Elasticsearch clusters and designing scalable, high-performance search infrastructure
- Experience designing and operating hybrid cloud architectures, integrating on-premises and cloud-based systems
Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,200,000.00 per year
Benefits:
Flexible schedule
Paid sick time
Paid time off
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: DevOps: 3 years (Required)
Work Location: In person
Speak with the employer: +91 8360518086
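A minimal sketch of defining a CloudWatch alarm with boto3, in the spirit of the proactive monitoring described above. The alarm name, instance ID, region, and SNS topic ARN are hypothetical placeholders.

```python
# Minimal sketch: create a CloudWatch alarm on EC2 CPU utilization via boto3.
# Names, IDs, region, and ARN below are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-app-server",  # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical
    Statistic="Average",
    Period=300,                # evaluate in 5-minute windows
    EvaluationPeriods=2,       # two consecutive breaching windows trigger the alarm
    Threshold=80.0,            # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # hypothetical SNS topic
)
```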
Posted 1 week ago
3.0 years
3 - 4 Lacs
Mohali
On-site
Job Title: Pre-Sales Technical Business Analyst (AI/ML & MERN Stack)
Job Type: Full-time | Pre-Sales | Technical Consulting
About the Role:
We are seeking a dynamic Pre-Sales Technical Business Analyst with a strong foundation in AI/ML solutions, MERN stack technologies, and API integration. This role bridges the gap between clients' business requirements and our technical solutions, playing a pivotal role in shaping proposals, leading product demos, and translating client needs into technical documentation and strategic solutions.
Key Responsibilities:
Client Engagement:
Collaborate with the sales team to understand client requirements, pain points, and objectives
Participate in discovery calls, solution walkthroughs, and RFP/RFI responses
Solution Design & Technical Analysis:
Analyze and document business needs, converting them into detailed technical requirements
Propose architectural solutions using AI/ML models and the MERN stack (MongoDB, Express.js, React.js, Node.js)
Provide input on data pipelines, model training, and AI workflows where needed
Technical Presentations & Demos:
Prepare and deliver compelling demos and presentations for clients
Act as a technical expert during pre-sales discussions to communicate the value of proposed solutions
Documentation & Proposal Support:
Draft technical sections of proposals, SoWs, and functional specs
Create user flows, diagrams, and system interaction documents
Collaboration:
Work closely with engineering, product, and delivery teams to ensure alignment between business goals and technical feasibility
Conduct feasibility analyses and risk assessments on proposed features or integrations
Required Skills & Experience:
3+ years in a Business Analyst or Pre-Sales Technical Consultant role
Proven experience in AI/ML workflows (understanding of the ML lifecycle, model deployment, data prep)
Strong technical knowledge of the MERN stack, including RESTful APIs, database schema design, and frontend/backend integration
Solid understanding of API design, third-party integrations, and system interoperability (a brief API-authentication sketch follows this posting)
Ability to translate complex technical concepts into simple business language
Hands-on experience with documentation tools like Swagger/Postman for API analysis
Proficient in writing user stories, business cases, and technical specifications
Preferred Qualifications:
Exposure to cloud platforms (AWS, Azure, GCP) and ML platforms (SageMaker, Vertex AI, etc.)
Experience with Agile/Scrum methodologies
Familiarity with AI use cases like recommendation systems, NLP, and predictive analytics
Experience with data visualization tools or BI platforms is a plus
Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹40,000.00 per month
Work Location: In person
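A minimal sketch of the API-key authentication pattern this role evaluates, using the requests library. The endpoint and header name are hypothetical; many APIs use a vendor-specific header such as X-API-Key.

```python
# Minimal sketch: calling a REST API protected by an API key.
# Endpoint and header name are hypothetical placeholders.
import requests

resp = requests.get(
    "https://api.example.com/v1/orders",    # hypothetical endpoint
    headers={"X-API-Key": "demo-key-123"},  # hypothetical key; keep real keys out of source code
    timeout=10,
)
resp.raise_for_status()  # surface 4xx/5xx errors instead of silently continuing
print(resp.json())
```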
Posted 1 week ago
6.0 years
5 - 15 Lacs
India
On-site
Role: Lead Python/AI Developer
Experience: 6/6+ Years
Location: Ahmedabad (Gujarat)
Roles and Responsibilities:
Help the Python/AI team build Python/AI solution architectures leveraging open-source technologies
Drive technical discussions with clients along with Project Managers
Create effort estimation matrices for solutions/deliverables for the delivery team
Implement AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, and edge/error handling
Collaborate with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance
Mentor and coach junior Python/AI/ML engineers
Share knowledge through technical presentations
Implement new Python/AI features to high-quality coding standards
Must Have:
B.Tech/B.E. in Computer Science, IT, Data Science, ML or a related field
Strong proficiency in the Python programming language
Strong verbal and written communication skills, with analytics and problem-solving abilities
Proficiency in debugging and exception handling
Professional experience in developing and operating AI systems in production
Hands-on, strong programming skills with experience in Python, in particular modern ML & NLP frameworks (scikit-learn, PyTorch, TensorFlow, Hugging Face, SpaCy, Facebook AI XLM/mBERT, etc.; a brief Hugging Face sketch follows this posting)
Hands-on experience with AWS services such as EC2, S3, Lambda, and AWS SageMaker
Experience with a collaborative development workflow: version control (we use GitHub), code reviews, DevOps (incl. automated testing), CI/CD
Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn or Plotly, Pandas
Prior experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB)
Experience working in Agile methodology
Good to Have:
A Master's degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field
Python frameworks (Django/Flask/FastAPI) & API integration
AI/ML/DL/MLOps certification from AWS
Experience with the OpenAI API
Proficiency in the Japanese language
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,500,000.00 per year
Benefits: Provident Fund
Work Location: In person
Expected Start Date: 14/08/2025
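A minimal sketch of loading a pre-trained NLP model through the Hugging Face transformers pipeline API, one of the frameworks listed above. The default checkpoint is downloaded on first use, and the output labels depend on that model.

```python
# Minimal sketch: inference with a pre-trained model via the
# Hugging Face transformers pipeline API.
from transformers import pipeline

# Loads a default pre-trained sentiment model on first use.
classifier = pipeline("sentiment-analysis")

print(classifier("The deployment went smoothly and latency dropped by half."))
# Example output shape: [{'label': 'POSITIVE', 'score': 0.99...}]
```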
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
• Build GenAI-enabled solutions using online and offline LLMs, SLMs and TLMs tailored to domain-specific problems
• Deploy agentic AI workflows and use cases using frameworks like LangGraph, Crew AI etc.
• Apply NLP, predictive modelling and optimization techniques to develop scalable machine learning solutions
• Integrate enterprise knowledge bases using vector databases and Retrieval-Augmented Generation (RAG; a brief retrieval sketch follows this posting)
• Apply advanced analytics to address complex challenges in the Healthcare, BFSI and Manufacturing domains
• Deliver embedded analytics within business systems to drive real-time operational insights
Required Skills & Experience:
• 3–5 years of experience in applied Data Science or AI roles
• Experience working in any one of the following domains: BFSI, Healthcare/Health Sciences, Manufacturing or Utilities
• Proficiency in Python, with hands-on experience in libraries such as scikit-learn and TensorFlow
• Practical experience with GenAI (LLMs, RAG, vector databases), NLP and building scalable ML solutions
• Experience with time series forecasting, A/B testing, Bayesian methods and hypothesis testing
• Strong skills in working with structured and unstructured data, including advanced feature engineering
• Familiarity with analytics maturity models and the development of Analytics Centres of Excellence (CoEs)
• Exposure to cloud-based ML platforms like Azure ML, AWS SageMaker or Google Vertex AI
• Data visualization using Matplotlib, Seaborn, Plotly; experience with Power BI is a plus
What We Look For (Values & Behaviours):
• AI-First Thinking – Passion for leveraging AI to solve business problems
• Data-Driven Mindset – Ability to extract meaningful insights from complex data
• Collaboration & Agility – Comfortable working in cross-functional teams with a fast-paced mindset
• Problem-Solving – Think beyond the obvious to unlock AI-driven opportunities
• Business Impact – Focus on measurable outcomes and real-world adoption of AI
• Continuous Learning – Stay updated with the latest AI trends, research and best practices
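A minimal sketch of the retrieval step behind RAG, as referenced above: embed documents and a query, rank by cosine similarity, and pass the best match into the LLM prompt. A toy hashing embedder stands in for a real embedding model and vector database so the sketch stays self-contained.

```python
# Minimal sketch of RAG retrieval: rank documents against a query by
# cosine similarity. A toy hashing embedder stands in for a real
# embedding model; production systems would use a vector database.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for an embedding model: hash tokens into a fixed vector."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v  # unit-normalize so dot product = cosine similarity

docs = [
    "Claims are reimbursed within 30 days of approval.",
    "Preventive maintenance reduces unplanned downtime.",
    "KYC checks are mandatory before account opening.",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "how long does claim reimbursement take"
scores = doc_vecs @ embed(query)  # cosine similarities against all documents
best = int(np.argmax(scores))
print("retrieved context:", docs[best])  # would be inserted into the LLM prompt
```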
Posted 1 week ago