1436 Clustering Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

15.0 - 20.0 years

17 - 22 Lacs

Hyderabad

Work from Office

Source: Naukri

AIX is IBM's leading open-standards-based UNIX operating system, providing a scalable, secure, and robust infrastructure solution for enterprise customers. As an Engineering Manager for AIX Operating Systems, you will:

- Lead a team of highly skilled system software developers in designing, implementing, and supporting new enhancements, performance optimizations, scaling, and new hardware enablement for AIX core components.
- Define and drive the technical roadmap for AIX Operating System development, aligning with the overall product strategy, and manage the overall development lifecycle to ensure high-quality code and deliverables.
- Collaborate with cross-functional teams, including hardware, software, and QA/test teams, to ensure seamless integration of Operating System components.
- Work with product managers, senior leaders, and customers to understand business needs and implement them in the AIX Operating System.

Required education: Bachelor's Degree
Preferred education: Bachelor's Degree

Required technical and professional expertise:
- Bachelor's/Master's Degree in Computer Science or a related technical discipline, with experience in system software product development using C on Unix/Linux.
- 15+ years of technical engineering experience, including 5+ years of technical leadership/management experience.
- Expertise in Operating System internals: device drivers, kernel, networking, security, file systems, high availability, clustering, and virtualization.
- Strong leadership and team management skills, with the ability to motivate and inspire a team of developers.
- Excellent communication and collaboration skills, with the ability to work effectively across teams; proven interpersonal, oral, and written communication skills.

Preferred technical and professional experience:
- Strong technical background with hands-on experience in C programming and low-level programming on Linux-based operating systems.
- Prior Operating System development experience in UNIX (HP-UX, Solaris, AIX) or Linux operating systems.

Posted 7 hours ago

Apply

0 years

0 Lacs

India

Remote

Source: LinkedIn

About Us
Evangelist Apps is a UK-based custom software development company specializing in full-stack web and mobile app development, CRM/ERP solutions, workflow automation, and AI-powered platforms. Trusted by global brands like British Airways, Third Bridge, Hästens Beds, and Duxiana, we help clients solve complex business problems with technology. We’re now expanding into AI-driven services and are looking for our first Junior AI Developer to join the team. This is an exciting opportunity to help lay the groundwork for our AI capabilities.

Role Overview
As our first Junior AI Developer, you’ll work closely with our senior engineers and product teams to research, prototype, and implement AI-powered features across client solutions. You’ll contribute to machine learning models, LLM integrations, and intelligent automation systems that enhance user experiences and internal workflows.

Key Responsibilities
- Assist in building and fine-tuning ML models for tasks like classification, clustering, or NLP
- Integrate AI services (e.g., OpenAI, Hugging Face, AWS, or Vertex AI) into applications
- Develop proof-of-concept projects and deploy lightweight models into production
- Preprocess datasets, annotate data, and evaluate model performance
- Collaborate with product, frontend, and backend teams to deliver end-to-end solutions
- Keep up to date with new trends in machine learning and generative AI

Must-Have Skills
- Solid understanding of Python and popular AI/ML libraries (e.g., scikit-learn, pandas, TensorFlow, or PyTorch)
- Familiarity with foundational ML concepts (e.g., supervised/unsupervised learning, overfitting, model evaluation)
- Experience with REST APIs and working with JSON-based data
- Exposure to LLMs or prompt engineering is a plus
- Strong problem-solving attitude and eagerness to learn
- Good communication and documentation skills

Nice-to-Haves (Good to Learn on the Job)
- Experience with cloud-based ML tools (AWS SageMaker, Google Vertex AI, or Azure ML)
- Basic knowledge of MLOps and deployment practices
- Prior internship or personal projects involving AI or automation
- Contributions to open source or Kaggle competitions

What We Offer
- Mentorship from experienced engineers and a high-learning environment
- Opportunity to work on real-world client projects from day one
- Exposure to multiple industry domains including expert networks, fintech, healthtech, and e-commerce
- Flexible working hours and remote-friendly culture
- Rapid growth potential as our AI practice scales
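For illustration only (not from the posting): a minimal scikit-learn sketch of the clustering work named above. The toy dataset, cluster count, and seed are hypothetical stand-ins.

```python
# Minimal k-means clustering sketch with scikit-learn (hypothetical toy data).
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy data standing in for real feature vectors.
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

model = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = model.fit_predict(X)

# Silhouette score is one common way to evaluate cluster quality.
print(f"silhouette: {silhouette_score(X, labels):.3f}")
```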

Posted 9 hours ago

Apply

5.0 years

0 Lacs

Gurgaon

Remote

About Us:
At apexanalytix, we’re lifelong innovators! Since our founding nearly four decades ago, we’ve been consistently growing, profitable, and delivering the best procure-to-pay solutions to the world. We’re the perfect balance of established company and start-up. You will find a unique home here. And you’ll recognize the names of our clients: most of them are on The Global 2000. They trust us to give them the latest in controls, audit, and analytics software every day. Industry analysts consistently rank us as a top supplier management solution, and you’ll be helping build that reputation. Read more about apexanalytix - https://www.apexanalytix.com/about/

Job Details

The Role
Quick Take - We are looking for a highly skilled systems engineer with experience in virtualization, Linux, Kubernetes, and server infrastructure. The engineer will be responsible for designing, deploying, and maintaining enterprise-grade cloud infrastructure using Apache CloudStack (or similar technology) and Kubernetes on the Linux operating system.

The Work -

Hypervisor Administration & Engineering
- Architect, deploy, and manage Apache CloudStack for private and hybrid cloud environments.
- Manage and optimize KVM or similar virtualization technology.
- Implement high-availability cloud services using redundant networking, storage, and compute.
- Automate infrastructure provisioning using OpenTofu, Ansible, and API scripting.
- Troubleshoot and optimize hypervisor networking (virtual routers, isolated networks), storage, and API integrations.
- Working experience with shared storage technologies like GFS and NFS.

Kubernetes & Container Orchestration
- Deploy and manage Kubernetes clusters in on-premises and hybrid environments.
- Integrate Cluster API (CAPI) for automated K8s provisioning.
- Manage Helm, Azure DevOps, and ingress (Nginx/Citrix) for application deployment.
- Implement container security best practices, policy-based access control, and resource optimization.

Linux Administration
- Configure and maintain RedHat HA clustering (Pacemaker, Corosync) for mission-critical applications.
- Manage GFS2 shared storage, cluster fencing, and high-availability networking.
- Ensure seamless failover and data consistency across cluster nodes.
- Perform Linux OS hardening, security patching, performance tuning, and troubleshooting.

Physical Server Maintenance & Hardware Management
- Perform physical server installation, diagnostics, firmware upgrades, and maintenance.
- Work with SAN/NAS storage, network switches, and power management in data centers.
- Implement out-of-band management (IPMI/iLO/DRAC) for remote server monitoring and recovery.
- Ensure hardware resilience, failure prediction, and proper capacity planning.

Automation, Monitoring & Performance Optimization
- Automate infrastructure provisioning, monitoring, and self-healing capabilities.
- Implement Prometheus, Grafana, and custom scripting via API for proactive monitoring.
- Optimize compute, storage, and network performance in large-scale environments.
- Implement disaster recovery (DR) and backup solutions for cloud workloads.

Collaboration & Documentation
- Work closely with DevOps, Enterprise Support, and software developers to streamline cloud workflows.
- Maintain detailed infrastructure documentation, playbooks, and incident reports.
- Train and mentor junior engineers on CloudStack, Kubernetes, and HA clustering.

The Must-Haves -
- 5+ years of experience in CloudStack or a similar virtualization platform, Kubernetes, and Linux system administration.
- Strong expertise in Apache CloudStack (4.19+) or a similar virtualization platform, the KVM hypervisor, and Cluster API (CAPI).
- Extensive experience in RedHat HA clustering (Pacemaker, Corosync) and GFS2 shared storage.
- Proficiency in OpenTofu, Ansible, Bash, Python, and Go for infrastructure automation.
- Experience with networking (VXLAN, SDN, BGP) and security best practices.
- Hands-on expertise in physical server maintenance, IPMI/iLO, RAID, and SAN storage.
- Strong troubleshooting skills in Linux performance tuning, logs, and kernel debugging.
- Knowledge of monitoring tools (Prometheus, Grafana, Alertmanager).

Preferred Qualifications
- Experience with multi-cloud (AWS, Azure, GCP) or hybrid cloud environments.
- Familiarity with CloudStack API customization and plugin development.
- Strong background in disaster recovery (DR) and backup solutions for cloud environments.
- Understanding of service meshes, ingress, and SSO.
- Experience in Cisco UCS platform management.

Over the years, we’ve discovered that the most effective and successful associates at apexanalytix are people who have a specific combination of values, skills, and behaviors that we call “The apex Way”. Read more about The apex Way - https://www.apexanalytix.com/careers/

Benefits
At apexanalytix we know that our associates are the reason behind our successes. We truly value you as an associate and part of our professional family. Our goal is to offer the very best benefits possible to you and your loved ones. When it comes to benefits, whether for yourself or your family, the most important aspect is choice. And we get that. apexanalytix offers competitive benefits for the countries that we serve, in addition to our BeWell@apex initiative that encourages employees’ growth in six key wellness areas: Emotional, Physical, Community, Financial, Social, and Intelligence. With resources such as a strong Mentor Program, Internal Training Portal, plus Education, Tuition, and Certification Assistance, we provide tools for our associates to grow and develop.
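Purely as an illustration of the Python API scripting this role describes (not part of the posting): a minimal sketch that reports node health in a Kubernetes cluster with the official Python client, assuming a kubeconfig is already configured.

```python
# Sketch: inspect Kubernetes node health with the official Python client.
# Assumes a valid kubeconfig (e.g., for an on-prem cluster) is present.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # Each node carries a list of conditions; "Ready" is the key one.
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{node.metadata.name}: Ready={ready}")
```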

Posted 9 hours ago

Apply

8.0 - 10.0 years

4 - 6 Lacs

Chennai

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose
The purpose of the role is to create exceptional architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction.

Mandatory Skills
Data Science, ML, DL, NLP or Computer Vision, Python, TensorFlow, PyTorch, Django, PostgreSQL

Preferred Skills
Gen AI, LLM, RAG, LangChain, Vector DB, Azure Cloud, MLOps, banking exposure

Competency Building and Branding
- Ensure completion of necessary trainings and certifications
- Develop Proof of Concepts (POCs), case studies, demos, etc. for new growth areas based on market and customer research
- Develop and present Wipro’s point of view on solution design and architecture by writing white papers, blogs, etc.
- Attain market referenceability and recognition through top analyst rankings, client testimonials, and partner credits
- Be the voice of Wipro’s thought leadership by speaking in forums (internal and external)
- Mentor developers, designers, and junior architects in the project for their further career development and enhancement
- Contribute to the architecture practice by conducting selection interviews, etc.

Mandatory
- Strong understanding of Data Science, machine learning, and deep learning principles and algorithms.
- Proficiency in Python and in frameworks such as TensorFlow and PyTorch.
- Ability to work with large datasets and knowledge of data preprocessing techniques.
- Strong backend Python developer.
- Experience applying machine learning techniques, Natural Language Processing, or Computer Vision using TensorFlow and PyTorch.
- Build and deploy end-to-end ML models and leverage metrics to support predictions, recommendations, search, and growth strategies.
- Expert in applying ML techniques such as classification, clustering, deep learning, optimization methods, and supervised and unsupervised techniques.
- Optimize model performance and scalability for real-time inference and deployment.
- Experiment with different hyperparameters and model configurations to improve AI model quality.
- Ensure AI/ML solutions are developed, and validations are performed, in accordance with Responsible AI guidelines.

Team Management
- Resourcing: Anticipate new talent requirements as per market/industry trends or client requirements; hire adequate and right resources for the team.
- Talent Management: Ensure adequate onboarding and training for team members to enhance capability and effectiveness; build an internal talent pool and ensure their career progression within the organization; manage team attrition; drive diversity in leadership positions.
- Performance Management: Set goals for the team, conduct timely performance reviews, and provide constructive feedback to own direct reports; ensure that Performance Nxt is followed for the entire team.
- Employee Satisfaction and Engagement: Lead and drive engagement initiatives for the team; track team satisfaction scores and identify initiatives to build engagement within the team.

Mandatory Skills: Generative AI.
Experience: 8-10 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
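As an illustrative aside (not from the posting): a minimal PyTorch training loop for the kind of classification work listed above. The data shapes and network here are hypothetical.

```python
# Minimal PyTorch classifier training loop (toy data; shapes are illustrative).
import torch
import torch.nn as nn

# 100 samples, 20 features, 3 classes: hypothetical stand-ins.
X = torch.randn(100, 20)
y = torch.randint(0, 3, (100,))

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # logits vs. integer class labels
    loss.backward()
    optimizer.step()
print(f"final loss: {loss.item():.4f}")
```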

Posted 9 hours ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Key Responsibilities
- Lead and architect end-to-end data migrations from on-premise and legacy systems to Snowflake, ensuring optimal performance, scalability, and cost-efficiency.
- Design and develop reusable data ingestion and transformation frameworks using Python.
- Build and optimize real-time ingestion pipelines using Kafka, Snowpipe, and the COPY command.
- Utilize SnowConvert to migrate and optimize legacy ETL and SQL logic for Snowflake.
- Design and implement high-performance Snowflake data models, including materialized views, clustering keys, and result caching strategies.
- Monitor resource usage and implement auto-suspend/auto-resume, query profiling, and cost-control measures to manage compute and storage effectively.
- Drive cost governance initiatives, providing insights into credit usage and optimizing workload distribution.
- Integrate Snowflake with AWS services such as S3, Lambda, Glue, and Step Functions to ensure a robust data ecosystem.
- Mentor junior engineers, enforce best practices in development and code quality, and champion agile data engineering practices.

Required Skills and Experience
- 10+ years of experience in data engineering with a focus on enterprise ETL and cloud data platforms.
- 4+ years of hands-on experience in Snowflake development and architecture.
- Expertise in advanced Snowflake features such as Snowpark, Streams & Tasks, Secure Data Sharing, Data Masking, and Time Travel.
- Proven ability to architect enterprise-grade Snowflake solutions optimized for performance, governance, and scalability.
- Proficient in Python for building orchestration tools, automation, and reusable data pipelines.
- Solid knowledge of AWS services, including S3, IAM, Lambda, Glue, and Step Functions.
- Hands-on experience with SnowConvert or similar tools for legacy code conversion.
- Familiarity with real-time data streaming technologies such as Kafka, Kinesis, or other event-based systems.
- Strong SQL skills with proven experience in query tuning, profiling, and performance optimization.
- Deep understanding of legacy ETL tools, preferably with experience in Ab Initio.
- Exposure to CI/CD pipelines, version control systems (e.g., Git), and automated deployment practices.

Preferred Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience in migrating on-premises or mainframe data warehouses to Snowflake.
- Familiarity with BI/analytics tools such as Tableau, Power BI, or Looker.
- Knowledge of data security and compliance best practices, including data masking, RBAC, and OAuth integration.
- Snowflake certifications (Developer, Architect) are a strong plus.
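For illustration only: a hedged sketch of applying a clustering key and checking clustering health with the Snowflake Python connector. The connection values and the orders table/columns are hypothetical.

```python
# Sketch: apply a clustering key and inspect clustering health in Snowflake.
# Connection parameters and the ORDERS table/columns are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="COMPUTE_WH", database="ANALYTICS", schema="PUBLIC",
)
cur = conn.cursor()

# Cluster large tables on columns commonly used in range filters.
cur.execute("ALTER TABLE orders CLUSTER BY (order_date, region)")

# SYSTEM$CLUSTERING_INFORMATION reports depth/overlap stats for the key.
cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('orders')")
print(cur.fetchone()[0])
conn.close()
```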

Posted 9 hours ago

Apply

1.0 - 2.0 years

0 Lacs

Chennai

On-site

Qualification: BE or MCA or any equivalent professional degree
Gender Criteria: Male

Experience
· 1-2 years of experience with Linux knowledge
· Hands-on experience setting up, managing, and monitoring application/web servers (Jetty, IIS, WebLogic, Tomcat, etc.)
· Work experience in a Cloudflare environment
· Experience with DevOps is an added advantage
· Work experience with clustering; handle server/application migration activities on a need basis

Job Description
· Work with DB/App/network teams and handle relevant issues
· Set up and monitor applications/servers in Dev/UAT/Live environments
· Host applications behind Cloudflare
· Provide technical guidance to teams on a need basis
· Troubleshoot production issues and support customers with technical evidence
· Handle server, video server, studio stream, and application migration activities on a need basis
· Identify the root cause of reported critical/high issues
· Handle planned/unplanned maintenance activities if any

Roles and Responsibilities
· Data center, video server, and production server management experience
· OBS / Linux / SQL troubleshooting skills
· Live application setup/monitoring
· Analyze reported production issues and find their root cause
· Collaborate with application/support teams
· Monitor different servers and provide performance improvement recommendations
· Provide on-time solutions to production issues
· Communicate verbally and in writing with stakeholders and customers if required

Work Experience and Skills
Essential:
· Experience in Java / Linux / SQL
· Product deployment / monitoring / support
· Troubleshooting of production issues
· Product integration
· Suggesting alternate solutions
· Incident management experience
Desirable: Gaming domain and video server knowledge

Personal Qualities/Traits
Essential:
· Analytical, with reporting skills
· Good written and oral communication

Job Type: Full-time
Pay: Up to ₹35,000.00 per month
Schedule: Rotational shift
Ability to commute/relocate: Urapakkam, Chennai - 603 202, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Preferred)
Experience: total work: 1 year (Required); Linux: 1 year (Required)
Expected Start Date: 20/06/2025
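As a purely illustrative sketch of the server monitoring described above (not part of the posting): a basic HTTP availability check in Python. The endpoint URLs are hypothetical.

```python
# Sketch: basic availability check for monitored app servers.
# The endpoint list is hypothetical; real checks would feed an alerting tool.
import requests

ENDPOINTS = [
    "http://app-uat.example.internal:8080/health",
    "http://app-live.example.internal:8080/health",
]

for url in ENDPOINTS:
    try:
        r = requests.get(url, timeout=5)
        status = "UP" if r.ok else f"DEGRADED ({r.status_code})"
    except requests.RequestException as exc:
        status = f"DOWN ({exc.__class__.__name__})"
    print(f"{url}: {status}")
```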

Posted 9 hours ago

Apply

5.0 years

3 - 7 Lacs

Ahmedabad

On-site

Location: Ahmedabad / Pune
Required Experience: 5+ Years
Preferred: Immediate Joiner

We are looking for a highly skilled Lead Data Engineer (Snowflake) to join our team. The ideal candidate will have extensive experience with Snowflake and cloud platforms, along with a strong understanding of ETL processes, data warehousing concepts, and programming languages. If you have a passion for working with large datasets, designing scalable database schemas, and solving complex data problems, we would love to hear from you.

Key Responsibilities:
- Design, implement, and optimize data pipelines and workflows using Apache Airflow
- Develop incremental and full-load strategies with monitoring, retries, and logging
- Build scalable data models and transformations in dbt, ensuring modularity, documentation, and test coverage
- Develop and maintain data warehouses in Snowflake
- Ensure data quality, integrity, and reliability through validation frameworks and automated testing
- Tune performance through clustering keys, warehouse scaling, materialized views, and query optimization
- Monitor job performance and resolve data pipeline issues proactively
- Build and maintain data quality frameworks (null checks, type checks, threshold alerts)
- Partner with data analysts, scientists, and business stakeholders to translate reporting and analytics requirements into technical specifications

Required Skills & Qualifications:
- Snowflake (data modeling, performance tuning, access control, external tables, streams & tasks)
- Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)
- dbt (modular SQL development, Jinja templating, testing, documentation)
- Proficiency in SQL, Spark, and Python
- Experience building data pipelines on cloud platforms like AWS, GCP, or Azure
- Strong knowledge of data warehousing concepts and ELT best practices
- Familiarity with version control systems (e.g., Git) and CI/CD practices
- Familiarity with infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments
- Excellent problem-solving skills and the ability to work independently

Perks: Flexible Timings | 5 Days Working | Healthy Environment | Celebration | Learn and Grow | Build the Community | Medical Insurance Benefit
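For illustration only: a minimal Airflow DAG sketch matching the pipeline work described above. The DAG id, task, and load logic are hypothetical placeholders.

```python
# Sketch: a minimal Airflow DAG with one incremental-load task and retries.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def incremental_load(**context):
    # A real task would load only rows newer than the last high-water mark.
    print("loading increment for", context["ds"])

with DAG(
    dag_id="snowflake_incremental_load",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    PythonOperator(task_id="load", python_callable=incremental_load)
```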

Posted 10 hours ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Key Responsibilities
- Develop and maintain data pipelines and ETL processes using Snowflake, Streams & Tasks, and Snowpipe.
- Leverage Snowpark to build scalable data transformations in Python.
- Implement Secure Data Sharing, Row-Level Security, and Dynamic Data Masking for governed data access.
- Create and manage materialized views, automatic clustering, and search optimization for performance tuning.
- Collaborate with data scientists, analysts, and DevOps teams to deliver end-to-end data solutions.
- Monitor query performance, troubleshoot issues, and recommend optimizations using Query Profile and Resource Monitors.

Required Technical Skills
- Strong expertise in Snowflake SQL and data modeling (Star/Snowflake schema).
- Hands-on with Snowpark, Streams/Tasks, and Secure Data Sharing.
- Proficiency in Python or Java for data processing with Snowpark.
- Experience with cloud platforms: AWS, Azure, or GCP (Snowflake-hosted environments).
- Familiarity with CI/CD, Git, and orchestration tools (e.g., Airflow, dbt).
- Working knowledge of data governance, data security, and compliance best practices.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Snowflake certification is a strong plus (e.g., SnowPro Core or Advanced).
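As an illustrative aside (not from the posting): a minimal Snowpark transformation sketch in Python. The connection values and the sales table are hypothetical.

```python
# Sketch: a small Snowpark transformation (connection values hypothetical).
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

session = Session.builder.configs({
    "account": "my_account", "user": "my_user", "password": "...",
    "warehouse": "COMPUTE_WH", "database": "ANALYTICS", "schema": "PUBLIC",
}).create()

# Aggregate a hypothetical SALES table and persist the result.
(session.table("sales")
    .filter(col("amount") > 0)
    .group_by("region")
    .agg(sum_("amount").alias("total_amount"))
    .write.save_as_table("sales_by_region", mode="overwrite"))
```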

Posted 10 hours ago

Apply


0 years

6 - 9 Lacs

Noida

On-site

Req ID: 299670

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Systems Integration Analyst to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

Position General Duties and Tasks:
- Participate in research, design, implementation, and optimization of machine learning models
- Help AI product managers and business stakeholders understand the potential and limitations of AI when planning new products
- Understanding of Revenue Cycle Management processes like claims filing and adjudication
- Hands-on experience in Python
- Build data ingest and data transformation platforms
- Identify transfer learning opportunities and new training datasets
- Build AI models from scratch and help product managers and stakeholders understand results
- Analyse the ML algorithms that could be used to solve a given problem and rank them by their success probability
- Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world
- Verify data quality, and/or ensure it via data cleaning
- Supervise the data acquisition process if more data is needed
- Define validation strategies
- Define the pre-processing or feature engineering to be done on a given dataset
- Train models and tune their hyperparameters
- Analyse the errors of the model and design strategies to overcome them
- Deploy models to production
- Create APIs and help business customers put the results of your AI models into operation

Education: Bachelor's in computer science or similar; Master's preferred.

Skills:
- Hands-on programming experience working on enterprise products
- Demonstrated proficiency in multiple programming languages with a strong foundation in a statistical platform such as Python, R, SAS, or MATLAB
- Knowledge of Deep Learning/Machine Learning and Artificial Intelligence
- Experience building AI models using classification and clustering techniques
- Expertise in visualizing and manipulating big datasets
- Strong in MS SQL
- Acumen to take a complex problem, break it down into workable pieces, and code a solution
- Excellent verbal and written communication skills
- Ability to work in and help define a fast-paced, team-focused environment
- Proven record of delivering and completing assigned projects and initiatives
- Ability to deploy large-scale solutions to an enterprise estate
- Strong interpersonal skills
- Understanding of Revenue Cycle Management processes like claims filing and adjudication is a plus

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA is an equal opportunity employer and considers all applicants without regard to race, color, religion, citizenship, national origin, ancestry, age, sex, sexual orientation, gender identity, genetic information, physical or mental disability, veteran or marital status, or any other characteristic protected by law. We are committed to creating a diverse and inclusive environment for all employees. If you need assistance or an accommodation due to a disability, please inform your recruiter so that we may connect you with the appropriate team.
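For illustration only: a small scikit-learn sketch of the model training and hyperparameter tuning duties listed above, on a hypothetical toy dataset.

```python
# Sketch: hyperparameter tuning with cross-validation in scikit-learn.
# The toy dataset stands in for real prepared features.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=15, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5,
    scoring="f1",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```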

Posted 10 hours ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Senior Test Analyst
Gurgaon/Bangalore, India

The Senior Test Analyst will be responsible for ensuring the quality and reliability of our software applications within Global Technology by implementing comprehensive testing processes and methodologies. This position involves designing and executing test plans, test cases, and test scripts to validate functional and non-functional requirements. The Senior Test Analyst will collaborate with cross-functional teams, including developers, business analysts, and project managers, to understand requirements and ensure that testing aligns with business objectives. The ideal candidate will possess robust analytical skills, a deep understanding of testing best practices, and a commitment to continuous improvement in testing processes.

What You’ll Be DOING
What will your essential responsibilities include?
- Conduct detailed test analysis and preparation through the creation of estimates, test plans, test cases, and scripts.
- Understand the business requirements and translate the business needs into test scenarios/cases.
- Create comprehensive Test Plan / Test Approach documentation and manage the sign-off process.
- Produce, maintain, and communicate the following test artefacts: concise test-level progress metrics, progress reports, and completion reports.
- Provide a comprehensive approach to defect management, i.e. defect clustering, triaging, etc.
- Ensure delivery of the various areas of Systems Test, such as functional vs. non-functional testing.
- Determine how to implement the various stages of test, e.g. Systems Test, Integration Testing, Regression Testing, etc., during the project lifecycle.
- Provide day-to-day support and guidance in test principles, techniques, and tools to other Test Team members.
- Actively take part in developing test automation scripts and ensure efficient test coverage by reviewing the scripts.
- Collaborate with our testing vendor partners as well as our AXA XL delivery team members to ensure comprehensive testing coverage and the timely delivery of software changes/enhancements.
- Identify, communicate, and track testing risks and issues, then help develop mitigation plans to bring them to closure.
- Work with the TCoE team to understand best practices and effectively implement them on your assigned applications to achieve our expected quality results.
- Provide guidance and training to your assigned testing teams on our TCoE’s best practices, tools, and methodologies.
- Define, collect, and analyze key performance indicators (KPIs) and metrics to evaluate testing effectiveness and drive improvements.

What You Will BRING
We’re looking for someone who has these abilities and skills:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Effective understanding of software development methodologies, including agile.
- Proficiency in testing frameworks, tools, and best practices, especially TMMi and ISTQB.
- Robust knowledge of the various types of software testing: static, smoke, system, system integration, regression, UAT, etc.
- Experience in database and API testing and knowledge of SQL, NoSQL, REST, and SOAP services.
- Experience testing applications hosted on cloud-based platforms such as Azure, AWS, etc.
- Working knowledge of test automation tools such as Selenium, Playwright, UFT, Rest Assured, etc. (see the sketch after this posting).
- Familiarity with JIRA and JIRA Xray.
- Experience working with teams across distributed geographical boundaries, particularly with the majority of the business representatives located in Europe and India.
- Excellent interpersonal and communication skills to effectively collaborate with both technical and non-technical stakeholders.
- Experience with property & casualty insurance lines of business and products is preferred.

You will report to the Test Lead.

Who WE Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We OFFER

Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It’s about helping one another — and our business — to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion with 20 Chapters around the globe
- Robust support for Flexible Working Arrangements
- Enhanced family friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, Sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 Sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.

Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.

AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.

For more information, please see axaxl.com/sustainability
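As a purely illustrative sketch of the API testing skills listed above (not part of the posting): a pytest check against a hypothetical REST endpoint using the requests library.

```python
# Sketch: a REST API test in pytest with requests (endpoint is hypothetical).
import requests

BASE_URL = "https://api.example.com"

def test_policy_lookup_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/policies/12345", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Validate both presence and type of key fields.
    assert isinstance(body["policyId"], str)
    assert body["status"] in {"ACTIVE", "LAPSED", "CANCELLED"}
```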

Posted 10 hours ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description
The Global Data Insight & Analytics organization is looking for a top-notch Software Engineer with Machine Learning knowledge and experience to join our team and drive the next generation of the AI/ML (Mach1ML) platform. In this role you will work in a small, cross-functional team. The position will collaborate directly and continuously with other engineers, business partners, product managers and designers from distributed locations, and will release early and often. The team you will be working on is focused on building the Mach1ML platform – an AI/ML enablement platform to democratize Machine Learning across the Ford enterprise (like OpenAI’s GPT, Facebook’s FBLearner, etc.) to deliver next-gen analytics innovation. We strongly believe that data has the power to help create great products and experiences which delight our customers. We believe that actionable and persistent insights, based on a high-quality data platform, help business and engineering make more impactful decisions. Our ambitions reach well beyond existing solutions, and we are in search of innovative individuals to join this Agile team. This is an exciting, fast-paced role which requires outstanding technical and organization skills combined with critical thinking, problem-solving and agile management tools to support team success.

Responsibilities
What you'll be able to do: As a Software Engineer, you will develop features for the Mach1ML platform and support customers in model deployment using Mach1ML on GCP and on-prem. You will follow Rally to manage your work, incorporate an understanding of product functionality and customer perspective for model deployment, and work with cutting-edge technologies such as GCP, Kubernetes, Docker, Seldon, Tekton, Airflow, Rally, etc.

Position Responsibilities:
- Work closely with the Tech Anchor, Product Manager and Product Owner to deliver machine learning use cases using the Ford Agile Framework.
- Work with Data Scientists and ML engineers to tackle challenging AI problems.
- Work specifically on the Deploy team to drive model deployment and AI/ML adoption with other internal and external systems.
- Help innovate by researching state-of-the-art deployment tools and share knowledge with the team.
- Lead by example in the use of Paired Programming for cross-training/upskilling, problem solving, and speed to delivery.
- Leverage the latest GCP, CICD, and ML technologies.
- Critical thinking: influence the strategic direction of the company by finding opportunities in large, rich data sets and crafting and implementing data-driven strategies that fuel growth, including cost savings, revenue, and profit.
- Modelling: assess and evaluate impacts of missing/unusable data; design and select features; develop and implement statistical/predictive models using advanced algorithms on diverse sources of data; test and validate models for forecasting, natural language processing, pattern recognition, machine vision, supervised and unsupervised classification, decision trees, neural networks, etc.
- Analytics: leverage rigorous analytical and statistical techniques to identify trends and relationships between different components of data, draw appropriate conclusions, and translate analytical findings and recommendations into business strategies or engineering decisions - with statistical confidence.
- Data engineering: craft ETL processes to source and link data in preparation for model/algorithm development, including domain expertise of data sets in the environment, third-party data evaluations, and data quality.
- Visualization: build visualizations to connect disparate data, find patterns, and tell engaging stories, spanning both scientific and geographic visualization, using applications such as Seaborn, Qlik Sense, Power BI, Tableau, or Looker Studio.

Qualifications
Minimum requirements we seek:
- Bachelor's or Master's degree in computer science engineering or a related field, or a combination of education and equivalent experience.
- 3+ years of experience in full stack software development.
- 3+ years of experience in cloud technologies and services, preferably GCP.
- 3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc.
- 3+ years of experience with Python, SQL, and BigQuery.
- Experience with SonarQube, CICD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience in training, building, and deploying ML and DL models.
- Experience with Hugging Face, Chainlit, Streamlit, and React.
- Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
- Ability to adapt quickly to open-source products and tools and integrate them with ML platforms.
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.

Our preferred requirements:
- Master's degree in computer science engineering or a related field, or a combination of education and equivalent experience.
- Demonstrated successful application of analytical methods and machine learning techniques with measurable impact on product/design/business/strategy.
- Proficiency in programming languages such as Python with a strong emphasis on machine learning libraries, generative AI frameworks, and monitoring tools.
- Utilize tools and technologies such as TensorFlow, PyTorch, scikit-learn, and other machine learning libraries to build and deploy machine learning solutions on cloud platforms.
- Design and implement cloud infrastructure using technologies such as Kubernetes, Terraform, and Tekton to support scalable and reliable deployment of machine learning models, generative AI models, and applications.
- Integrate machine learning and generative AI models into production systems on cloud platforms such as Google Cloud Platform (GCP) and ensure scalability, performance, and proactive monitoring.
- Implement monitoring solutions to track the performance, health, and security of systems and applications, utilizing tools such as Prometheus and Grafana.
- Conduct code reviews and provide constructive feedback to team members on machine learning-related projects.
- Knowledge and experience in agentic-workflow-based application development and DevOps.
- Stay up to date with the latest trends and advancements in machine learning and data science.
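For illustration only: a minimal scikit-learn pipeline combining scaling, principal component analysis, and k-means clustering, two of the statistical methods named above. The dataset is a toy stand-in.

```python
# Sketch: scaling + PCA + k-means as one pipeline (toy data).
from sklearn.datasets import load_iris
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X, _ = load_iris(return_X_y=True)

pipeline = make_pipeline(
    StandardScaler(),          # put features on one scale
    PCA(n_components=2),       # principal component analysis, as listed above
    KMeans(n_clusters=3, n_init=10, random_state=0),
)
labels = pipeline.fit_predict(X)
print(labels[:10])
```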

Posted 10 hours ago

Apply

3.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Python ML Engineer
Experience: 3 to 4 years
Location: Chennai
Type: Full-time

About the Role
We are looking for a talented and motivated Machine Learning Engineer with strong experience in Natural Language Processing (NLP) and Python. The ideal candidate will have hands-on experience developing and deploying machine learning models, particularly focused on text data, and be comfortable working in a fast-paced, collaborative environment.

Key Responsibilities
- Design, develop, and deploy NLP models for text classification, information extraction, summarization, and/or sentiment analysis.
- Preprocess and clean large text corpora using NLP tools and libraries.
- Implement and fine-tune machine learning models using libraries like scikit-learn, spaCy, Transformers, or TensorFlow/Keras.
- Work with OCR tools (e.g., Tesseract, EasyOCR) if working with scanned documents/images.
- Collaborate with data scientists, backend engineers, and product managers to integrate ML models into production systems.
- Analyse model performance and continuously improve accuracy, precision, and recall.
- Write clean, maintainable, and testable Python code.

Required Skills & Qualifications
- 2 to 3 years of hands-on experience in Machine Learning and NLP using Python.
- Strong grasp of text preprocessing, tokenization, POS tagging, NER, topic modelling, etc.
- Experience with spaCy, NLTK, Hugging Face Transformers, or similar NLP libraries.
- Experience building and tuning models for classification, clustering, or regression tasks.
- Knowledge of OCR technologies (e.g., Tesseract, OpenCV) is a plus.
- Familiarity with version control systems (e.g., Git) and agile development.
- Bachelor's degree in Computer Science, Data Science, AI, or a related field.

📩 Don’t miss out! Send your CV to Prerana.prakash@adept-view.com today!
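As an illustrative aside (not from the posting): a minimal spaCy sketch of the tokenization, POS tagging, and NER skills listed above. It assumes the small English model is installed.

```python
# Sketch: tokenization, POS tags, and NER with spaCy.
# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Evangelist Apps partnered with British Airways in 2023.")

for token in doc[:5]:
    print(token.text, token.pos_)   # token and its part-of-speech tag
for ent in doc.ents:
    print(ent.text, ent.label_)     # named entities and their labels
```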

Posted 10 hours ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Dear Job Seekers,
Greetings from Voice Bay! We are currently hiring for a Machine Learning Engineer. If you are interested, please submit your application. Please find the JD below for your consideration:

Work Location – Hyderabad
Exp – 4 – 10 Years
Work Mode – 5 Days Work From Office, Mandatory

Key Responsibilities
- Design, develop, and implement end-to-end machine learning models, from initial data exploration and feature engineering to model deployment and monitoring in production environments.
- Build and optimize data pipelines for both structured and unstructured datasets, focusing on advanced data blending, transformation, and cleansing techniques to ensure data quality and readiness for modeling.
- Create, manage, and query complex databases, leveraging various data storage solutions to efficiently extract, transform, and load data for machine learning workflows.
- Collaborate closely with data scientists, software engineers, and product managers to translate business requirements into effective, scalable, and maintainable ML solutions.
- Implement and maintain robust MLOps practices, including version control, model monitoring, logging, and performance evaluation to ensure model reliability and drive continuous improvement.
- Research and experiment with new machine learning techniques, tools, and technologies to enhance our predictive capabilities and operational efficiency.

Required Skills & Experience
- 5+ years of hands-on experience in building, training, and deploying machine learning models in a professional, production-oriented setting.
- Demonstrable experience with database creation and advanced querying (e.g., SQL, NoSQL), with a strong understanding of data warehousing concepts.
- Proven expertise in data blending, transformation, and feature engineering, adept at integrating and harmonizing both structured (e.g., relational databases, CSVs) and unstructured (e.g., text, logs, images) data.
- Strong practical experience with cloud platforms for machine learning development and deployment; significant experience with Google Cloud Platform (GCP) services (e.g., Vertex AI, BigQuery, Dataflow) is highly desirable.
- Proficiency in programming languages commonly used in data science (Python preferred; R).
- Solid understanding of various machine learning algorithms (e.g., regression, classification, clustering, dimensionality reduction) and experience with advanced techniques like Deep Learning, Natural Language Processing (NLP), or Computer Vision.
- Experience with machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Familiarity with MLOps tools and practices, including model versioning, monitoring, A/B testing, and continuous integration/continuous deployment (CI/CD) pipelines.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes for deploying ML models as REST APIs.
- Proficiency with version control systems (e.g., Git, GitHub/GitLab) for collaborative development.

Educational Background
- Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Engineering, Data Science, or a closely related quantitative field.
- Alternatively, a significant certification in Data Science, Machine Learning, or Cloud AI combined with relevant practical experience will be considered.
- A compelling combination of relevant education and professional experience will also be valued.

Interested candidates can share their resume at the email IDs below:
tarunrai@voicebaysolutions.in
hr@voicebaysolutions.in
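For illustration only: a hedged sketch of serving an ML model as a REST API with FastAPI, as the Docker/Kubernetes deployment bullet above describes. The model file name and feature shape are hypothetical.

```python
# Sketch: serving a trained model as a REST endpoint with FastAPI.
# "model.joblib" is a hypothetical fitted scikit-learn estimator.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": str(prediction)}

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```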

Posted 10 hours ago

Apply

8.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Title: Siebel Administrator
Work Location: Any

Oracle Global Services Center (GSC) is a unit within Oracle that establishes long-term relationships with many of Oracle's customers through annuity-based service contracts and project-based one-time services. The Oracle GSC team sells from a broad IT-services portfolio on both a fixed-price and T&M basis. Oracle GSC services are typically requested by large Oracle customers that require the utmost attention to real mission-critical applications and processes, and Oracle GSC covers many large-scale Oracle customers. Oracle Global Services Center provides unmatched, tailored support that ensures an organization's Oracle technology investments deliver the cutting-edge innovation and performance your business requires to compete, all while coexisting within your IT environment.

Detailed Job Description:
An experienced consulting professional who has an understanding of solutions, industry best practices, multiple business processes, or technology designs within a product/technology family. Operates independently to provide quality work products to an engagement. Performs varied and complex duties and tasks that need independent judgment in order to implement Oracle products and technology to meet customer needs. Applies Oracle methodology, company procedures, and leading practices. Demonstrates expertise to deliver functional and technical solutions on moderately complex customer engagements. May act as the team lead on projects. Effectively consults with management of customer organizations. Participates in business development activities. Develops and configures detailed solutions for moderately complex projects. 8-12 years of experience relevant to this position, including consulting experience, preferred. Undergraduate degree or equivalent experience. Product or technical expertise relevant to practice focus. Ability to communicate effectively, build rapport with team members and clients, and travel as needed.

Required Skills:
- Experience with Siebel installation on Windows and Linux
- In-depth knowledge of and experience with Siebel migrations and upgrades to the latest versions (e.g., IP17 and later)
- Experience with Siebel Gateway clustering, multinode AI load balancing, etc.
- Experience with Siebel performance tuning of the server, AOM, AI, Gateway, Tomcat, etc.
- Experience troubleshooting EAI component crashes and analysing crashes, FDR files, and component log files
- Knowledge of system administration activities such as configuring application components and parameters, and troubleshooting component crashes
- SSO and LDAP setup to AD, and troubleshooting
- Good overall troubleshooting skills
- Automation of regular administrative tasks
- Preferably, experience with WLS/BIP/OAS/OAP installation, upgrade, and integration with Siebel
- Experience with DR setup and testing
- Experience managing Siebel on OCI (or any cloud) is preferable
- Performance tuning of Siebel CRM
- Ready to work in 24x7 shifts
- Ready to travel
- Cloud migration exposure

Desired Skills:
- OCI Certification (Foundation / Architect / Professional) is an added advantage
- Willingness to travel, both domestic and out of the country

Posted 10 hours ago

Apply

5.0 years

0 Lacs

India

Remote

Source: LinkedIn

Hi, please find below the JD for the Facilitator – Data Science role with Regenesys.

Work Days: Mon – Sat
Work Location: Work From Home
www.digitalregenesys.com

Job Description for Data Science Facilitator

Position Overview:
We are searching for a highly skilled and proficient Data Science Facilitator for online upskilling courses in the field of data science. The facilitator will play a pivotal role in delivering comprehensive modules on cutting-edge data science topics, including machine learning, artificial intelligence, deep learning, web development, and more. The ideal candidate should possess a strong grasp of data science principles, technologies, and practical applications, coupled with exceptional communication and instructional abilities. Staying current with the latest advancements in data science and communicating these concepts effectively to course participants is a fundamental aspect of this role.

Responsibilities:
Curriculum Development and Delivery: Develop and present engaging training modules that cover advanced data science topics, catering to learners' varying levels of proficiency. Deliver insightful content on subjects such as machine learning algorithms, artificial intelligence frameworks, deep learning architectures, and web development techniques.
Technical Skills: Programming language: Python. AI and data analysis packages/frameworks: NumPy, Pandas, Seaborn, scikit-learn, TensorFlow, and Keras. Knowledge of Natural Language Processing or Computer Vision is an advantage.
Customization of Learning Material: Tailor existing course materials or create new content to align with the specific learning requirements and skill levels of diverse participants, from novice learners to experienced data science professionals.
Stay Abreast of Industry Trends: Continuously research and monitor the dynamic landscape of data science, including emerging technologies, methodologies, and industry best practices, to ensure that training content remains relevant and up to date.
Individualized Mentorship: Provide personalized guidance and support to participants, addressing their inquiries and helping them overcome challenges encountered while mastering data science methodologies.

Qualification:
1. M.Sc. (Computer Science), or
2. MCA (Master of Computer Applications), or
3. B.Tech or M.Tech in Computer Engineering or IT

Technical Skills:
1. Programming language: Python
2. Database: any one of MySQL, Oracle, SQL Server, or PostgreSQL
3. Data science: NumPy, Pandas, Matplotlib, Seaborn, EDA
4. Machine learning: scikit-learn; ML models for regression, classification, and clustering problems
5. Additional knowledge of Tableau or Power BI is an advantage

Work Experience:
Minimum 5 years of teaching experience in the relevant domain; this can be lowered to 3 years for an exceptional candidate.
Proven expertise and hands-on experience in data science, particularly in advanced areas like machine learning, artificial intelligence, deep learning, and web development.
Exceptional presentation and facilitation skills, with the ability to engage and inspire learners in an online environment.
Outstanding verbal and written communication skills, enabling clear and concise explanation of intricate concepts to diverse audiences.
In-depth comprehension of data science principles, tools, techniques, and industry standards.
Ability to adapt instructional content to participants' varying levels of familiarity with data science.
Experience in online teaching, curriculum design, and the application of interactive learning tools is advantageous.
Relevant certifications (e.g., Certified Data Scientist, Google TensorFlow Developer, AWS Machine Learning Specialty) would be a plus.
Effective problem-solving skills and the ability to address participants' inquiries and obstacles constructively.
High level of professionalism, commitment to confidentiality, and adherence to ethical standards.
Autonomous and collaborative work ethic, with proficient time and task management.
Keenness to continually enhance personal knowledge and skills within the evolving realm of data science.

Please share your CV at riyap@dananda.net
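For a flavor of the hands-on material such a course typically walks learners through, here is a minimal scikit-learn clustering sketch; the "customer" features and values are synthetic, invented purely for illustration:

```python
# A minimal KMeans clustering sketch of the kind a facilitator might
# demonstrate; all feature values below are synthetic, illustrative data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Two synthetic "customer" groups: (annual_spend, visits_per_month)
X = np.vstack([
    rng.normal(loc=[200.0, 2.0], scale=[30.0, 0.5], size=(50, 2)),
    rng.normal(loc=[800.0, 8.0], scale=[60.0, 1.0], size=(50, 2)),
])

# Scale features so spend (hundreds) doesn't dominate visit counts (single digits)
X_scaled = StandardScaler().fit_transform(X)

model = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = model.fit_predict(X_scaled)
print("Cluster sizes:", np.bincount(labels))
```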

Posted 10 hours ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Linkedin logo

Senior Test Analyst
Gurgaon/Bangalore, India

The Senior Test Analyst will be responsible for ensuring the quality and reliability of our software applications within Global Technology by implementing comprehensive testing processes and methodologies. This position involves designing and executing test plans, test cases, and test scripts to validate functional and non-functional requirements. The Senior Test Analyst will collaborate with cross-functional teams, including developers, business analysts, and project managers, to understand requirements and ensure that testing aligns with business objectives. The ideal candidate will possess robust analytical skills, a deep understanding of testing best practices, and a commitment to continuous improvement in testing processes.

What You’ll Be DOING

What will your essential responsibilities include?
Conduct detailed test analysis and preparation through the creation of estimates, test plans, test cases, and scripts.
Understand the business requirements and translate business needs into test scenarios/cases.
Create comprehensive Test Plan / Test Approach documentation and manage the sign-off process.
Produce, maintain, and communicate the following test artefacts: concise test-level progress metrics, progress reports, and completion reports.
Provide a comprehensive approach to defect management, i.e. defect clustering, triaging, etc.
Ensure delivery of the various areas of systems test, such as functional vs. non-functional testing.
Determine how to implement the various stages of test (e.g. systems test, integration testing, regression testing) during the project lifecycle.
Provide day-to-day support and guidance in test principles, techniques, and tools to other test team members.
Actively take part in developing test automation scripts and ensure efficient test coverage by reviewing the scripts.
Collaborate with our testing vendor partners as well as our AXA XL delivery team members to ensure comprehensive testing coverage and the timely delivery of software changes/enhancements.
Identify, communicate, and track testing risks and issues, then help develop mitigation plans to bring them to closure.
Work with the TCoE team to understand best practices and effectively implement them on your assigned applications to achieve our expected quality results.
Provide guidance and training to your assigned testing teams on our TCoE’s best practices, tools, and methodologies.
Define, collect, and analyze key performance indicators (KPIs) and metrics to evaluate testing effectiveness and drive improvements.

What You Will BRING

We’re looking for someone who has these abilities and skills:
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Effective understanding of software development methodologies, including Agile.
Proficiency in testing frameworks, tools, and best practices, especially TMMi and ISTQB.
Robust knowledge of the various types of software testing: static, smoke, system, system integration, regression, UAT, etc.
Experience in database and API testing, and knowledge of SQL, NoSQL, REST, and SOAP services.
Experience testing applications hosted on cloud platforms such as Azure, AWS, etc.
Working knowledge of test automation tools such as Selenium, Playwright, UFT, REST Assured, etc.
Familiarity with JIRA and JIRA Xray.
Experience working with teams across distributed geographical boundaries, particularly with the majority of business representatives located in Europe and India.
Excellent interpersonal and communication skills to effectively collaborate with both technical and non-technical stakeholders.
Experience with property & casualty insurance lines of business and products is preferred.

You will report to the Test Lead.

Who WE Are

AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don’t just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business − property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We OFFER

Inclusion
AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That’s why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It’s about helping one another — and our business — to move forward and succeed.
Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe
Robust support for flexible working arrangements
Enhanced family-friendly leave benefits
Named to the Diversity Best Practices Index
Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards
AXA XL’s Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We’re committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability
At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called “Roots of resilience”, focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We’re committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
Addressing climate change: The effects of a changing climate are far reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We’re training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL’s “Hearts in Action” programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.

For more information, please see axaxl.com/sustainability
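As an illustration of the database/API testing skills listed above, here is a minimal pytest + requests sketch; the base URL, endpoint, and response fields are hypothetical placeholders, not AXA XL's actual services:

```python
# test_policy_api.py -- a minimal pytest + requests sketch of REST API testing.
# The service URL and the response contract below are hypothetical examples.
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test

def test_get_policy_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/policies/12345", timeout=10)
    assert resp.status_code == 200                      # functional check
    body = resp.json()
    assert {"policyId", "status"}.issubset(body)        # contract check
    assert resp.elapsed.total_seconds() < 2.0           # basic non-functional check
```

Run with `pytest test_policy_api.py`; in practice the same pattern extends to SOAP endpoints and to SQL/NoSQL validation of the data behind the API.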

Posted 10 hours ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description: Senior Data Scientist

Role Overview:
We are seeking a highly skilled and experienced Senior Data Scientist with a minimum of 4 years of experience in data science and machine learning, preferably with experience in NLP, generative AI, LLMs, MLOps, optimization techniques, and AI solution architecture. In this role, you will play a key part in the development and implementation of AI solutions, leveraging your technical expertise. The ideal candidate has a deep understanding of AI technologies and experience designing and implementing cutting-edge AI models and systems. Expertise in data engineering, DevOps, and MLOps practices will also be valuable in this role.

Responsibilities:
Contribute to the design and implementation of state-of-the-art AI solutions.
Assist in the development and implementation of AI models and systems, leveraging techniques such as Large Language Models (LLMs) and generative AI.
Collaborate with stakeholders to identify business opportunities and define AI project goals.
Stay updated with the latest advancements in generative AI techniques, such as LLMs, and evaluate their potential applications in solving enterprise challenges.
Utilize generative AI techniques, such as LLMs and agentic frameworks, to develop innovative solutions for enterprise industry use cases.
Integrate with relevant APIs and libraries, such as Azure OpenAI GPT models and Hugging Face Transformers, to leverage pre-trained models and enhance generative AI capabilities.
Implement and optimize end-to-end pipelines for generative AI projects, ensuring seamless data processing and model deployment.
Utilize vector databases, such as Redis, and NoSQL databases to efficiently handle large-scale generative AI datasets and outputs.
Implement similarity search algorithms and techniques to enable efficient and accurate retrieval of relevant information from generative AI outputs.
Collaborate with domain experts, stakeholders, and clients to understand specific business requirements and tailor generative AI solutions accordingly.
Conduct research and evaluation of advanced AI techniques, including transfer learning, domain adaptation, and model compression, to enhance performance and efficiency.
Establish evaluation metrics and methodologies to assess the quality, coherence, and relevance of generative AI outputs for enterprise industry use cases.
Ensure compliance with data privacy, security, and ethical considerations in AI applications.
Leverage data engineering skills to curate, clean, and preprocess large-scale datasets for generative AI applications.

Requirements:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. A Ph.D. is a plus.
Minimum 4 years of experience in data science and machine learning.
In-depth knowledge of machine learning, deep learning, and generative AI techniques.
Proficiency in programming languages such as Python or R, and frameworks like TensorFlow or PyTorch.
Strong understanding of NLP techniques and frameworks such as BERT, GPT, or Transformer models.
Familiarity with computer vision techniques for image recognition, object detection, or image generation.
Experience with cloud platforms such as Azure, AWS, or GCP, and deploying AI solutions in a cloud environment.
Expertise in data engineering, including data curation, cleaning, and preprocessing.
Knowledge of trusted AI practices, ensuring fairness, transparency, and accountability in AI models and systems.
Strong collaboration with software engineering and operations teams to ensure seamless integration and deployment of AI models.
Excellent problem-solving and analytical skills, with the ability to translate business requirements into technical solutions.
Strong communication and interpersonal skills, with the ability to collaborate effectively with stakeholders at various levels.
Understanding of data privacy, security, and ethical considerations in AI applications.
Track record of driving innovation and staying updated with the latest AI research and advancements.

Good to Have Skills:
Apply trusted AI practices to ensure fairness, transparency, and accountability in AI models.
Utilize optimization tools and techniques, including mixed-integer programming (MIP).
Deep knowledge of classical AI/ML (regression, classification, time series, clustering).
Drive DevOps and MLOps practices, covering CI/CD and monitoring of AI models.
Implement CI/CD pipelines for streamlined model deployment and scaling processes.
Utilize tools such as Docker, Kubernetes, and Git to build and manage AI pipelines.
Apply infrastructure-as-code (IaC) principles, employing tools like Terraform or CloudFormation.
Implement monitoring and logging tools to ensure AI model performance and reliability.
Collaborate seamlessly with software engineering and operations teams for efficient AI model integration and deployment.
Familiarity with DevOps and MLOps practices, including continuous integration, deployment, and monitoring of AI models.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
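To make the similarity-search responsibility above concrete, here is a minimal cosine-similarity retrieval sketch in NumPy; the random vectors stand in for embeddings that would normally come from a transformer or Azure OpenAI embedding model:

```python
# Minimal sketch of embedding-based similarity search, assuming document and
# query embeddings are already computed elsewhere (e.g., by a transformer).
import numpy as np

def top_k_similar(query_vec: np.ndarray, doc_matrix: np.ndarray, k: int = 3):
    """Return indices and scores of the k documents most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q                                  # cosine similarity per document
    order = np.argsort(scores)[::-1][:k]            # best-first
    return order, scores[order]

rng = np.random.default_rng(0)
docs = rng.normal(size=(100, 384))                  # stand-in for 100 document embeddings
query = rng.normal(size=384)                        # stand-in for one query embedding
idx, scores = top_k_similar(query, docs)
print("Top matches:", idx, scores.round(3))
```

In production the same lookup would typically be delegated to a vector store such as Redis rather than a dense NumPy scan.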

Posted 10 hours ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Siebel Administrator
Work Location: Any

Oracle Global Services Center (GSC) is a unit within Oracle that establishes long-term relationships with many of Oracle's customers through annuity-based service contracts and project-based one-time services. The Oracle GSC team sells from a broad IT-services portfolio on both a fixed-price and T&M basis. Oracle GSC services are typically requested by large Oracle customers that require the utmost attention to real mission-critical applications and processes, and GSC covers many large-scale Oracle customers. Oracle GSC provides unmatched, tailored support that ensures an organization’s Oracle technology investments deliver the cutting-edge innovation and performance its business requires to compete, all while coexisting within its IT environment.

Detailed Job Description:
An experienced consulting professional who has an understanding of solutions, industry best practices, multiple business processes, and technology designs within a product/technology family. Operates independently to provide quality work products to an engagement. Performs varied and complex duties and tasks that need independent judgment in order to implement Oracle products and technology to meet customer needs. Applies Oracle methodology, company procedures, and leading practices. Demonstrates expertise to deliver functional and technical solutions on moderately complex customer engagements. May act as the team lead on projects. Effectively consults with management of customer organizations. Participates in business development activities. Develops and configures detailed solutions for moderately complex projects.

8-12 years of experience relevant to this position, including consulting experience, preferred. Undergraduate degree or equivalent experience. Product or technical expertise relevant to practice focus. Ability to communicate effectively. Ability to build rapport with team members and clients. Ability to travel as needed.

Required Skills:
Experience with Siebel installation on Windows and Linux
In-depth knowledge of and experience with Siebel migrations and upgrades to the latest versions, e.g. IP17 and later
Experience with Siebel Gateway clustering, multi-node AI load balancing, etc.
Experience with Siebel performance tuning of the server, AOM, AI, Gateway, Tomcat, etc.
Experience troubleshooting EAI component crashes and analyzing crash, FDR, and component log files
Knowledge of system administration activities such as configuring application components and parameters, and troubleshooting component crashes
SSO and LDAP setup to AD, and related troubleshooting
Good overall troubleshooting skills
Automation of regular administrative tasks
Preferably, experience with WLS/BIP/OAS/OAP installation, upgrade, and integration with Siebel
Experience with DR setup and testing
Experience managing Siebel on OCI (or any cloud) is preferable
Performance tuning of Siebel CRM
Ready to work in 24x7 shifts
Ready to travel
Cloud migration exposure

Desired Skills:
OCI certification (Foundation / Architect / Professional) is an added advantage
Willingness to travel, both domestic and international
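As a loose illustration of the "automation of regular administrative tasks" item, here is a hedged Python sketch that scans component log files for crash signatures and summarizes the hits; the log directory and marker strings are hypothetical placeholders and would need to match a real Siebel environment:

```python
# A hedged automation sketch: scan component log files for crash markers and
# summarize counts per file. The directory and marker strings below are
# hypothetical placeholders, not documented Siebel paths or error codes.
from pathlib import Path
from collections import Counter

LOG_DIR = Path("/siebel/ses/siebsrvr/log")           # hypothetical log path
CRASH_MARKERS = ("crash", "core dump", "fatal")      # hypothetical signatures

hits = Counter()
for log_file in LOG_DIR.glob("*.log"):
    try:
        text = log_file.read_text(errors="ignore").lower()
    except OSError:
        continue  # skip files that rotate away or are locked mid-scan
    for marker in CRASH_MARKERS:
        if marker in text:
            hits[log_file.name] += text.count(marker)

for name, count in hits.most_common():
    print(f"{name}: {count} suspicious entries")
```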

Posted 10 hours ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Data Modeler / Data Analyst
Experience: 6-8 Years
Location: Pune

Job Summary
We are looking for a seasoned Data Modeler / Data Analyst to design and implement scalable, reusable logical and physical data models on Google Cloud Platform—primarily BigQuery. You will partner closely with data engineers, analytics teams, and business stakeholders to translate complex business requirements into performant data models that power reporting, self-service analytics, and advanced data science workloads.

Key Responsibilities
Gather and analyze business requirements to translate them into conceptual, logical, and physical data models on GCP (BigQuery, Cloud SQL, Cloud Spanner, etc.).
Design star/snowflake schemas, data vaults, and other modeling patterns that balance performance, flexibility, and cost.
Implement partitioning, clustering, and materialized views in BigQuery to optimize query performance and cost efficiency.
Establish and maintain data modeling standards, naming conventions, and metadata documentation to ensure consistency across analytic and reporting layers.
Collaborate with data engineers to define ETL/ELT pipelines and ensure data models align with ingestion and transformation strategies (Dataflow, Cloud Composer, Dataproc, dbt).
Validate data quality and lineage; work with BI developers and analysts to troubleshoot performance issues or data anomalies.
Conduct impact assessments for schema changes and guide version-control processes for data models.
Mentor junior analysts/engineers on data modeling best practices and participate in code/design reviews.
Contribute to capacity planning and cost-optimization recommendations for BigQuery datasets and reservations.

Must-Have Skills
6-8 years of hands-on experience in data modeling, data warehousing, or database design, including at least 2 years on GCP BigQuery.
Proficiency in dimensional modeling, 3NF, and modern patterns such as data vault.
Expert SQL skills with demonstrable ability to optimize complex analytical queries on BigQuery (partitioning, clustering, sharding strategies).
Strong understanding of ETL/ELT concepts and experience working with tools such as Dataflow, Cloud Composer, or dbt.
Familiarity with BI/reporting tools (Looker, Tableau, Power BI, or similar) and how model design impacts dashboard performance.
Experience with data governance practices—data cataloging, lineage, and metadata management (e.g., Data Catalog).
Excellent communication skills to translate technical concepts into business-friendly language and collaborate across functions.

Good to Have
Experience working on Azure Cloud (Fabric, Synapse, Delta Lake)

Education
Bachelor’s or master’s degree in Computer Science, Information Systems, Engineering, Statistics, or a related field. Equivalent experience will be considered.
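To ground the BigQuery partitioning, clustering, and materialized-view responsibilities above, here is a short sketch using the google-cloud-bigquery client; the dataset, table, and column names are illustrative assumptions, not a customer schema:

```python
# Sketch of BigQuery partitioning + clustering DDL, issued through the
# google-cloud-bigquery client. Dataset/table/column names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

ddl = """
CREATE TABLE IF NOT EXISTS analytics.fact_orders (
  order_id STRING,
  customer_id STRING,
  region STRING,
  order_ts TIMESTAMP,
  amount NUMERIC
)
PARTITION BY DATE(order_ts)        -- prunes scanned bytes for date-filtered queries
CLUSTER BY customer_id, region     -- co-locates rows for common predicates
"""
client.query(ddl).result()

# A materialized view can then precompute a hot aggregate over the base table:
mv = """
CREATE MATERIALIZED VIEW IF NOT EXISTS analytics.daily_region_sales AS
SELECT DATE(order_ts) AS order_date, region, SUM(amount) AS total_amount
FROM analytics.fact_orders
GROUP BY order_date, region
"""
client.query(mv).result()
```

Partitioning bounds the bytes a date-filtered query scans (and therefore its cost), while clustering sorts data within each partition so filters on `customer_id`/`region` read fewer blocks.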

Posted 11 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description
(Candidates with less than 5 years of experience, please do not apply for this role. Wait for my next post for engineer roles.)

In this role, you will be working with one of the top engineering companies in the world and will be handling complex algorithms involving petabytes of data. You will be responsible for building machine-learning-based systems and conducting data analysis that improves the quality of our large geospatial data. You’ll be developing NLP models to extract information, using outlier detection to identify anomalies, and applying data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation and deployment of the models at scale, which requires a good combination of data science and software development.

Responsibilities
Development of machine learning models
Building and maintaining software development solutions
Providing insights by applying data science methods
Taking ownership of delivering features and improvements on time

Must-Have Qualifications
6+ years of experience as a senior data scientist / ML engineer, preferably with knowledge of NLP
Strong programming skills and extensive experience with Python
Professional experience working with LLMs, transformers, and open-source models from Hugging Face
Professional experience with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection, and neural networks
Knowledge of classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN, etc.)
Experience using deep learning libraries and platforms, such as PyTorch
Experience with frameworks such as scikit-learn, NumPy, Pandas, and Polars
Excellent analytical and problem-solving skills
Excellent oral and written communication skills

Extra Merit Qualifications
Knowledge of at least one of the following: NLP, information retrieval, data mining
Ability to do statistical modeling and build predictive models
Programming skills and experience with Scala and/or Java
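As a small illustration of the outlier-detection work described above, here is a minimal scikit-learn IsolationForest sketch on synthetic two-dimensional, geospatial-like points; the coordinates are invented for the example:

```python
# Minimal anomaly-detection sketch with scikit-learn's IsolationForest,
# using synthetic lat/lon-like points purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
inliers = rng.normal(loc=[52.5, 13.4], scale=0.05, size=(500, 2))   # dense region
outliers = rng.uniform(low=[50.0, 10.0], high=[55.0, 16.0], size=(10, 2))
X = np.vstack([inliers, outliers])

clf = IsolationForest(contamination=0.02, random_state=0)
labels = clf.fit_predict(X)            # -1 flags anomalies, 1 flags inliers
print("Flagged anomalies:", int((labels == -1).sum()))
```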

Posted 12 hours ago

Apply

7.5 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Linkedin logo

Project Role : Infrastructure Engineer
Project Role Description : Assist in defining requirements, designing and building data center technology components, and testing efforts.
Must have skills : DevOps
Good to have skills : Network Analytics
Minimum 7.5 Year(s) Of Experience Is Required
Educational Qualification : Proactive Hiring

Summary: As an Infrastructure Engineer, you will assist in defining requirements, designing and building data center technology components, and testing efforts. You will play a crucial role in ensuring the smooth functioning of the infrastructure. Your typical day will involve collaborating with cross-functional teams, analyzing requirements, designing and implementing solutions, and conducting testing to ensure optimal performance and reliability.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Assist in defining requirements for data center technology components.
- Design and build data center technology components.
- Conduct testing efforts to ensure optimal performance and reliability.
- Collaborate with cross-functional teams to analyze requirements and design solutions.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in DevOps.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in DevOps.
- This position is based at our Indore office.
- A Proactive Hiring qualification is required.

Posted 12 hours ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Project Role : Application Tech Support Practitioner
Project Role Description : Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must have skills : Wireless Technologies Operations
Good to have skills : NA
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification : 15 years full time education

Summary: As an Application Tech Support Practitioner, you will act as the ongoing interface between the client and the system or application. You will be dedicated to quality, using exceptional communication skills to keep our world-class systems running. With your deep product knowledge, you will accurately define client issues and design resolutions. Your typical day will involve providing ongoing support to clients, troubleshooting technical issues, and ensuring smooth system operations.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions for work-related problems.
- Provide ongoing support to clients, addressing their technical issues and concerns.
- Troubleshoot system or application problems and provide timely resolutions.
- Collaborate with cross-functional teams to ensure smooth system operations.
- Stay updated with the latest product knowledge and industry trends.
- Identify areas for process improvement and suggest innovative solutions.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Wireless Technologies Operations.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Wireless Technologies Operations.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 12 hours ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Level AI was founded in 2019 and is a Series C startup headquartered in Mountain View, California. Level AI revolutionizes customer engagement by transforming contact centers into strategic assets. Our AI-native platform leverages advanced technologies such as Large Language Models to extract deep insights from customer interactions. By providing actionable intelligence, Level AI empowers organizations to enhance customer experience and drive growth. Consistently updated with the latest AI innovations, Level AI stands as the most adaptive and forward-thinking solution in the industry. Empowering contact center stakeholders with real-time insights, our tech facilitates data-driven decision-making for contact centers, enhancing service levels and agent performance.

As a vital team member, your work will involve cutting-edge technologies and will play a high-impact role in shaping the future of AI-driven enterprise applications. You will work directly with people who've worked at Amazon, Facebook, Google, and other leading technology companies. With Level AI, you will get to have fun, learn new things, and grow along with us. Ready to redefine possibilities? Join us!

We'd love to explore more about you if you have

Qualifications:
B.E/B.Tech/M.E/M.Tech/PhD from a tier 1 engineering institute, with relevant work experience at a top technology company in computer science or mathematics-related fields, and 3-5 years of experience in machine learning and NLP
Knowledge and practical experience in solving NLP problems in areas such as text classification, entity tagging, information retrieval, question answering, natural language generation, clustering, etc.
3+ years of experience working with LLMs in large-scale environments
Expert knowledge of machine learning concepts and methods, especially those related to NLP, generative AI, and working with LLMs
Knowledge of and hands-on experience with Transformer-based language models like BERT, DeBERTa, Flan-T5, Mistral, Llama, etc.
Deep familiarity with the internals of at least a few machine learning algorithms and concepts
Experience with deep learning frameworks like PyTorch and common machine learning libraries like scikit-learn, NumPy, Pandas, NLTK, etc.
Experience with ML model deployments using REST APIs, Docker, Kubernetes, etc.
Knowledge of cloud platforms (AWS/Azure/GCP) and their machine learning services is desirable
Knowledge of basic data structures and algorithms
Knowledge of real-time streaming tools/architectures like Kafka and Pub/Sub is a plus

Your role at Level AI includes but is not limited to
Big picture: understand customers’ needs, innovate, and use cutting-edge deep learning techniques to build data-driven solutions
Work on NLP problems across areas such as text classification, entity extraction, summarization, generative AI, and others
Collaborate with cross-functional teams to integrate and upgrade AI solutions in the company’s products and services
Optimize existing deep learning models for performance, scalability, and efficiency
Build, deploy, and own scalable production NLP pipelines
Build post-deployment monitoring and continual-learning capabilities
Propose suitable evaluation metrics and establish benchmarks
Keep abreast of SOTA techniques in your area and exchange knowledge with colleagues
Desire to learn, implement, and work with the latest emerging model architectures, training and inference techniques, data curation pipelines, etc.

To learn more visit: https://thelevel.ai/
Funding: https://www.crunchbase.com/organization/level-ai
LinkedIn: https://www.linkedin.com/company/level-ai/
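For a flavor of the text-classification work described above, here is a minimal Hugging Face Transformers sketch; the public sentiment checkpoint and the example utterances are illustrative assumptions, not Level AI's production models:

```python
# Minimal Hugging Face Transformers text-classification sketch. The public
# SST-2 sentiment checkpoint below is used purely as an example model.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

calls = [
    "The agent resolved my billing issue quickly, thank you!",
    "I have been on hold for an hour and nobody can help me.",
]
for call, pred in zip(calls, classifier(calls)):
    # Each prediction is a dict with a 'label' and a confidence 'score'
    print(f"{pred['label']:>8} ({pred['score']:.2f})  {call}")
```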

Posted 14 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About the Role:
We are seeking a highly skilled and hands-on GenAI expert to join our team and help shape our AI strategy from the ground up. The ideal candidate will not only bring deep technical knowledge but also a product-first mindset and a passion for delivering value through responsible and efficient AI deployment. This is a high-impact role with the potential to build and lead a team.

Key Responsibilities:
Develop and refine GenAI applications leveraging foundation models (LLMs, VLMs) for real-world use cases.
Fine-tune foundation models using proprietary and domain-specific data to enhance model relevance and performance.
Own the full AI lifecycle, including experimentation, evaluation, production readiness, and value realization.
Define and track key AI metrics; implement monitoring and feedback loops to measure model effectiveness post-deployment.
Apply traditional ML techniques (clustering, classification, vector search) as complementary strategies where appropriate.
Build and deploy AI agent frameworks that can autonomously interact with tools, data stores, and other models to solve tasks end-to-end.
Collaborate cross-functionally to integrate GenAI systems into existing platforms, ensuring scalability, efficiency, and business alignment.
Ask the right questions to iterate, refine, and evolve AI solutions.

Qualifications:
Proven experience building GenAI-powered applications using LLMs, VLMs, and custom pipelines.
Strong knowledge of model fine-tuning techniques and prompt engineering using proprietary data.
Practical understanding of AI productization, lifecycle management, metrics, and monitoring strategies.
Hands-on experience with AI agent frameworks and related orchestration tools.
Ability to articulate technical solutions, integration patterns, and tradeoffs effectively.
Experience with Python, ML libraries (e.g., Hugging Face, LangChain, PyTorch, TensorFlow), and deployment in cloud environments (AWS, Azure, GCP).

Preferred:
Experience in a startup or innovation lab environment.
Familiarity with vector databases (e.g., Pinecone, FAISS) and retrieval-augmented generation (RAG).
Exposure to ethical AI, model interpretability, and responsible deployment practices.
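To illustrate the retrieval step behind the RAG and vector-database items above, here is a minimal FAISS similarity-search sketch; random vectors stand in for real encoder embeddings, and the corpus size and dimensionality are arbitrary:

```python
# Minimal FAISS sketch of the retrieval step in a RAG pipeline; random
# vectors stand in for embeddings produced by an encoder model.
import numpy as np
import faiss

dim = 384
rng = np.random.default_rng(1)
doc_vecs = rng.normal(size=(1000, dim)).astype("float32")  # corpus embeddings

index = faiss.IndexFlatIP(dim)          # exact inner-product search
faiss.normalize_L2(doc_vecs)            # normalized vectors make IP == cosine
index.add(doc_vecs)

query = rng.normal(size=(1, dim)).astype("float32")
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)    # top-5 nearest documents
print("Doc ids:", ids[0], "scores:", scores[0].round(3))
```

The retrieved document ids would then be mapped back to text chunks and injected into the LLM prompt as grounding context.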

Posted 14 hours ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies