3.0 years
0 Lacs
Gujarat, India
On-site
Designation: Liferay Developer
Experience: 3+ years
Location: Bopal, Ahmedabad

Overview:
We are seeking a skilled Liferay Developer to join our dynamic team. The ideal candidate will have a strong background in Java development and extensive experience with Liferay portal development. As a Liferay Developer, you will be responsible for designing, developing, testing, and implementing Liferay-based solutions to meet our business needs.

Responsibilities:
1. Develop and maintain Liferay-based web applications using Java/J2EE, Spring, Hibernate, and related technologies.
2. Collaborate with cross-functional teams to design, develop, and implement new features and enhancements.
3. Troubleshoot and resolve issues related to Liferay-based applications promptly and efficiently.
4. Develop and maintain Liferay-based integrations with other systems, including web services and third-party applications.
5. Ensure that Liferay-based applications are scalable, secure, and maintainable.
6. Work collaboratively with other teams to develop and implement best practices for Liferay development and maintenance.
7. Provide supervision and guidance to development teams, including experience in handling a small team.
8. Implement integration and security requirements, ensuring adherence to coding standards and best practices.
9. Create reusable design patterns and components to streamline development processes.
10. Conduct code reviews and ensure coding standards and best practices are followed.
11. Provide technical documentation for developed solutions.
12. Ensure non-functional best practices are incorporated, including security, performance, scalability, DevOps, and server configurations.

Skills Required:
· Minimum 3 years of experience with Liferay 7.x.
· Proven track record of resolving technical problems quickly with high-quality code.
· Experience in Web Service/RESTful Web Service development.
· Strong knowledge of the Liferay portal, including portlets, themes, layouts, hooks, and EXT.
· Good understanding of Liferay Workflow implementation.
· Strong knowledge of Liferay web content, including templates, structures, blogs, and message boards.
· Proficiency in the Roles and Permissions framework within Liferay.
· Good understanding of Liferay Commerce.
· Very good knowledge of the OSGi framework.
· Experience with clustering, SSO, Objects, Blueprints, and the Gogo shell.
· Ability to write stored procedures (PL/SQL-style) in databases such as MySQL and PostgreSQL.
· Proficiency in version control systems such as Git or SVN.
· Experience in microservice-based solution development and implementation.
· Strong problem-solving skills and the ability to work independently or as part of a team.
· Excellent written and verbal communication skills.

Interested candidates may share their updated CV at akansha.k@tridhyatech.com
Posted 1 week ago
3.0 - 4.5 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles & Responsibilities
- Strong understanding of ML algorithms (regression, classification, clustering) with the ability to independently develop and scale models using Python with minimal supervision.
- Experience in commercial analytics with a knack for translating business problems into analytical solutions and strategic recommendations.
- Proficient in Power BI to build intuitive dashboards and deliver insights in a clear, actionable format.

Experience: 3-4.5 years

Skills
Primary Skill: Data Science
Sub Skill(s): Data Science
Additional Skill(s): Python, Data Science

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview:
TekWissen is a global workforce management provider operating throughout India and many other countries. The client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.

Job Title: Machine Learning Engineer
Location: Chennai, TN 600119
Duration: 24 Months
Work Type: Onsite

Position Description:
- Train, build and deploy ML and DL models; software development using Python, working with Tech Anchors, Product Managers and the team internally and across other teams.
- Ability to understand technical, functional, non-functional and security aspects of business requirements and deliver them end-to-end.
- Software development using a TDD approach.
- Experience using GCP products & services.
- Ability to adapt quickly to open-source products & tools to integrate with ML platforms.

Skills Required:
- 3+ years of experience in Python software development.
- 3+ years of experience in cloud technologies & services, preferably GCP.
- 3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc.
- 3+ years of experience with Python, SQL and BigQuery.
- Experience in SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Vertex AI, Airflow, TensorFlow, etc.
- Experience in training, building and deploying ML and DL models.
- Ability to understand technical, functional, non-functional and security aspects of business requirements and deliver them end-to-end.
- Ability to adapt quickly to open-source products & tools to integrate with ML platforms.
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
- Developing and deploying in on-prem & cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.

Skills Preferred: Good communication, presentation and collaboration skills.
Experience Required: 2 to 5 years
Experience Preferred: GCP products & services
Education Required: BE, BTech, MCA, M.Sc, ME

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
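Several of the statistical methods listed above (principal component analysis, k-means clustering) are quick to prototype with scikit-learn. The snippet below is a minimal, illustrative sketch on synthetic data; the dataset, feature count, and cluster count are hypothetical and not taken from the posting.

```python
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic tabular data standing in for real business features (hypothetical).
X, _ = make_blobs(n_samples=500, n_features=8, centers=4, random_state=42)

# Standardize, reduce dimensionality with PCA, then cluster with k-means.
X_scaled = StandardScaler().fit_transform(X)
X_reduced = PCA(n_components=3, random_state=42).fit_transform(X_scaled)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42)
labels = kmeans.fit_predict(X_reduced)

# Silhouette score gives a rough check on cluster separation.
print("silhouette score:", round(silhouette_score(X_reduced, labels), 3))
```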
Posted 1 week ago
15.0 years
0 Lacs
Greater Kolkata Area
On-site
Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution. May also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must have skills: Large Language Models
Good to have skills: NA
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves creating cloud or on-prem application pipelines with production-ready quality, incorporating deep learning, neural networks, chatbots, and image processing.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Lead the implementation of large language models in AI applications.
- Research and apply cutting-edge AI techniques to enhance system performance.
- Contribute to the development and deployment of AI solutions across various domains.

Professional & Technical Skills:
- Must-have skills: Proficiency in Large Language Models.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Large Language Models.
- This position is based at our Hyderabad office.
- A 15-year full-time education is required.
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview:
TekWissen is a global workforce management provider operating throughout India and many other countries. The client is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities and the planet.

Job Title: Senior Data Scientist
Location: Chennai
Duration: 12 Months
Work Type: Onsite

Position Description:
We are seeking an experienced and highly analytical Senior Data Scientist with a strong statistical background to join our dynamic team. You will be instrumental in leveraging our rich datasets to uncover insights, build sophisticated predictive models, and create impactful visualizations that drive strategic decisions.

Responsibilities:
- Lead the end-to-end lifecycle of data science projects, from defining the business problem and exploring data to developing, validating, deploying, and monitoring models in production.
- Apply advanced statistical methodologies and machine learning algorithms to analyze large, complex datasets (structured and unstructured) and extract meaningful patterns and insights.
- Develop and implement robust, scalable, and automated processes for data analysis and model pipelines, leveraging cloud infrastructure.
- Collaborate closely with business stakeholders and cross-functional teams to understand their analytical needs, translate them into technical requirements, and effectively communicate findings.
- Create compelling and interactive dashboards and data visualizations to clearly present complex results and insights to both technical and non-technical audiences.
- Stay up to date with the latest advancements in statistics, machine learning, and cloud technologies, and advocate for the adoption of best practices.

Skills Required: Statistics, Machine Learning, Data Science, Problem Solving, Analytical, Communication Skills
Skills Preferred: GCP, Google Cloud Platform, Mechanical Engineering, Cost Analysis

Experience Required:
- 5+ years of progressive professional experience in a Data Scientist, Machine Learning Engineer, or similar quantitative role, with a track record of successfully delivering data science projects.
- Bachelor's or Master's degree in Statistics. A strong foundation in statistical theory and application is essential for this role. (Highly related quantitative fields like Applied Statistics, Econometrics, or Mathematical Statistics may be considered if they have a demonstrably strong statistical core, but Statistics is our primary focus.)
- Proven hands-on experience applying a variety of machine learning techniques (e.g., regression, classification, clustering, tree-based models, potentially deep learning) to real-world business problems.
- Strong proficiency in Python and its data science ecosystem (e.g., Pandas, NumPy, scikit-learn, potentially TensorFlow or PyTorch).
- Hands-on experience with cloud computing platforms (e.g., AWS, Azure, GCP) for data storage, processing, and deploying analytical solutions.
- Extensive experience creating data visualizations and dashboards to effectively communicate insights. You know how to tell a story with data!
- Solid understanding of experimental design, hypothesis testing, and statistical inference (a small illustrative sketch follows this posting).
- Excellent problem-solving skills, attention to detail, and the ability to work with complex data structures.
- Strong communication, presentation, and interpersonal skills, with the ability to explain technical concepts clearly to diverse audiences.
Experience Preferred:
- Experience working within the automotive industry or with related data such as vehicle telematics, manufacturing quality, supply chain, or customer behavior in an automotive context.
- Experience with GCP services such as BigQuery, GCS, Cloud Run, Cloud Build, Cloud Source Repositories, and Cloud Workflows.
- Proficiency with specific dashboarding and visualization tools such as Looker Studio, Power BI, Qlik, or Tableau.
- Experience with SQL for data querying and manipulation.
- Familiarity with big data technologies (e.g., Spark, Hadoop).
- Experience with MLOps practices and tools for deploying and managing models in production.
- Advanced degree (PhD) in Statistics or a related quantitative field.

Education Required: Bachelor's Degree
Education Preferred: Master's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
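The "experimental design, hypothesis testing, and statistical inference" requirement above can be illustrated with a small sketch; the groups, effect size, and sample sizes below are invented for demonstration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical A/B measurement, e.g. cycle time under two process variants.
group_a = rng.normal(loc=100.0, scale=12.0, size=400)
group_b = rng.normal(loc=97.5, scale=12.0, size=400)

# Welch's two-sample t-test (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Approximate 95% confidence interval for the difference in means.
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
print("95% CI for mean difference:", (round(diff - 1.96 * se, 2), round(diff + 1.96 * se, 2)))
```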
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
With more than 45,000 employees and partners worldwide, the Customer Experience and Success (CE&S) organization is on a mission to empower customers to accelerate business value through differentiated customer experiences that leverage Microsoft’s products and services, ignited by our people and culture. We drive cross-company alignment and execution, ensuring that we consistently exceed customers’ expectations in every interaction, whether in-product, digital, or human-centered. CE&S is responsible for all up services across the company, including consulting, customer success, and support across Microsoft’s portfolio of solutions and products. Join CE&S and help us accelerate AI transformation for our customers and the world.

Within CE&S, the Customer Service & Support (CSS) organization builds trust and confidence for every person and organization through delivering a seamless support experience. In CSS, we are powered by Microsoft’s AI technology to help consumers, businesses, partners, and more, resolve their issues quickly and securely, helping prevent future problems from occurring and achieving more from their Microsoft investment.

In the Customer Service & Support (CSS) team we are looking for people with a passion for delivering customer success. As a Senior Technical Support Engineer, you will own, troubleshoot and solve complex customer technical issues. This opportunity will allow you to accelerate your career growth, hone your problem-solving, collaboration and research skills, and deepen your technical proficiency. This role is flexible in that you can work up to 50% from home.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
Response and Resolution: You own, investigate, and solve complex customer technical issues and act as an advisor to the customer, collaborating within and across teams and leveraging troubleshooting tools and practices.
Readiness: You lead in building communities with peer delivery roles and share your knowledge through readiness programs, technical coaching and mentoring of others. You deepen your technical and professional proficiency to enable you to resolve complex customer issues, through training and readiness.
Product/Process Improvement: You engage with Microsoft Engineering/Supportability teams to investigate potential product defects and help develop automation techniques and diagnostic tools driving Microsoft product improvements.

Qualifications
Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology (IT), or related field AND 3+ years of technical support, technical consulting experience, or information technology experience; OR 5+ years of technical support, technical consulting experience, or information technology experience; OR equivalent experience.

Language Qualification:
- English language: fluent in reading, writing and speaking.
Preferred Qualifications
Windows system administration and configuration, including a good basic understanding of:
- Registry
- File Storage
- User Accounts and Access Control
- Event Logs and Auditing
- Performance Monitor, Resource Monitor
- Networking (TCP/IP)

Experience in one or more of these areas desirable:
- Automated installation of Windows
- User Profile management
- Windows Update management
- Kerberos and delegation
- BitLocker administration
- Windows Shell configuration and management
- Windows Activation and Licensing
- Remote Desktop Services configuration and management
- Clustering
- Printing configuration and management
- Resilient storage technology (clustering, storage spaces)
- Server management tools
- Hyper-V management and VM deployment
- Application installation and management
- Windows backup and VSS
- PowerShell scripting
- Active Directory topology and management
- Network tracing and analysis
- Public Key Infrastructure (PKI) deployment and management
- Remote file systems (SMB)
- Group Policy management
- DNS deployment and management
- Troubleshooting hangs and crashes in Windows
- Network virtualisation (Hyper-V, SDN)
- Troubleshooting performance issues using PerfMon and other tools

Strong experience in the technologies below:
- Memory management, Windows Registry, blue screens
- Windows Shell configuration and management
- Server hang and crash, server no-boot and reboot scenarios
- Troubleshooting server performance issues using PerfMon and other tools

Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screenings: Microsoft Cloud Background Check: this position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 week ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
We are looking for candidates with 8+ years of experience for this role.

Job Location: Technopark, Trivandrum

Experience: 8+ years of experience in Microsoft SQL Server administration

Primary skills: Strong experience in Microsoft SQL Server administration

Qualifications:
- Bachelor's degree in computer science, software engineering or a related field.
- Microsoft SQL certifications (MTA Database, MCSA: SQL Server, MCSE: Data Management and Analytics) will be an advantage.

Secondary Skills:
- Experience in MySQL, PostgreSQL, and Oracle database administration.
- Exposure to Data Lake, Hadoop, and Azure technologies.
- Exposure to DevOps or ITIL.

Main duties/responsibilities:
- Optimize database queries to ensure fast and efficient data retrieval, particularly for complex or high-volume operations.
- Design and implement effective indexing strategies to reduce query execution times and improve overall database performance.
- Monitor and profile slow or inefficient queries and recommend best practices for rewriting or re-architecting them.
- Continuously analyze execution plans for SQL queries to identify bottlenecks and optimize them.
- Database maintenance: schedule and execute regular maintenance tasks, including backups, consistency checks, and index rebuilding.
- Health monitoring: implement automated monitoring systems to track database performance, availability, and critical parameters such as CPU usage, memory, disk I/O, and replication status.
- Proactive issue resolution: diagnose and resolve database issues (e.g., locking, deadlocks, data corruption) proactively, before they impact users or operations.
- High availability: implement and manage database clustering, replication, and failover strategies to ensure high availability and disaster recovery (e.g., using tools such as SQL Server Always On, Oracle RAC, MySQL Group Replication).
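As a rough illustration of the query-profiling duties above, a DBA might pull the heaviest statements from SQL Server's plan-cache DMVs. The sketch below assumes the pyodbc driver and a placeholder server name; the column choices and TOP threshold are examples only, not part of the posting.

```python
import pyodbc

# Connection string is a placeholder; adjust driver, server, database, and auth for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver01;DATABASE=master;Trusted_Connection=yes;"
)

# Top queries by total CPU time, taken from the plan-cache DMVs.
query = """
SELECT TOP (10)
    qs.total_worker_time / 1000                        AS total_cpu_ms,
    qs.execution_count,
    qs.total_elapsed_time / qs.execution_count / 1000  AS avg_elapsed_ms,
    SUBSTRING(st.text, 1, 200)                         AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
"""

for row in conn.cursor().execute(query):
    print(f"{row.total_cpu_ms:>12} ms CPU | {row.execution_count:>8} execs | {row.query_text}")
```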
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Purpose:
Understand business processes and data, and model the requirements to create analytics solutions. Build predictive models and recommendation engines using state-of-the-art machine learning techniques to help business processes increase the efficiency and effectiveness of their outcomes. Churn and analyze the data to discover actionable insights and patterns for business use. Assist the Function Head in data preparation and modelling tasks as required.

Job Outline:
- Collaborate with Business and IT teams to understand and collect data.
- Collect, collate, clean, process and transform large volume(s) of primarily tabular data (a blend of numerical, categorical and some text).
- Apply data preparation techniques such as data filtering, joining, cleaning, missing-value imputation, feature extraction, feature engineering, feature selection, dimensionality reduction, feature scaling, variable transformation, etc.
- Apply, as required, basic algorithms such as linear regression, logistic regression, ANOVA, KNN, clustering (k-means, density-based, hierarchical, etc.), SVM, Naïve Bayes, decision trees, principal components, association rule mining, etc.
- Apply, as required, ensemble modelling algorithms such as bagging (Random Forest), boosting (GBM, LightGBM, XGBoost, CatBoost), time-series modelling and other state-of-the-art algorithms.
- Apply, as required, modelling concepts such as hyperparameter optimization, feature selection, stacking, blending, K-fold cross-validation, bias and variance, overfitting, etc. (a minimal workflow sketch follows this posting).
- Build predictive models using state-of-the-art machine learning techniques for regression, classification, clustering, recommendation engines, etc.
- Perform advanced analytics of the business data to find hidden patterns, insights and explanatory causes, and make strategic business recommendations based on the same.

Knowledge/Education: BE/B.Tech - any stream

Skills:
- Strong expertise in Python libraries such as Pandas and scikit-learn, along with the ability to code according to the requirements stated in the Job Outline above.
- Experience with Python editors such as PyCharm and/or Jupyter Notebooks (or other editors) is a must. Ability to organize code into modules, functions and/or objects is a must.
- Knowledge of using ChatGPT for ML will be preferred.
- Familiarity with basic SQL for querying and Excel for data analysis is a must.
- Should understand basics of statistics such as distributions, hypothesis testing, sampling techniques, etc.

Work Experience:
- At least 4 years of experience solving business problems through data analytics, data science and modelling.
- At least 2 years of experience as a full-time Data Scientist.
- At least 3 projects in ML model building that were used in production by business or other clients.

Skills/Experience Preferred but not compulsory:
- Familiarity with using ChatGPT, LLMs, out-of-the-box models, etc. for data preparation and model building.
- Kaggle experience.
- Familiarity with R.
Job Interface/Relationships:
- Internal: Work with different business teams to build predictive models for them.
- External: None

Key Responsibilities and % Time Spent:
- Data preparation for modelling - data extraction, cleaning, joining & transformation - 35%
- Build ML/AI models for various business requirements - 35%
- Perform custom analytics for providing actionable insights to the business - 20%
- Assist the Function Head in data preparation & modelling tasks as required - 10%

Any other additional input - will not be considered for selection:
- Familiarity with deep learning algorithms
- Image processing & classification
- Text modelling using NLP techniques
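The modelling workflow outlined in this posting (imputation, scaling, encoding, an ensemble model, K-fold cross-validation and hyperparameter optimization) might be wired together roughly as in the sketch below; the file name, column names, target, and parameter grid are hypothetical placeholders, not anything specified by the employer.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical tabular dataset: numeric + categorical features and a binary target.
df = pd.read_csv("business_data.csv")           # placeholder file name
numeric_cols = ["feature_a", "feature_b"]        # placeholder column names
categorical_cols = ["segment", "region"]
X, y = df[numeric_cols + categorical_cols], df["target"]

# Preprocessing: median/mode imputation, scaling for numerics, one-hot encoding for categoricals.
preprocess = ColumnTransformer([
    ("num", Pipeline([("impute", SimpleImputer(strategy="median")),
                      ("scale", StandardScaler())]), numeric_cols),
    ("cat", Pipeline([("impute", SimpleImputer(strategy="most_frequent")),
                      ("encode", OneHotEncoder(handle_unknown="ignore"))]), categorical_cols),
])

model = Pipeline([("prep", preprocess),
                  ("clf", RandomForestClassifier(random_state=42))])

# Hyperparameter optimization with stratified K-fold cross-validation.
param_grid = {"clf__n_estimators": [200, 400], "clf__max_depth": [None, 10, 20]}
search = GridSearchCV(model, param_grid,
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
                      scoring="roc_auc", n_jobs=-1)
search.fit(X, y)
print("best params:", search.best_params_, "CV AUC:", round(search.best_score_, 3))
```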
Posted 1 week ago
0.0 - 2.0 years
0 Lacs
Pune, Maharashtra
On-site
Location: Pune, Maharashtra, India
Category: Digital Technology
Job ID: R137761
Posted: Jun 10th 2025

Job Description

Staff Build & Release Engineer

Would you enjoy designing innovative software for energy products? Do you like working in collaborative teams and solving technical problems? Join our cutting-edge Software Development team.

Our Digital Solutions business provides intelligent, connected technologies to monitor and control our energy extraction assets. We provide customers with the peace of mind needed to reliably and efficiently improve their operations. Our team is building a next-generation platform of software for intelligent decisions, supporting the mission-critical requirements of customers.

Partner with the best
As a Staff Build & Release Engineer you will develop high-performing, scalable and innovative end-to-end applications. You will collaborate extensively with system engineers, product owners, subject matter experts and various product stakeholders to create unique products. You will implement solutions that are aligned with our future and extend shared platforms and solutions.

As a Staff Build & Release Engineer, you will be responsible for:
- Support and improve our tools and processes for continuous deployment management
- Support the solution Infra Architect to deploy the application and infrastructure to customer private/public cloud
- Debug Docker image/container and Kubernetes cluster issues
- Build monitoring tools around Kubernetes/AKS clusters (see the sketch following this posting)
- Develop process tools to track customer releases and create update plans
- Develop processes to ensure patching/updates take place without affecting the operational SLA
- Meet the availability SLA, working with the Infra and application teams responsible for 24x7 operations
- Profile the deployment process and identify bottlenecks
- Demonstrate expertise in writing scripts to automate tasks; implement Continuous Integration/Deployment build principles
- Provide expertise in quality engineering, test planning and testing methodology for developed code, images and containers
- Help businesses develop an overall strategy for deploying code; contribute to planning and strategy with your ideas, drawing on experience to influence others
- Be expert at applying principles of the SDLC and methodologies such as Lean/Agile/XP, CI, software and product security, scalability, documentation practices, refactoring and testing techniques
- Be able to document procedures for building and deploying

Fuel your passion
To be successful in this role you will have:
- A Bachelor's education in Computer Science, IT or Engineering
- At least 8+ years of production experience providing hands-on technical expertise to design, deploy, secure, and optimize cloud services
- Hands-on experience with containerization technologies (Docker, Kubernetes) - a must (minimum 2 years)
- Production operations support experience, preferably with a cloud services provider (AWS, Azure or GCP)
- Experience creating, maintaining and deploying automated build tools for a minimum of 2 years
- In-depth knowledge of clustering, load balancing, high availability, disaster recovery and auto scaling
- Infrastructure-as-Code (IaC) using Terraform/CloudFormation
- Good to have: knowledge of application & infrastructure monitoring tools such as Prometheus, Grafana, Kibana, New Relic, Nagios
- Hands-on experience with CI/CD tools such as Jenkins
- Understanding of standard networking concepts such as DNS, DHCP, subnets, server load balancing, firewalls
- Knowledge of web-based application development
- Strong knowledge of Unix/Linux and/or Windows operating systems
- Experience with common scripting languages (Bash, Perl, Python, Ruby)
- Ability to assess code, build it, and run applications locally
- Experience with creating and maintaining automated build tools
- Facilitates and coaches software engineering team sessions on requirements estimation and alternative approaches to team sizing and estimation
- Publishes guidance and documentation to promote adoption of design
- Proposes design solutions based on research and synthesis; creates general design principles that capture the vision and critical concerns for a program
- Demonstrates mastery of the intricacies of interactions and dynamics in Agile teams

Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all of our people are developed, engaged and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent and develop leaders at all levels to bring out the best in each other.

Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we have to push the boundaries today. We prioritize rewarding those who embrace change with a package that reflects how much we value their input. Join us, and you can expect:
- Contemporary work-life balance policies and wellbeing activities
- Comprehensive private medical care options
- Safety net of life insurance and disability programs
- Tailored financial programs
- Additional elected or voluntary benefits

#digitalpilot

About Us:
We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward – making it safer, cleaner and more efficient for people and the planet.

Join Us:
Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let’s come together and take energy forward.

Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
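For the "build monitoring tools around Kubernetes/AKS clusters" responsibility, a very small health-check script using the official Python Kubernetes client could look like the sketch below. The restart threshold and the assumption that a local kubeconfig already exists (e.g. pulled for an AKS cluster via `az aks get-credentials`) are illustrative choices, not details from the posting.

```python
from kubernetes import client, config

# Assumes a kubeconfig is available locally for the target cluster.
config.load_kube_config()
v1 = client.CoreV1Api()

# Flag pods that are not Running/Succeeded or that have restarted repeatedly.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
    if pod.status.phase not in ("Running", "Succeeded") or restarts > 3:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
              f"phase={pod.status.phase}, restarts={restarts}")
```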
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
You deserve to do what you love, and love what you do – a career that works as hard for you as you do. At Fiserv, we are more than 40,000 #FiservProud innovators delivering superior value for our clients through leading technology, targeted innovation and excellence in everything we do. You have choices – if you strive to be a part of a team driven to create with purpose, now is your chance to Find your Forward with Fiserv.

Responsibilities

Requisition ID: R-10356383
Date posted: 06/10/2025
End Date: 06/20/2025
City: Pune
State/Region: Maharashtra
Country: India
Location Type: Onsite

Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv.

Job Title: Professional, Systems Engineering

What does a successful Snowflake Advisor do?
We are seeking a highly skilled and experienced Snowflake Advisor to take ownership of our data warehousing strategy, implementation, maintenance and support. In this role, you will design, develop, and lead the adoption of Snowflake-based solutions to ensure scalable, efficient, and secure data systems that empower our business analytics and decision-making processes. As a Snowflake Advisor, you will collaborate with cross-functional teams, lead data initiatives, and act as the subject matter expert for Snowflake across the organization.

What you will do:
- Define and implement best practices for data modelling, schema design and query optimization in Snowflake
- Develop and manage ETL/ELT workflows to ingest, transform and load data into Snowflake from various sources
- Integrate data from diverse systems such as databases, APIs, flat files and cloud storage into Snowflake, using tools like StreamSets, Informatica or dbt to streamline data transformation processes
- Monitor and tune Snowflake performance, including warehouse sizing, query optimization and storage management
- Manage Snowflake caching, clustering and partitioning to improve efficiency
- Analyze and resolve query performance bottlenecks (see the sketch following this posting)
- Monitor and resolve data quality issues within the warehouse
- Collaborate with data analysts, data engineers and business users to understand reporting and analytics needs
- Work closely with the DevOps team on automation, deployment and monitoring
- Plan and execute strategies for scaling Snowflake environments as data volume grows
- Monitor system health and proactively identify and resolve issues
- Implement automation for regular tasks
- Enable seamless integration of Snowflake with BI tools like Power BI and create dashboards
- Support ad hoc query requests while maintaining system performance
- Create and maintain documentation related to data warehouse architecture, data flow, and processes
- Provide technical support, troubleshooting, and guidance to users accessing the data warehouse
- Optimize Snowflake queries and manage performance
- Keep up to date with emerging trends and technologies in data warehousing and data management
- Good working knowledge of the Linux operating system
- Working experience with Git and other repository management solutions
- Good knowledge of monitoring tools like Dynatrace and Splunk
- Serve as a technical leader for Snowflake-based projects, ensuring alignment with business goals and timelines
- Provide mentorship and guidance to team members in Snowflake implementation, performance tuning and data management
- Collaborate with stakeholders to define and prioritize data warehousing initiatives and roadmaps
- Act as the point of contact for Snowflake-related queries, issues and initiatives

What you will need to have:
- 8 to 10 years of experience in data management tools like Snowflake, StreamSets and Informatica
- Experience with monitoring tools like Dynatrace and Splunk
- Experience with Kubernetes cluster management, CloudWatch for monitoring and logging, and the Linux OS
- Ability to track progress against assigned tasks, report status, and proactively identify issues
- Ability to present information effectively in communications with peers and the project management team
- Highly organized and works well in a fast-paced, fluid and dynamic environment

What would be great to have:
- Experience with EKS for managing Kubernetes clusters
- Containerization technologies such as Docker and Podman
- AWS CLI for command-line interactions
- CI/CD pipelines using Harness
- S3 for storage solutions and IAM for access management
- Banking and financial services experience
- Knowledge of software development life cycle best practices

Thank you for considering employment with Fiserv. Please:
- Apply using your legal name
- Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable)

Our commitment to Diversity and Inclusion:
Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law.

Note to agencies:
Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions.

Warning about fake job posts:
Please be aware of fraudulent job postings that are not affiliated with Fiserv.
Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
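One way to approach the "analyze and resolve query performance bottlenecks" item above is to start from Snowflake's query history. The sketch below uses the snowflake-connector-python package; the account, user, warehouse, and role values are placeholders, and querying the ACCOUNT_USAGE share requires appropriate privileges and reflects data with some latency.

```python
import snowflake.connector

# Connection parameters are placeholders; in practice they come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="monitor_user", password="***",
    warehouse="ADMIN_WH", role="SYSADMIN",
)

# Longest-running successful queries over the last day.
sql = """
SELECT query_id,
       warehouse_name,
       total_elapsed_time / 1000 AS elapsed_s,
       LEFT(query_text, 120)     AS query_text
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD('day', -1, CURRENT_TIMESTAMP())
  AND execution_status = 'SUCCESS'
ORDER BY total_elapsed_time DESC
LIMIT 20;
"""

for query_id, wh, elapsed_s, text in conn.cursor().execute(sql):
    print(f"{query_id}  {elapsed_s:>8.1f}s  {wh}  {text}")
```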
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 16 years full time education

Cloud Database Engineer HANA

Required Skills:
- SAP HANA database administration - knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability
- Proficiency in monitoring and maintaining the health and performance of high-availability systems
- Experience with public cloud platforms such as GCP, AWS, or Azure
- Strong troubleshooting skills and the ability to provide effective resolutions for technical issues

Desired Skills:
- Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres
- Growth and product mindset and a strong focus on automation
- Working knowledge of Kubernetes for container orchestration and scalability

Activities:
- Collaborate closely with cross-functional teams to gather requirements and support SAP teams to execute database initiatives.
- Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments.
- Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and previously agreed.
- Act as the point of contact for escalated technical issues with our Engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions to unblock our partners.

Requirements:
- Bachelor's degree in computer science, engineering, or a related field.
- Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems.
- Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes.
- Strong troubleshooting skills and the ability to provide effective resolutions for technical issues.
- Familiarity with public cloud platforms such as GCP, AWS, or Azure.
- Understands Agile principles and methodologies.
Posted 1 week ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration, Ansible on Microsoft Azure
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 16 years full time education

Cloud Database Engineer HANA

Required Skills:
- SAP HANA database administration - knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability
- Proficiency in monitoring and maintaining the health and performance of high-availability systems
- Experience with public cloud platforms such as GCP, AWS, or Azure
- Strong troubleshooting skills and the ability to provide effective resolutions for technical issues

Desired Skills:
- Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres
- Growth and product mindset and a strong focus on automation
- Working knowledge of Kubernetes for container orchestration and scalability

Activities:
- Collaborate closely with cross-functional teams to gather requirements and support SAP teams to execute database initiatives.
- Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments.
- Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and previously agreed.
- Act as the point of contact for escalated technical issues with our Engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions to unblock our partners.

Requirements:
- Bachelor's degree in computer science, engineering, or a related field.
- Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems.
- Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes.
- Strong troubleshooting skills and the ability to provide effective resolutions for technical issues.
- Familiarity with public cloud platforms such as GCP, AWS, or Azure.
- Understands Agile principles and methodologies.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations.

Your Team's Impact
Join our dynamic Machine Learning and AI team, where we build innovative models and solutions that drive business transformation and unlock new opportunities. You’ll be at the forefront of AI-powered initiatives, collaborating closely with product teams to shape the future of data-driven insights. This position offers high visibility and the chance to directly influence key decisions across the organization. You will report to the Manager of AI.

Working Mode: Hybrid (3 days mandatory in office)

What You'll Do

Technical Leadership & Strategy
- Lead the design, development, and deployment of machine learning models and systems at scale.
- Define and drive the technical roadmap for ML initiatives in alignment with business goals.
- Evaluate and select appropriate ML techniques, architectures, and tools for various problems (e.g., NLP, CV, tabular data).
- Ensure robust experimentation, validation, and performance benchmarking practices.

Team Guidance & Mentorship
- Mentor and support junior and mid-level ML engineers, guiding them on model development, research approaches, and code quality.
- Conduct technical reviews of models, pipelines, and code to ensure high standards.
- Promote a culture of continuous learning, innovation, and scientific rigor within the team.

System & Pipeline Development
- Architect and implement scalable ML pipelines for training, validation, inference, and monitoring.
- Collaborate with data engineers to ensure high-quality data ingestion, feature engineering, and labeling workflows.
- Contribute to MLOps practices by building reproducible, testable, and maintainable model delivery frameworks.
- Assist in designing, developing, and implementing machine learning models for real-world applications.
- Work on data collection, preprocessing, feature engineering, and model evaluation tasks.
- Collaborate with cross-functional teams including Data Science, Software Engineering, and Product.
- Perform exploratory data analysis (EDA) and prepare datasets for training/testing.
- Contribute to the deployment and monitoring of models in production environments.
- Write clean, efficient, and well-documented code in Python or similar languages.
- Stay updated with the latest developments in AI/ML research and tools.
- Assist in model optimization, hyperparameter tuning, and performance scaling.
- Stay current with the latest industry trends and technologies, contributing innovative ideas to ongoing projects.
- Test and validate models to ensure their reliability and effectiveness in production environments.
- Work with large datasets to extract meaningful insights using various statistical and ML techniques.

What We're Looking For (Required Skills)
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in software development, with a focus on systems handling large-scale data operations.
- Strong foundation in machine learning concepts (supervised/unsupervised learning, regression, classification, clustering, etc.).
- Good programming skills in Python (or similar languages like R, Java, C++).
- Hands-on experience with ML libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch, Keras).
- Understanding of data structures, algorithms, basic mathematics/statistics, and database management systems.
- Excellent verbal and written communication skills, capable of articulating complex concepts to technical and non-technical audiences.
- Familiarity with data handling tools (e.g., Pandas, NumPy, SQL).
- Good analytical, problem-solving, and communication skills.
- Ability to learn new technologies quickly and work independently or as part of a team.
- Ability to work collaboratively in a team environment, contributing to group success while expanding personal skills.

Desired Skills
- Exposure to deep learning, NLP, computer vision, or reinforcement learning projects (academic or internships).
- Knowledge of cloud platforms like AWS, Azure, or GCP.
- Familiarity with version control systems (e.g., Git).
- Understanding of MLOps concepts and pipelines (bonus).

What's In It For You
At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
- The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
- Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
- Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
- A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
- Career progression planning with dedicated time each month for learning and development.
- Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.

Learn More About Our Benefits Here. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications.

Company Overview:
FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees’ Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
Posted 1 week ago
25.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Company
PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy.

We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers.

We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade.

Our beliefs are the foundation for how we conduct business every day. We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities.

Job Description Summary

What you need to know about the role
Each Data Scientist on this team has full ownership of a product portfolio and is responsible for end-to-end management of loss and decline rates. Day-to-day duties include data analysis, monitoring and forecasting, creating the logic for and implementing risk rules and strategies, providing requirements to data scientists and technology teams on attribute, model and platform needs, and communicating with global stakeholders to ensure we deliver the best possible customer experience while meeting loss rate targets.

Meet our team
PayPal's Global Fraud Protection team is responsible for partnering with global business units to manage a variety of risk types, including identity fraud, account takeover, stolen financial fraud, and credit issues. This is an exciting department that plays an important role in contributing to PayPal's bottom-line financial savings, ensuring safe and secure global business growth, and delivering the best customer experience. This open opportunity is within the Large Merchant and Markets Fraud Risk team. This portfolio is comprised of PayPal's newest leading-edge payments solutions, such as Risk-as-Service, Fastlane and PayPal Complete Payments,
as well as customized experiences developed for the company's highest-priority strategic markets and partnerships.

Job Description:

Your way to impact
You will be a Data Scientist in the Fraud Risk team, where you will lead new projects to build and improve risk strategies that prevent fraud, using risk tooling and custom data and AI/ML models. In this position, you will partner with the corresponding business units to align with and influence their strategic priorities, educate business partners about risk management principles, and collaboratively optimize the risk treatments and experiences for these unique products and partners.

Your day to day
In your day-to-day role you will:
- Have full ownership of a portfolio of merchants and be responsible for end-to-end management of loss and decline rates.
- Collaborate with different teams to develop strategies for fraud prevention and loss savings, and to optimize transaction declines or improve customer friction.
- Work together with cross-functional teams to deliver solutions and provide risk analytics on frustration trend/KPI monitoring and alerting for fraud events. These solutions will adapt PayPal's advanced proprietary fraud prevention tools, enabling business growth.

What do you need to bring?
- 2-4 years of relevant experience working with large-scale, complex datasets.
- Strong analytical mindset, the ability to decompose business requirements into an analytical plan, and the ability to execute that plan to answer those business questions.
- Excellent communication skills, equally adept at working with engineers as well as business leaders.
- A desire to build new solutions and invent new approaches to big, ambiguous, critical problems.
- Strong working knowledge of Excel, SQL and Python/R.
- Technical proficiency: exploratory data analysis and expertise in preparing clean, structured data for model development.
- Experience applying AI/ML techniques for business decisioning, including supervised and unsupervised learning (e.g., regression, classification, clustering, decision trees, anomaly detection, etc.).
- Knowledge of model evaluation techniques such as precision, recall and the ROC-AUC curve, along with basic statistical concepts (see the sketch following this posting).

For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations.

Our Benefits:
At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset—you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more.
To learn more about our benefits, please visit https://www.paypalbenefits.com

Who We Are:
To learn more about our culture and community, visit https://about.pypl.com/who-we-are/default.aspx

Commitment to Diversity and Inclusion
PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com.

Belonging at PayPal:
Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal.

For general requests for consideration of your skills, please join our Talent Community.

We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply.

REQ ID: R0127427
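The model-evaluation techniques named in the posting (precision, recall, ROC-AUC) are straightforward to compute with scikit-learn. The sketch below uses purely synthetic, imbalanced "fraud" labels, so the numbers it prints are meaningless beyond showing the mechanics; the 3% positive rate and 0.5 decision threshold are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, heavily imbalanced data standing in for transaction features and fraud labels.
rng = np.random.default_rng(7)
X = rng.normal(size=(5000, 10))
y = (rng.random(5000) < 0.03).astype(int)      # ~3% positive ("fraud") rate

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=7)

model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]     # probability of the positive class
preds = (scores >= 0.5).astype(int)            # decision threshold is a tunable choice

print("precision:", round(precision_score(y_test, preds, zero_division=0), 3))
print("recall   :", round(recall_score(y_test, preds, zero_division=0), 3))
print("ROC-AUC  :", round(roc_auc_score(y_test, scores), 3))
```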
Posted 1 week ago
12.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Description As a Care Engineer for Nokia Mediation, you'll deliver end-to-end global support while combining technical expertise with hands-on development—owning code customizations, fixes, and delivery to customers. You'll act as a technical leader and mentor, contributing to global improvement initiatives and serving as a Subject Matter Expert. While the role is primarily remote, it may occasionally involve on-site customer visits. You'll also be part of a 24x7 support rotation, ensuring high service availability across supported regions. How You Will Contribute And What You Will Learn Deliver end-to-end (L2–L4) support for Nokia’s Digital Business suite—primarily Nokia Mediation—ensuring timely resolution of customer issues within SLA through root cause analysis, solution delivery, and source code fixes. Meet and exceed Care quality standards and KPIs while actively contributing to a high-performance, innovation-driven support culture. Collaborate with cross-functional teams to address support and project-related needs efficiently and effectively. Engage directly with customers, requiring strong communication skills and the ability to manage expectations in high-pressure environments. Participate in 24x7 emergency support rotations, while contributing to continuous improvement initiatives focused on Care efficiency, product enhancement, and overall customer experience. Key Skills And Experience You have: A Bachelor's / Master's degree or equivalent with over 12 years of hands-on experience in technical support, service deployment, or software development for complex software applications, including L3/L4 support and R&D involvement At least 10 years of practical expertise in UNIX/Linux scripting with strong proficiency in operating systems and shell environments Proven experience with databases and programming languages, including RedisDB, Postgres, MariaDB, Oracle, Hadoop, SQL, Java, C, Perl, and PL/SQL Strong knowledge of networking, IP protocols, and cloud technologies, along with virtualization and clustering platforms such as OpenStack, VMware vSphere, OpenShift, and Kubernetes It would be Good if you also had: Motivated, independent, and able to build and maintain good relationship with customers and internal stakeholders About Us Come create the technology that helps the world act together Nokia is committed to innovation and technology leadership across mobile, fixed and cloud networks. Your career here will have a positive impact on people’s lives and will help us build the capabilities needed for a more productive, sustainable, and inclusive world. We challenge ourselves to create an inclusive way of working where we are open to new ideas, empowered to take risks and fearless to bring our authentic selves to work What we offer Nokia offers continuous learning opportunities, well-being programs to support you mentally and physically, opportunities to join and get supported by employee resource groups, mentoring programs and highly diverse teams with an inclusive culture where people thrive and are empowered. Nokia is committed to inclusion and is an equal opportunity employer Nokia has received the following recognitions for its commitment to inclusion & equality: One of the World’s Most Ethical Companies by Ethisphere Gender-Equality Index by Bloomberg Workplace Pride Global Benchmark At Nokia, we act inclusively and respect the uniqueness of people. 
Nokia’s employment decisions are made regardless of race, color, national or ethnic origin, religion, gender, sexual orientation, gender identity or expression, age, marital status, disability, protected veteran status or other characteristics protected by law. We are committed to a culture of inclusion built upon our core value of respect. Join us and be part of a company where you will feel included and empowered to succeed. About The Team As Nokia's growth engine, we create value for communication service providers and enterprise customers by leading the transition to cloud-native software and as-a-service delivery models. Our inclusive team of dreamers, doers and disruptors push the limits from impossible to possible. Show more Show less
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Before you apply to a job, select your language preference from the options available at the top right of this page. Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description Job Summary This position provides input and support for full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.). He/She performs tasks within planned durations and established deadlines. This position collaborates with teams to ensure effective communication and support the achievement of objectives. He/She provides knowledge, development, maintenance, and support for applications. Qualification Up to 4 years of experience Understanding of the IT infrastructure and its relationship to the operation Bachelor's degree in computer science, Information Systems, or equivalent preferred Primary Skills Strong knowledge of - Server Administration Networking Linux Administration Windows Server Administration SQL Server Administration Proficient with version control (Git) Managing packages (rpm, yum, apt) Process and service management (ps, kill, systemctl, cron) Secure remote access and file transfer (ssh, scp) Shell scripting & text processing (awk, sed) Networking basics and diagnostics (ping, curl, telnet, netstat, iptables, lsof) File system operations and storage management (df, du, mount, ln) System performance monitoring (top, htop, strace) Good knowledge of PC hardware and server architecture and networking Good documentation skills Good troubleshooting and analytical skills Good process management skills Proficient in Microsoft Office Secondary Skills Basic knowledge of clustering technologies Willingness to learn new technologies Minimal supervision required Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation. Show more Show less
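The UPS posting above stresses routine checks such as file-system usage, connectivity diagnostics, and performance monitoring. Purely as a hedged illustration of automating such checks in Python (not a UPS procedure), the sketch below uses only the standard library; the host names and disk threshold are made-up, and the ping flags assume a Linux system.

```python
# Illustrative health-check automation using only the Python standard library.
# Host list, disk threshold, and Linux-style ping flags are assumptions for the example.
import shutil
import subprocess

HOSTS = ["gateway.example.com", "db.example.com"]   # placeholder hosts
DISK_ALERT_PCT = 90                                  # alert when a mount is >90% full

def disk_usage_pct(path="/"):
    usage = shutil.disk_usage(path)                  # equivalent of `df` for one mount
    return 100 * usage.used / usage.total

def host_reachable(host, count=1, timeout=2):
    # Equivalent of a quick `ping -c 1 -W 2 <host>` connectivity check on Linux.
    result = subprocess.run(["ping", "-c", str(count), "-W", str(timeout), host],
                            capture_output=True)
    return result.returncode == 0

if __name__ == "__main__":
    pct = disk_usage_pct("/")
    print(f"/ is {pct:.1f}% full" + (" -- ALERT" if pct > DISK_ALERT_PCT else ""))
    for host in HOSTS:
        status = "reachable" if host_reachable(host) else "UNREACHABLE"
        print(f"{host}: {status}")
```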
Posted 1 week ago
8.0 - 10.0 years
7 - 11 Lacs
Chennai
Work from Office
About The Role
Role Purpose
The purpose of the role is to create exceptional architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction.
Mandatory Skills: Data Science, ML, DL, NLP or Computer Vision, Python, TensorFlow, PyTorch, Django, PostgreSQL
Preferred Skills: Gen AI, LLM, RAG, LangChain, Vector DB, Azure Cloud, MLOps, Banking exposure
3. Competency Building and Branding
- Ensure completion of necessary trainings and certifications
- Develop Proof of Concepts (POCs), case studies, demos etc. for new growth areas based on market and customer research
- Develop and present Wipro's point of view on solution design and architecture by writing white papers, blogs etc.
- Attain market referenceability and recognition through highest analyst rankings, client testimonials and partner credits
- Be the voice of Wipro’s Thought Leadership by speaking in forums (internal and external)
- Mentor developers, designers and junior architects in the project for their further career development and enhancement
- Contribute to the architecture practice by conducting selection interviews etc.
Mandatory
- Strong understanding of Data Science, machine learning and deep learning principles and algorithms.
- Proficiency in Python and frameworks such as TensorFlow and PyTorch.
- Ability to work with large datasets and knowledge of data preprocessing techniques.
- Strong backend Python developer.
- Experience in applying machine learning techniques, Natural Language Processing or Computer Vision using TensorFlow or PyTorch.
- Build and deploy end-to-end ML models and leverage metrics to support predictions, recommendations, search, and growth strategies.
- Expert in applying ML techniques such as classification, clustering, deep learning, optimization methods, and supervised and unsupervised techniques.
- Optimize model performance and scalability for real-time inference and deployment.
- Experiment with different hyperparameters and model configurations to improve AI model quality.
- Ensure AI/ML solutions are developed, and validations are performed, in accordance with Responsible AI guidelines.
4. Team Management
Resourcing
- Anticipate new talent requirements as per market/industry trends or client requirements
- Hire adequate and right resources for the team
Talent Management
- Ensure adequate onboarding and training for the team members to enhance capability & effectiveness
- Build an internal talent pool and ensure their career progression within the organization
- Manage team attrition
- Drive diversity in leadership positions
Performance Management
- Set goals for the team, conduct timely performance reviews and provide constructive feedback to own direct reports
- Ensure that Performance Nxt is followed for the entire team
Employee Satisfaction and Engagement
- Lead and drive engagement initiatives for the team
- Track team satisfaction scores and identify initiatives to build engagement within the team
Mandatory Skills: Generative AI.
Experience: 8-10 Years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions.
Applications from people with disabilities are explicitly welcome.
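The Wipro architect role above calls for building and deploying end-to-end models with PyTorch, including classification. The sketch below is a minimal, generic PyTorch training loop on synthetic data, intended only to illustrate that workflow; the network sizes, learning rate, and epoch count are arbitrary assumptions, not Wipro code.

```python
# Minimal PyTorch classification sketch: synthetic data, a small feed-forward
# network, and a basic training loop. All hyperparameters are illustrative.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic 2-class dataset: 1,000 samples with 20 features.
X = torch.randn(1000, 20)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()   # a simple, learnable labelling rule

model = nn.Sequential(
    nn.Linear(20, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X)          # raw scores for the two classes
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
print(f"final training loss {loss.item():.3f}, accuracy {accuracy:.2%}")
```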
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Project Role: Product Owner
Project Role Description: Drives the vision for the product by being the voice of the customer, following a human-centered design approach. Shapes and manages the product roadmap and product backlog and ensures the product team consistently delivers on the clients' needs and wants. Validates and tests ideas through recurrent feedback loops to ensure knowledge discovery informs timely direction changes.
Must have skills: Data Analytics
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Product Owner, you will drive the vision for the product by being the voice of the customer, following a human-centered design approach. You will shape and manage the product roadmap and product backlog, ensuring the product team consistently delivers on the clients' needs and wants. You will validate and test ideas through recurrent feedback loops to ensure knowledge discovery informs timely direction changes.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Lead the product vision and strategy.
- Collaborate with stakeholders to define product requirements.
- Prioritize and manage the product backlog.
- Facilitate communication within the product team and with external stakeholders.
- Analyze market trends and competition to inform product decisions.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Data Analytics.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Analytics.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
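The posting above expects a solid grasp of data munging: cleaning, transformation, and normalization. A small, hedged pandas sketch of those three steps on an invented toy DataFrame follows; the column names, fill strategy, and min-max scaling choice are assumptions made purely for illustration.

```python
# Illustrative data-munging sketch: cleaning, transformation and normalization
# with pandas. The toy DataFrame and column names are invented for the example.
import pandas as pd

raw = pd.DataFrame({
    "region":  ["North", "South", None, "South", "North"],
    "revenue": ["1,200", "950", "1,100", None, "1,050"],   # strings with thousands separators
    "orders":  [12, 9, 11, 10, None],
})

df = raw.copy()
# Cleaning: drop rows missing the categorical key, parse numbers, fill numeric gaps.
df = df.dropna(subset=["region"])
df["revenue"] = df["revenue"].str.replace(",", "").astype(float)
df["revenue"] = df["revenue"].fillna(df["revenue"].median())
df["orders"] = df["orders"].fillna(df["orders"].median())

# Transformation: one-hot encode the categorical column.
df = pd.get_dummies(df, columns=["region"], prefix="region")

# Normalization: min-max scale the numeric columns to [0, 1].
for col in ["revenue", "orders"]:
    df[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())

print(df)
```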
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.
Job Description
- The ideal candidate would be adept at understanding customers' business challenges and defining an appropriate analytics approach to design solutions.
- The ideal candidate should possess good communication and project management skills and be able to communicate effectively with a wide range of audiences, both technical and business.
- Ability to accurately comprehend business requirements and translate technical results into easily understood business outcomes.
- The person should also be able to efficiently work with, or manage, an offshore team to produce outcomes in a timely manner. Previous experience working with the current offshore team would be beneficial.
- The person should be competent in Python (Pandas, NumPy, scikit-learn etc.), possess strong analytical skills and have experience in the creation and/or evaluation of predictive models.
- The ideal candidate should have hands-on experience using machine learning techniques.
- They should also have data extraction capabilities (SQL) and/or be able to understand and evaluate code relevant to data extraction.
- Good knowledge of model development/data engineering using Python and R.
- He/she would be responsible for creating presentations, reports etc. to present the analysis findings to the end clients/stakeholders.
- Should possess the ability to confidently socialize business recommendations and enable the customer organization to implement such recommendations.
- The ideal candidate should be able to drive customer engagement and consistently exhibit thought leadership and provide value add.
Technical Experience (using Python):
- Hands-on experience in developing models (end to end): Logistic Regression, Clustering, Decision Tree, Random Forest, Support Vector Machine, Naïve Bayes, Gradient Boosting Machine, Deep Learning, Natural Language Processing
- Tools: Python (mandatory), R (preferred)
- Experience processing large amounts of data using Big Data technologies is preferred
- Knowledge of any of the visualization tools like Tableau, R Shiny etc. is a plus
Functional/Domain Experience:
- Good exposure to the Insurance domain
Certifications/Analytics competitions:
- Recognized certifications in analytics technologies
- Participation in/solving Kaggle competitions
Qualifications
Educational Qualification:
- Masters in Statistics/Mathematics/Economics/Econometrics from Tier 1 institutions, or BE/B-Tech, MCA or MBA from Tier 1 institutions
Relevant Experience:
- 8+ years of hands-on experience in executing analytics projects
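The WNS posting above lists logistic regression and random forests among the models a candidate should be able to build end to end in Python. As a hedged illustration of that kind of model-development step, the snippet below compares the two with 5-fold cross-validation on a dataset bundled with scikit-learn; the dataset and scoring choice are arbitrary and stand in for a real insurance problem.

```python
# Illustrative model-development sketch: comparing logistic regression and a
# random forest with 5-fold cross-validation on a bundled scikit-learn dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean ROC-AUC {scores.mean():.3f} (+/- {scores.std():.3f})")
```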
Posted 1 week ago
3.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Company We provide companies with innovative technology solutions for everyday business problems. Our passion is to help clients become intelligent, information-driven organizations. By embedding fact-based decision-making into daily operations, we optimize processes and outcomes. Experience 3 to 10 Years Required Qualifications: Data Engineering Skills 3–5 years of experience in data engineering, with hands-on experience in Snowflake and basic to intermediate proficiency in dbt. Capable of building and maintaining ELT pipelines using dbt and Snowflake, with guidance on architecture and best practices. Understanding of ELT principles and foundational knowledge of data modeling techniques (preferably Kimball/Dimensional). Intermediate experience with SAP Data Services (SAP DS), including extracting, transforming, and integrating data from legacy systems. Proficient in SQL for data transformation and basic performance tuning in Snowflake (e.g., clustering, partitioning, materializations). Familiar with workflow orchestration tools like dbt Cloud, Airflow, or Control M. Experience using Git for version control and exposure to CI/CD workflows in team environments. Exposure to cloud storage solutions such as Azure Data Lake, AWS S3, or GCS for ingestion and external staging in Snowflake. Working knowledge of Python for basic automation and data manipulation tasks. Understanding of Snowflake's role-based access control (RBAC), data security features, and general data privacy practices like GDPR. Key Responsibilities Design and build robust ELT pipelines using dbt on Snowflake, including ingestion from relational databases, APIs, cloud storage, and flat files. Reverse-engineer and optimize SAP Data Services (SAP DS) jobs to support scalable migration to cloud-based data platforms. Implement layered data architectures (e.g., staging, intermediate, mart layers) to enable reliable and reusable data assets. Enhance dbt/Snowflake workflows through performance optimization techniques such as clustering, partitioning, query profiling, and efficient SQL design. Use orchestration tools like Airflow, dbt Cloud, and Control-M to schedule, monitor, and manage data workflows. Apply modular SQL practices, testing, documentation, and Git-based CI/CD workflows for version-controlled, maintainable code. Collaborate with data analysts, scientists, and architects to gather requirements, document solutions, and deliver validated datasets. Contribute to internal knowledge sharing through reusable dbt components and participate in Agile ceremonies to support consulting delivery. Skills: sql,ci/cd,dbt,data engineering,azure data lake,gcs,git,ci,sap data services,airflow,aws s3,snowflake,python Show more Show less
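The data-engineering role above centres on dbt models over Snowflake, with clustering keys called out as a performance lever. dbt models themselves are SQL, but as a hedged Python illustration of the same ideas, the sketch below uses the snowflake-connector-python package to materialize a staging table and apply a clustering key; the account, credentials, and table names are placeholders, and in a real dbt project this transformation would live in a model file rather than a script.

```python
# Hedged illustration only: materializing a staging table and adding a clustering
# key on Snowflake via snowflake-connector-python. Credentials and object names
# are placeholders; in practice this logic would be a dbt model.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",    # placeholder
    user="etl_user",         # placeholder
    password="***",          # placeholder -- use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Staging layer: light cleanup over a raw ingest table.
    cur.execute("""
        CREATE OR REPLACE TABLE stg_orders AS
        SELECT order_id,
               customer_id,
               TRY_TO_DATE(order_date) AS order_date,
               amount::NUMBER(12, 2)   AS amount
        FROM raw.orders
        WHERE order_id IS NOT NULL
    """)
    # Performance lever mentioned in the posting: a clustering key on a
    # commonly filtered column.
    cur.execute("ALTER TABLE stg_orders CLUSTER BY (order_date)")
finally:
    conn.close()
```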
Posted 1 week ago
0.0 - 2.0 years
1 - 4 Lacs
Hyderabad
Work from Office
- Phantom/SOAR and Python experience with good development skills
- Good in ITIS, with an understanding of building playbooks in an on-prem, multi-site clustered Splunk environment
- Practical experience in monitoring and tuning playbooks and use cases
- Good knowledge of creating custom apps with dashboards/reports/alerts, and a demonstrated understanding of Splunk apps
- Ownership of delivery for small to large Splunk onboarding projects
- Ability to automate repetitive tasks and reduce noise
- Implementing and supporting Phantom, with good Python, Red Hat and Windows experience
Location: Pan India
Posted 1 week ago
3.0 - 7.0 years
12 - 16 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
- Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions
- Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another
- Document solution architectures, design decisions, implementation details, and lessons learned
- Stay up to date with the latest trends and advancements in AI, foundation models, and large language models
- Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation
Preferred technical and professional experience
- Experience and working knowledge in COBOL & Java would be preferred
- Experience in code generation, code matching & code translation leveraging LLM capabilities would be a big plus
- Demonstrate a growth mindset to understand clients' business processes and challenges
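The IBM posting above mentions building enterprise search applications such as Elasticsearch. As a small, hedged illustration (not IBM's implementation), the sketch below issues a basic full-text query against Elasticsearch's stable REST search endpoint using the requests package; the cluster host, index, and field names are assumptions for the example.

```python
# Hedged illustration: a basic full-text query against Elasticsearch's REST
# search API using `requests`. Host, index and field names are placeholders.
import requests

ES_HOST = "http://localhost:9200"   # placeholder cluster address
INDEX = "support-tickets"           # placeholder index name

query = {
    "query": {
        "match": {"description": "payment failure"}   # assumed text field
    },
    "size": 5,
}

resp = requests.post(f"{ES_HOST}/{INDEX}/_search", json=query, timeout=10)
resp.raise_for_status()

# Print relevance score and a snippet of each matching document.
for hit in resp.json()["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("description", "")[:80])
```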
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Company Overview Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people’s lives. With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign’s Intelligent Agreement Management platform, companies can create, commit, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM). What you'll do You will play an important role in applying and implementing effective machine learning solutions, with a significant focus on Generative AI. You will work with product and engineering teams to contribute to data-driven product strategies, explore and implement GenAI applications, and deliver impactful insights. This position is an individual contributor role reporting to the Senior Manager, Data Science. Responsibility Experiment with, apply, and implement DL/ML models, with a strong emphasis on Large Language Models (LLMs), Agentic Frameworks, and other Generative AI techniques to predict user behavior, enhance product features, and improve automation Utilize and adapt various GenAI techniques (e.g., prompt engineering, RAG, fine-tuning existing models) to derive actionable insights, generate content, or create novel user experiences Collaborate with product, engineering, and other teams (e.g., Sales, Marketing, Customer Success) to build Agentic system to run campaigns at-scale Conduct in-depth analysis of customer data, market trends, and user insights to inform the development and improvement of GenAI-powered solutions Partner with product teams to design, administer, and analyze the results of A/B and multivariate tests, particularly for GenAI-driven features Leverage data to develop actionable analytical insights & present findings, including the performance and potential of GenAI models, to stakeholders and team members Communicate models, frameworks (especially those related to GenAI), analysis, and insights effectively with stakeholders and business partners Stay updated on the latest advancements in Generative AI and propose their application to relevant business problems Complete assignments with a sense of urgency and purpose, identify and help resolve roadblocks, and collaborate with cross-functional team members on GenAI initiatives Job Designation Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: Minimum 2 days per week; may vary by team but will be weekly in-office expectation) Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law. 
What you bring Basic Bachelor's or Master's degree in Computer Science, Physics, Mathematics, Statistics, or a related field 3+ years of hands-on experience in building data science applications and machine learning pipelines, with demonstrable experience in Generative AI projects Experience with Python for research and software development purposes, including common GenAI libraries and frameworks Strong knowledge of common machine learning, deep learning, and statistics frameworks and concepts, with a specific understanding of Large Language Models (LLMs), transformer architectures, and their applications Experience with or exposure to prompt engineering, and utilizing pre-trained LLMs (e.g., via APIs or open-source models) Experience with large datasets, distributed computing, and cloud computing platforms (e.g., AWS, Azure, GCP) Proficiency with relational databases (e.g., SQL) Experience in training, evaluating, and deploying machine learning models in production environments, with an interest in MLOps for GenAI Proven track record in contributing to ML/GenAI projects from ideation through to deployment and iteration Experience using machine learning and deep learning algorithms like CatBoost, XGBoost, LGBM, Feed Forward Networks for classification, regression, and clustering problems, and an understanding of how these can complement GenAI solutions Experience as a Data Scientist, ideally in the SaaS domain with some focus on AI-driven product features Preferred PhD in Statistics, Computer Science, or Engineering with specialization in machine learning, AI, or Statistics, with research or projects in Generative AI 5+ years of prior industry experience, with at least 1-2 years focused on GenAI applications Previous experience applying data science and GenAI techniques to customer success, product development, or user experience optimization Hands-on experience with fine-tuning LLMs or working with RAG methodologies Experience with or knowledge of experimentation platforms (like DataRobot) and other AI related ones (like CrewAI) Experience with or knowledge of the software development lifecycle/agile methodology, particularly in AI product development Experience with or knowledge of Github, JIRA/Confluence Contributions to open-source GenAI projects or a portfolio of GenAI related work Programming Languages like Python, SQL; familiarity with R Ability to break down complex technical concepts (including GenAI) into simple terms to present to diverse, technical, and non-technical audiences Life at Docusign Working here Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what’s right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you’ll be loved by us, our customers, and the world in which we live. Accommodation Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. 
If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process please get in touch with our Talent organization at taops@docusign.com for assistance. Applicant and Candidate Privacy Notice Show more Show less
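The Docusign role above includes designing, administering, and analyzing A/B and multivariate tests. A compact, hedged sketch of reading out a two-variant conversion test with statsmodels follows; the conversion counts are invented and stand in for a control arm versus a GenAI-assisted variant.

```python
# Illustrative A/B test readout: two-proportion z-test on made-up conversion data.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: control vs. a GenAI-assisted variant.
conversions = np.array([410, 468])      # users who converted in each arm
exposures = np.array([10_000, 10_000])  # users assigned to each arm

stat, p_value = proportions_ztest(conversions, exposures)
lift = conversions[1] / exposures[1] - conversions[0] / exposures[0]

print(f"absolute lift: {lift:.2%}, z = {stat:.2f}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
else:
    print("no significant difference detected")
```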
Posted 1 week ago
8.0 - 10.0 years
12 - 16 Lacs
Noida
Work from Office
We are seeking an experienced Lead Database Administrator ( DBA ) with a strong background in Oracle, MySQL, and AWS to join our growing team. In this role, you will be responsible for overseeing the management, performance, and security of our database environments, ensuring high availability and optimal performance. You will lead a team of DBA s and work collaboratively with various departments to support database needs across the organization. Key Responsibilities: Database Administration: Oversee and manage Oracle, MySQL, and cloud-based databases (AWS RDS, Aurora, etc.) in a production environment. Ensure high availability, performance tuning, backup/recovery, and security of all databases. Perform regular health checks, performance assessments, and troubleshooting for all database platforms. Implement database changes, patches, and upgrades in a controlled manner, ensuring minimal downtime. Cloud Infrastructure Management: Design, implement, and manage database systems on AWS, including AWS RDS, Aurora, and EC2-based database instances. Collaborate with cloud engineers to optimize database services and architecture for cost, performance, and scalability. Team Leadership: Lead and mentor a team of DBAs, providing guidance on database best practices and technical challenges. Manage and prioritize database-related tasks and projects to ensure timely completion. Develop and enforce database standards, policies, and procedures. Database Optimization: Monitor database performance and optimize queries, indexes, and database structures to ensure efficient operations. Tune databases to ensure high availability and fast query response times. Security and Compliance: Implement and maintain robust database security practices, including access controls, encryption, and audit logging. Ensure databases comply with internal and external security standards, regulations, and policies. Disaster Recovery Backup: Design and maintain disaster recovery plans, ensuring business continuity through regular testing and validation of backup and recovery processes. Automate database backup processes and ensure backups are performed regularly and correctly. Collaboration Support: Work closely with development teams to provide database support for application development, data modeling, and schema design. Provide 24/7 on-call support for critical database issues or emergencies. Required Skills Qualifications: Technical Expertise: Extensive experience in Oracle and MySQL database administration (version 11g and higher for Oracle, 5.x and higher for MySQL). Strong understanding of AWS cloud services related to database management, particularly AWS RDS, Aurora, EC2, and Lambda. Experience in database performance tuning, query optimization, and indexing. Proficient in backup and recovery strategies, including RMAN for Oracle and MySQL backup techniques. Solid understanding of database replication, clustering, and high-availability technologies. Leadership Management: Proven experience leading and mentoring teams of DBAs. Strong project management skills, with the ability to manage multiple database-related projects simultaneously. Excellent problem-solving and analytical skills. Security: Knowledge of database security best practices, including encryption, auditing, and access control. Experience implementing compliance frameworks such as PCI DSS, GDPR, or HIPAA for database systems. Additional Skills: Strong scripting skills (e.g., Shell, Python, Bash) for automation and database maintenance tasks. 
Experience with database monitoring tools (e.g., Oracle Enterprise Manager, MySQL Workbench, CloudWatch). Familiarity with containerization technologies (Docker, Kubernetes) and CI/CD pipelines for database deployments is a plus. Education Certifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Oracle Certified Professional (OCP) and MySQL certifications preferred. AWS Certified Database - Specialty or similar AWS certification is a plus. Preferred Skills: Familiarity with other database technologies (SQL Server, PostgreSQL, NoSQL). Experience with DevOps practices and tools for database automation and infrastructure-as-code (e.g., Terraform, CloudFormation).
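The Lead DBA posting above calls for scripting skills (Python/Bash) to automate backup and maintenance tasks on AWS. As a hedged boto3 sketch, the snippet below lists the most recent automated snapshot for an RDS instance so its age can be audited; the instance identifier and age threshold are placeholders, and AWS credentials/region are assumed to come from the environment or an IAM role.

```python
# Hedged illustration: auditing recent automated RDS snapshots with boto3.
# Instance identifier and threshold are placeholders for the example.
from datetime import datetime, timedelta, timezone

import boto3

INSTANCE_ID = "prod-orders-db"   # placeholder RDS instance identifier
MAX_AGE_HOURS = 26               # alert if the newest snapshot is older than this

rds = boto3.client("rds")
resp = rds.describe_db_snapshots(DBInstanceIdentifier=INSTANCE_ID,
                                 SnapshotType="automated")

# Keep only snapshots that already have a creation timestamp, newest first.
snapshots = sorted(
    (s for s in resp["DBSnapshots"] if "SnapshotCreateTime" in s),
    key=lambda s: s["SnapshotCreateTime"],
    reverse=True,
)

if not snapshots:
    print(f"No completed automated snapshots found for {INSTANCE_ID}")
else:
    newest = snapshots[0]
    age = datetime.now(timezone.utc) - newest["SnapshotCreateTime"]
    print(f"Newest snapshot: {newest['DBSnapshotIdentifier']} "
          f"({age.total_seconds() / 3600:.1f} hours old, status {newest['Status']})")
    if age > timedelta(hours=MAX_AGE_HOURS):
        print("ALERT: automated backup appears to be overdue")
```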
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Why Ryan? Global Award-Winning Culture Flexible Work Environment Generous Paid Time Off World-Class Benefits and Compensation Rapid Growth Opportunities Company Sponsored Two-Way Transportation Exponential Career Growth The Systems Administrator II maintains and manages server computing and storage platforms, including, but not limited to, installation, configuration, preventive maintenance, operation, and problem-resolution activities. Duties and Responsibilities, as they align to Ryan’s Key Results People Create a positive team experience. Receives cross training from other members of the Information Technology department. Client Proactive work status update to US / India liaison. Respond to client inquiries and requests from tax authorities. Value Proactively monitor and support all systems equipment and software to ensure high availability, including: servers, tape backups, UPS and printers. Maintain and support various Ryan applications, OS hardening, virus management services, server clustering. Supports critical server applications, including Microsoft® Exchange, mail gateways, and Web proxies. Maintains computer security. Maintains server computers, storage systems, and tape backup systems with current BIOS/firmware. Maintains server operating systems with current security patches. Restores user files as required. Contributes to the maintenance of the Information Technology department's disaster-recovery plan. Maintains server asset inventory and appropriate documentation. Cross trains other members of the Information Technology department. Performs on-call duties on a rotational basis. Contributes to efficiency improvements through process automation. Assists with other projects as needed. Support and assist the Help Desk and act as PC specialist when needed. Performs other duties as assigned. Education And Experience High-school diploma or general equivalency diploma (GED), and three to five years related systems administrator experience. Computer Skills To perform this job successfully, an individual must have basic knowledge of Microsoft® Word and Access and intermediate knowledge of Microsoft® Excel, Outlook, Internet navigation and research, systems administration tools, and scripting and automation tools. Certificates And Licenses Valid driver's license required. Windows Server, Microsoft® Exchange, EMC Storage and Active Directory certifications preferred. Supervisory Responsibilities This position has no supervisory responsibilities. Work Environment Standard indoor working environment. Occasional long periods of sitting while working at computer. Position requires regular interaction with employees at all levels of the Firm and interface with external vendors as necessary. Independent travel requirement: up to 25%. Equal Opportunity Employer: disability/veteran Show more Show less
Posted 1 week ago
The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.
Here are 5 major cities in India actively hiring for clustering roles: 1. Bangalore 2. Pune 3. Hyderabad 4. Mumbai 5. Delhi
The average salary range for clustering professionals in India varies based on experience levels. Entry-level positions may start at around INR 3-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-20 lakhs per annum.
In the field of clustering, a typical career path may look like: - Junior Data Analyst - Data Scientist - Senior Data Scientist - Tech Lead
Apart from expertise in clustering, professionals in this field are often expected to have skills in: - Machine Learning - Data Analysis - Python/R programming - Statistics
Here are 25 interview questions for clustering roles:
- What is clustering and how does it differ from classification? (basic)
- Explain the K-means clustering algorithm. (medium)
- What are the different types of distance metrics used in clustering? (medium)
- How do you determine the optimal number of clusters in K-means clustering? (medium)
- What is the Elbow method in clustering? (basic)
- Define hierarchical clustering. (medium)
- What is the purpose of clustering in machine learning? (basic)
- Can you explain the difference between supervised and unsupervised learning? (basic)
- What are the advantages of hierarchical clustering over K-means clustering? (advanced)
- How does the DBSCAN clustering algorithm work? (medium)
- What is the curse of dimensionality in clustering? (advanced)
- Explain the concept of silhouette score in clustering. (medium)
- How do you handle missing values in clustering algorithms? (medium)
- What is the difference between agglomerative and divisive clustering? (advanced)
- How would you handle outliers in clustering analysis? (medium)
- Can you explain the concept of cluster centroids? (basic)
- What are the limitations of K-means clustering? (medium)
- How do you evaluate the performance of a clustering algorithm? (medium)
- What is the role of inertia in K-means clustering? (basic)
- Describe the process of feature scaling in clustering. (basic)
- How does the GMM algorithm differ from K-means clustering? (advanced)
- What is the importance of feature selection in clustering? (medium)
- How can you assess the quality of clustering results? (medium)
- Explain the concept of cluster density in DBSCAN. (advanced)
- How do you handle high-dimensional data in clustering? (medium)
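Several of the questions above (the K-means algorithm, the Elbow method, inertia, and the silhouette score) lend themselves to a worked example. The hedged scikit-learn sketch below fits K-means for a range of cluster counts on synthetic blob data and prints the inertia used by the Elbow method alongside the silhouette score; the dataset parameters are arbitrary.

```python
# Illustrative K-means sketch: inertia for the Elbow method and silhouette
# scores for a range of k, on synthetic blob data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.0, random_state=42)

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    sil = silhouette_score(X, km.labels_)
    print(f"k={k}: inertia={km.inertia_:.1f}, silhouette={sil:.3f}")

# The "elbow" in inertia and the peak silhouette score both typically point
# to k=4 here, matching the number of generated blobs.
```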
As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!