Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Internal Audit - Data Strategy - Associate – Hyderabad What We Do Internal Audit’s mission is to independently assess the firm’s internal control structure, including the firm’s governance processes and controls, risk management, capital and anti-financial-crime framework, and to raise awareness of control risk and monitor the implementation of management’s control measures. In doing so, Internal Audit communicates and reports on the effectiveness of the firm’s governance, risk management and controls that mitigate current and evolving risk; raises awareness of control risk; assesses the firm’s control culture and conduct risks; and monitors management’s implementation of control measures. Goldman Sachs Internal Audit comprises individuals from diverse backgrounds including chartered accountants, developers, risk management professionals, cybersecurity professionals, and data scientists. We are organized into global teams comprising business and technology auditors to cover all the firm’s businesses and functions, including securities, investment banking, consumer and investment management, risk management, finance, cyber-security and technology risk, and engineering. Who We Look For Goldman Sachs Internal Auditors demonstrate a strong risk, control and analytical mindset, exercise professional skepticism, and effectively challenge the status quo on risks and control measures with management. We look for individuals who enjoy learning about audit, businesses, and processes; bring an innovative and creative mindset to adapting analytical techniques that enhance the audit function; develop teamwork and build relationships; and are able to evolve and thrive in a fast-paced global environment. Your Impact As part of the third line of defense, you will be involved in independently assessing the firm’s overall control environment and its effectiveness as it relates to current and emerging risks, and communicating the results to local/global management. 
In doing so, you will be supporting the provision of independent, objective and timely assurance around the firm’s internal control structure, thereby supporting the Audit Committee, Board of Directors and Risk Committee in fulfilling their oversight responsibilities. We are looking for a strong data scientist, passionate about using data to challenge the norm, to join our Embed Data Analytics team. The candidate will work closely with the audit teams to build innovative and reusable analytical tools that will help make audit testing more efficient and provide meaningful insights into the firm’s control environment. This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level. The role shall be performed within a professional office environment. Goldman Sachs has health and safety policies that are available for all workers upon request. There are no specific health risks associated with the role. 
Responsibilities Execute on the DA strategy developed by IA management within the context of audit responsibilities, such as risk assessment, audit planning, creation of reusable tools and providing innovative solutions to complex problems. Partner with audit teams to help identify risks associated with businesses, facilitate strategic data sourcing, and develop innovative solutions to increase the efficiency and effectiveness of audit testing. Build production-ready analytical tools to automate repeatable and reusable processes within IA. Build and manage relationships and communications with audit team members. Basic Qualifications 1-3 years of experience and a minimum of a Bachelor’s degree in Computer Science, Math, or Statistics. Experience with RDBMS/SQL. Proficiency in programming languages such as Python, Java, or C++. Knowledge of basic statistics, including descriptive statistics, data distribution models, time series analysis, correlation, and regression, and their application to data. Strong team player with excellent communication skills (written and oral). 
Ability to communicate what is relevant and important in a clear and concise manner and ability to handle multiple tasks. Strong contributing member of the Data Science team, helping build analytical capabilities for the Internal Audit Division. Driven and motivated, constantly taking initiative to improve performance. Preferred Qualifications Experience with advanced data analytics tools and techniques. Familiarity with text analytics and NLP using Python. Familiarity with machine learning algorithms and exposure to supervised and unsupervised learning - Linear/Logistic Regression, SVM, Random Forest and Boosting, Clustering and pattern recognition techniques. Experience with analytical/statistical programs such as SAS, SPSS, and R. Experience with visualization tools (Spotfire, QlikView or Tableau) is a plus. Creativity/Innovation, i.e., the ability to create new ways to improve current processes and develop practical solutions that add value to the department. About Goldman Sachs At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers.
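Two of the statistical basics this listing names, Pearson correlation and simple linear regression, can be sketched in a few lines of plain Python. The data below is a made-up toy series, not anything from the listing:

```python
# Minimal sketch: Pearson correlation and least-squares linear regression.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def linreg(xs, ys):
    # Returns (slope, intercept) of the least-squares fit y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]          # roughly y = 2x, toy data
r = pearson(xs, ys)
slope, intercept = linreg(xs, ys)
```

In audit analytics work this kind of check might flag, say, an unexpectedly weak relationship between two control metrics that should move together.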
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: We are seeking a highly motivated and enthusiastic Senior Data Scientist with 5-8 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation. Key Responsibilities Develop and implement machine learning models and algorithms. Work closely with project stakeholders to understand requirements and translate them into deliverables. Utilize statistical and machine learning techniques to analyze and interpret complex data sets. Stay updated with the latest advancements in AI/ML technologies and methodologies. Collaborate with cross-functional teams to support various AI/ML initiatives. Qualifications Bachelor’s degree in Computer Science, Data Science, or a related field. Strong understanding of machine learning, deep learning, and Generative AI concepts. Preferred Skills Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, and the deep learning stack using Python. Experience with cloud infrastructure for AI/ML on AWS (SageMaker, QuickSight, Athena, Glue). Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data (ETL/ELT), including indexing, search, and advanced retrieval patterns. Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn, OpenCV, SciPy), Glue crawler, ETL. Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight). Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch). Experience with version control systems (e.g., Git, CodeCommit). Strong knowledge and experience in Generative AI/LLM-based development. Strong experience working with key LLM model APIs (e.g. AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g. LangChain, LlamaIndex). 
Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets. Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval. Experience with RAG concepts and fundamentals (VectorDBs, AWS OpenSearch, semantic search, etc.). Expertise in implementing RAG systems that combine knowledge bases with Generative AI models. Knowledge of training and fine-tuning Foundation Models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs. Good To Have Skills Knowledge and experience in building knowledge graphs in production. Understanding of multi-agent systems and their applications in complex problem-solving scenarios. Equal Opportunity Employer Pentair is an Equal Opportunity Employer. With our expanding global presence, cross-cultural insight and competence are essential for our ongoing success. We believe that a diverse workforce contributes different perspectives and creative ideas that enable us to continue to improve every day.
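Two of the retrieval ideas named above, fixed-size text chunking and embedding-based semantic search, can be sketched without any framework. The tiny hand-made vectors below stand in for real embeddings from a model served via, e.g., AWS Bedrock; they are illustrative only, and a real system would use a vector database rather than a Python list:

```python
# Hedged sketch: overlapping-window chunking plus cosine-similarity retrieval.
from math import sqrt

def chunk(text, size=40, overlap=10):
    # Overlapping character windows -- a common baseline chunking scheme.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, index, k=1):
    # index: list of (chunk_text, embedding) pairs, as a VectorDB would hold.
    ranked = sorted(index, key=lambda p: cosine(query_vec, p[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

index = [("refund policy", [0.9, 0.1, 0.0]),
         ("shipping times", [0.1, 0.8, 0.2]),
         ("account setup", [0.0, 0.2, 0.9])]
top = retrieve([0.85, 0.15, 0.05], index, k=1)   # query vector near "refund policy"
```

In a full RAG pipeline the retrieved chunks would then be passed as context to the generative model.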
Posted 1 week ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Content Writer (2–5 Years Experience) Location: On-site Employment Type: Full-Time About the Role: We are seeking a highly skilled Content Writer with 2 to 5 years of experience and a strong background in software development. This role is ideal for someone who understands the tech landscape, can write across multiple content formats, knows how to build and scale content for performance and visibility, and can work independently and meet deadlines without compromising quality. Key Responsibilities: Create high-quality, technically sound content including blogs, how-to guides, tutorials, landing pages, and case studies. Translate complex software development topics into clear, engaging, and accessible content for various audience segments. Research and plan content across multiple blog categories, such as thought leadership, evergreen content, product comparison, and trending tech. Develop and execute content scaling plans to support SEO, audience growth, and lead generation. Collaborate closely with developers, product teams, and marketing stakeholders using collaborative tools like Notion, Trello, or Slack. Conduct deep research using Google search operators and other advanced techniques to ensure information accuracy and content depth. Implement basic NLP and SEO optimization strategies, including keyword clustering, semantic enrichment, and topic modeling, to improve content visibility. Manage tight content calendars, delivering well-researched and polished pieces under strict deadlines. Required Skills & Qualifications: 2–5 years of proven experience as a content writer, preferably in a tech or software-focused organization. Strong understanding of software development, including SDLC, programming concepts, and developer tools. Hands-on experience with blog writing across categories (e.g., technical deep-dives, how-to guides, listicles, comparisons, opinion pieces). 
Ability to build scalable content plans that align with SEO and content marketing strategies. Demonstrated ability to meet deadlines and manage multiple content assignments simultaneously. Familiarity with collaborative tools such as Notion, Jira, Trello, Confluence, or similar. Proficiency in online research methodologies, including using Google operators for accurate and efficient data sourcing. Working knowledge of NLP techniques for blog ranking, such as entity-based optimization, semantic keyword use, and featured snippet strategies. Strong editing, proofreading, and content QA skills. Preferred Qualifications: Prior experience writing for B2B SaaS, developer tools, or technical products. Familiarity with SEO tools like Ahrefs, SEMrush, or Surfer SEO. Understanding of content performance metrics via Google Analytics or similar tools. To Apply: Send your resume, writing portfolio, and a brief note on your approach to technical content writing to hr@telepathyinfotech.com.
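The "keyword clustering" idea mentioned above can be sketched as grouping search phrases by token overlap (Jaccard similarity). Real SEO tooling such as Ahrefs or SEMrush uses much richer signals (search-result overlap, embeddings); this is only a toy baseline with made-up phrases:

```python
# Toy keyword clustering: greedily group phrases whose word sets overlap.
def jaccard(a, b):
    a, b = set(a.split()), set(b.split())
    return len(a & b) / len(a | b)

def cluster_keywords(phrases, threshold=0.3):
    clusters = []
    for p in phrases:
        for c in clusters:
            if jaccard(p, c[0]) >= threshold:  # compare to the cluster's seed phrase
                c.append(p)
                break
        else:
            clusters.append([p])               # no match: start a new cluster
    return clusters

phrases = ["python tutorial", "python tutorial for beginners",
           "best code editor", "code editor comparison"]
groups = cluster_keywords(phrases)   # two clusters: tutorials vs. editors
```

Each resulting cluster would then map to one pillar page or article targeting the whole phrase group.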
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Location: Mumbai / Bangalore / Gurgaon (candidates with experience in solution design / network design / warehouse design) The Solutions Design team sits within Delhivery's end-to-end supply chain division and is responsible for two primary activities. - Understand the complex logistical challenges of potential customers (across multiple segments including Auto, Consumer Durable, E-commerce, Industrial, D2C, and Quick Commerce), conduct data analysis and number crunching, and provide a data-backed 360-degree state-of-the-art supply chain solution covering WH footprint, network design, vehicle selection, warehousing, and omnichannel distribution. - Undertake gap analyses to identify potential avenues to improve the profitability of existing accounts. As a Manager, you will be an integral part of the Enterprise Solution - Analytics team and will lead the analysis & generation of actionable insights to solve business problems. Specific day-to-day responsibilities will include, but will not be limited to: 1. Drive the analytics projects of supply chain solutions/network design/warehousing design - modelling of historical transaction data to arrive at optimal, pioneering, tech-enabled end-to-end supply chain solutions in terms of network footprint and mode of transportation to reduce cost and time to the customer 2. Explore data, conduct analysis and provide actionable recommendations to the ops process and clients independently 3. Conceptualization & creation of dashboards and automated tools as per business requirements 4. Identifying & tracking the business KPIs & associated metrics, highlighting progress and proactively identifying issues/improvement areas 5. The candidate is required to work with cross-functional teams, from BD to engineering and operations, to stitch together solutions that solve customers' needs. The candidate must have the drive to innovate and question the norm, with a strong entrepreneurial drive and a sense of ownership 6. 
The candidate is expected to be highly analytical and data-driven while being a hustler and problem solver, must thrive in a high-performance environment, and should possess strong relationship-building and stakeholder management skills. 7. Work with leadership on multiple priorities and define the growth and strategic trajectory. 8. Drive profitable growth across service lines of E2E while conducting PnL analysis of existing clients. Requirements: 1. Expertise in business analytics tools & statistical programming languages - SQL, Excel, R/Tableau 2. Excellent problem-solving and analytical skills - ability to develop hypotheses, understand and interpret data within the context of the product/business, solve problems and distill data into actionable recommendations. 3. Strong communication skills with the ability to confidently work with cross-functional teams 4. Intellectual and analytical curiosity - an initiative to dig into the why, what & how. 5. Strong number crunching and quantitative skills. 6. Advanced knowledge of MS Excel and PowerPoint. 7. Previous experience working in an analytics role with experience in statistical concepts, clustering (K-Means clustering), data modelling, predictive analysis, and forecasting 8. Ability to think of solutions through the first-principles method. 9. Understanding of logistics services is desirable 10. A sense of understanding of business finance will be an added advantage
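The K-Means clustering called out in the requirements is a natural fit for the network-design work described above: choosing k warehouse locations that sit near clusters of customer demand points. A minimal sketch with made-up coordinates (not Delhivery data):

```python
# Minimal K-Means: assign points to nearest center, move centers to cluster means.
def kmeans(points, centers, iters=20):
    for _ in range(iters):
        # Assignment step: each demand point goes to its nearest center.
        groups = [[] for _ in centers]
        for x, y in points:
            d = [(x - cx) ** 2 + (y - cy) ** 2 for cx, cy in centers]
            groups[d.index(min(d))].append((x, y))
        # Update step: each center moves to the mean of its assigned points.
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else c
            for g, c in zip(groups, centers)
        ]
    return centers

# Two clear demand clusters; the two "warehouses" should converge onto them.
demand = [(1, 1), (1.5, 2), (2, 1), (10, 10), (10, 11), (11, 10)]
hubs = kmeans(demand, centers=[(0, 0), (12, 12)])
```

In practice the distance metric would be road distance or transit cost rather than squared Euclidean distance, and initialization would be randomized, but the assign-then-update loop is the same.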
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone. About VOIS India In 2009, VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more. Job Description Big Data Handling: Passion and attitude to learn new data practices and tools (ingestion, transformation, governance, security & privacy), both on-prem and in the cloud (AWS preferable). Influences and contributes to innovative ways of unlocking value through companywide and external data. Diagnostic Models: Experience with diagnostic systems using decision theory and causal models (including tools like probability, DAGs, ADMGs, deterministic SME, etc.) to predict the effects of an action and improve insight-led decisions. Able to productize the diagnostic systems built for reuse. 
Predictive & Prescriptive Analytics Models: Expert in AI solutions - ML, DL, NLP, ES, RL, etc. Should be able to build robust prescriptive learning systems that are scalable and real-time, and determine the "Next Best Action" following prescriptive analytics. Autonomous Cognitive Systems: Drive autonomous system utility and continuously improve precision by creating a stable learning environment. Should be able to build intelligent autonomous systems that prescribe proactive actions based on ML predictions and solicit feedback from the support functions with minimal human involvement. Big Data Tech, Environments & Frameworks: Advanced applications of CNNs, RNNs, MLPs, and deep learning. Excellent application of machine learning and deep learning packages like TensorFlow, PyTorch, scikit-learn, NumPy, pandas, statsmodels, Theano, XGBoost, etc. Demonstrated expertise in deep learning algorithms/frameworks. At least one certification in AWS is preferred. Programming: Python, R, SQL Frameworks: TensorFlow, Keras, Scikit-learn Visualization: Tableau, Power BI Cloud: AWS, Azure Statistical Modeling: Regression, classification, clustering, time series Soft Skills: Communication, stakeholder management, problem-solving VOIS Equal Opportunity Employer Commitment India VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics. 
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 10 Best Workplaces for Millennials, Equity, and Inclusion, Top 50 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 10th Overall Best Workplaces in India by the Great Place to Work Institute in 2024. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
Posted 1 week ago
12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
JD: Windows & VMware Specialist We are looking for a highly skilled Windows & VMware Specialist – L4 (Lead/Admin) to join our IT Infrastructure team. This is a customer-facing role that requires strong interpersonal communication, technical leadership, and advanced troubleshooting and analytical skills. The ideal candidate will lead complex support scenarios, drive operational excellence, and ensure high availability across Windows and VMware platforms. Key Responsibilities: Lead the administration and lifecycle management of Windows Server infrastructure and VMware vSphere environments. Serve as the technical lead in critical incidents, ensuring timely resolution and customer satisfaction. Act as a primary technical point of contact in customer-facing discussions for system performance, upgrades, and issue resolution. Mentor and guide junior engineers, ensuring best practices are followed in operations and incident handling. Plan, implement, and support Windows Server (2012/2016/2019/2022) and VMware (vCenter, ESXi, DRS, HA, vMotion) environments. Perform root cause analysis (RCA) for major incidents and lead the development of preventive measures. Ensure patching, upgrades, backups, and monitoring are carried out with minimal impact to business operations. Develop and maintain technical documentation, SOPs, and architectural diagrams. Ensure compliance with security policies, hardening guidelines, and internal audit requirements. Required Skills & Qualifications: 12+ years of enterprise IT experience, with 8+ years in a lead or senior-level role in Windows and VMware administration. Deep hands-on expertise in: Windows Server administration (AD, GPO, DNS, DHCP, Failover Clustering). VMware vSphere, including ESXi, vCenter, snapshots, DRS, and HA. Strong scripting and automation skills using PowerShell or equivalent. Experience with monitoring, backup, and disaster recovery tools like Veeam, SolarWinds, vRealize, or equivalent. 
Solid understanding of networking fundamentals (TCP/IP, VLANs, firewalls, VPN). Excellent customer-facing communication, problem-solving, and collaboration skills. Familiarity with ITIL practices, especially incident, change, and problem management. Preferred Skills & Certifications: VMware Certified Professional (VCP) or Microsoft Windows Server certification (e.g., AZ-800/AZ-801 or MCSA). Experience in hybrid environments with cloud integration (Azure/AWS). Exposure to infrastructure automation or infrastructure-as-code (IaC) tools like Ansible, Terraform. Knowledge of compliance frameworks such as ISO 27001 or NIST is an added advantage.
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Working as an AI/ML Engineer at Navtech, you will: Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks. Clean, preprocess, and analyze large datasets to extract meaningful insights and features. Work closely with data engineers to develop scalable and reliable data pipelines. Experiment with different algorithms and techniques to improve model performance. Monitor and maintain production ML models, including retraining and model drift detection. Collaborate with software engineers to integrate ML models into applications and services. Document processes, experiments, and decisions for reproducibility and transparency. Stay current with the latest research and trends in machine learning and AI. Who Are We Looking for Exactly? 2–4 years of hands-on experience in building and deploying ML models in real-world applications. Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar. Experience with data preprocessing, feature engineering, and model evaluation techniques. Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc. Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn. Familiarity with version control (Git) and basic software engineering practices. You consistently demonstrate strong verbal and written communication skills as well as strong analytical and problem-solving abilities. You should have a master’s or bachelor’s (BS) degree in Computer Science, Software Engineering, IT, Technology Management or a related field, with education throughout in English medium. We’ll REALLY love you if you: Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.). Have knowledge of GenAI prompting and hosting of LLMs. Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK). 
Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.). Have exposure to deep learning and neural network architectures. Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker). Why Navtech? Performance review and appraisal twice a year. Competitive pay package with additional bonus & benefits. Work with US, UK & Europe based industry-renowned clients for exponential technical growth. Medical insurance cover for self & immediate family. Work with a culturally diverse team from different geographies. About Us Navtech is a premier IT software and services provider. Navtech’s mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US & Hyderabad. We’re a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner. You will join a team of talented developers, quality engineers, and product managers whose mission is to impact more than 100 million people across the world with technological services by the year 2030. Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.
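One responsibility this listing names, model drift detection, can be sketched with the Population Stability Index (PSI): compare a production feature's distribution against the training baseline, bin by bin. The bin count, value range, and the usual "PSI above ~0.25 means major shift" threshold are conventions, and the data below is synthetic; production setups typically lean on MLOps tooling instead:

```python
# Hedged sketch of drift detection via the Population Stability Index.
from math import log

def psi(expected, actual, bins=4, lo=0.0, hi=1.0, eps=1e-4):
    width = (hi - lo) / bins
    def frac(sample, i):
        left, right = lo + i * width, lo + (i + 1) * width
        # Count values in bin i; the last bin also includes the upper edge.
        n = sum(1 for v in sample
                if left <= v < right or (i == bins - 1 and v == hi))
        return max(n / len(sample), eps)   # eps avoids log(0)
    return sum((frac(actual, i) - frac(expected, i)) *
               log(frac(actual, i) / frac(expected, i)) for i in range(bins))

train   = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]      # training baseline
stable  = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85]  # similar spread
shifted = [0.85, 0.9, 0.92, 0.95, 0.97, 0.99, 0.8, 0.88]    # mass moved right
```

Here `psi(train, shifted)` comes out far larger than `psi(train, stable)`, which is the signal that would trigger retraining.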
Posted 1 week ago
5.0 - 8.0 years
6 - 10 Lacs
Pune
Work from Office
Job ID: 199776 Required Travel: Minimal Managerial: No Location: India - Pune (Amdocs Site) Who are we? Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com In one sentence The DB specialist has ultimate responsibility for physical and applicative database administration activities, supporting the end-to-end life cycle of the product from a database perspective. What will your job look like? You will support the following levels: Physical - responsible for the physical and technically oriented aspects, e.g. storage, security, networking and more; Application - you will handle all application-related issues (e.g. queries, users, embedded SQLs, etc.). You will ensure database resources are sized accurately and a design strategy is developed to make sure that the database is maintained at a healthy size. You will ensure availability and performance of multi-database and application environments with very large volumes and sizes. You will perform routine DBA tasks like database maintenance, backups, recovery, tablespace management, upgrades, etc. You will execute periodic health checks for databases and recommend changes that should be executed in the production environment to ensure efficient performance. 
You will interact and work with multiple infra and IT teams as part of environment setup, maintenance and support. You will work closely with developers, assisting them with database structure design per business needs (e.g. indexes, constraints, integrity). All you need is... DB2 DBA on Mainframe (z/OS): DB Objects - experience in creating and modifying DB2 objects: Database, Storage Group, Tablespace, Tables, Views, Indexes, etc. Experience with DB2 commands: Stop/Start Databases, Display command, Cancel Thread, Term & Display utility, and others. Hands-on experience with DB2 utilities: Reorg and Runstats, backup and recovery of tablespaces (Copy & Recover), Repair, Load/Unload utilities. Ability to bind/rebind Plans/Packages and grant required access to Packages/Plans/Collections. Experience in database monitoring. Experience in database replication - IBM DB2 Data Propagator. DB2 administration - strong troubleshooting skills, DB2 privileges and RACF. z/OS - TSO, ISPF, JCL. Tools - DB2 Admin Tool, QMF, File Manager, ChangeMan, ESP. Why you will love this job: You will have the opportunity to work in a growing organization, with ever-growing opportunities for personal growth and one of the highest scores of employee engagement in Amdocs. You will be able to use your specific insights into a variety of projects to overcome technical challenges while continuing to deepen your area of knowledge. You will have the opportunity to work in a multinational environment for the global market leader in its field. Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce
Posted 1 week ago
7.0 - 8.0 years
6 - 10 Lacs
Gurugram
Work from Office
Job ID: 199536 Required Travel: Minimal Managerial: No Location: India - Gurgaon (Amdocs Site) Who are we? Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both the individual end user and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com In one sentence The DB specialist has ultimate responsibility for physical and applicative database administration activities, supporting the end-to-end life cycle of the product from a database perspective. What will your job look like? You will provide expert-level database problem-solving services to all stakeholders during the project lifecycle (from development through post-production) by clarifying and addressing permanent and efficient solutions to complex and unusual problems. You will provide professional DB guidance and coaching to project management and development teams, implementation groups, and customer architects - internal and external Amdocs customers, including stakeholders without database understanding. You will design, develop, configure and administer large and critical database systems, ensuring high performance, and improve highly complex code where conventional approaches do not help. You will lead and define tuning of database parameters (physical layout and memory buffers) and promote standardization per product across customers worldwide. You will design the strategy for DB working processes (e.g. 
backup & recovery) during project life cycle and translate and guide until implementation; you will lead and build team knowledge capabilities and versatility and personal skills variety You will serve as Authority in Amdocs by advanced degree of competence, technology guru demonstrating extreme dexterity and knowledge as the result of rich experience and high aptitudes in DB areas, defining DB vision and strategy in the organization, serve as ultimate level of technical escalations for critical showstopper incidents in Productions customers systems with direct business impact. You will be a consistent professional (you are the bridge builder, you set the example for others in to follow as it relates to communication, managing expectations, and building/growing partnerships). All you need is... Bachelor / B.Sc. in computer science or equivalent 7-8 years experience as a DBA and SQL knowledge in a SW company Significant experience and solid knowledge of RDBMS Experience with UNIX or UNIX variants and scripting Relevant Database Certifications are required. Experience in operating within a complex, multi-interface environment. Why you will love this job: You will have the opportunity to work in a growing organization, with ever growing opportunities for personal growth and one of the highest scores of employee engagement in Amdocs. You will be able to use your specific insights into variety of projects to overcome technical challenge while continuing to deepen your area of knowledge. You will have the opportunity to work in multinational environment for the global market leader in its field Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce
Posted 1 week ago
10.0 - 20.0 years
22 - 32 Lacs
Noida, Greater Noida
Hybrid
Skills Required: SQL DBA, Clustering, Performance Tuning, Azure
Posted 1 week ago
0 years
0 Lacs
India
Remote
AI & Machine Learning Intern 📍 Location: Remote (100% Virtual) 📅 Duration: 3 Months 💸 Stipend for Top Interns: ₹15,000 🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance) About INLIGHN TECH INLIGHN TECH is focused on delivering practical, project-driven learning experiences to help students and graduates build careers in emerging technologies. Our AI & Machine Learning Internship is designed to offer hands-on experience in building intelligent systems and solving real-world problems using data. 🚀 Internship Overview As an AI & ML Intern, you will work on projects involving machine learning models, data preprocessing, and algorithm development. This internship will equip you with the skills to apply AI techniques in various domains, including natural language processing, computer vision, and predictive analytics. 🔧 Key Responsibilities Clean and preprocess datasets for training and testing machine learning models Build, train, and evaluate ML models using Python libraries like scikit-learn, TensorFlow, PyTorch, and Keras Work on projects involving classification, regression, clustering, NLP, or image processing Analyze model performance and optimize results through hyperparameter tuning Collaborate with team members to implement AI solutions for real-world scenarios Present findings through visualizations, reports, and presentations ✅ Qualifications Pursuing or recently completed a degree in Computer Science, Data Science, Engineering, or related fields Strong foundation in Python programming and statistics Understanding of machine learning algorithms and AI concepts Familiarity with Jupyter Notebook, Pandas, NumPy, and visualization libraries like Matplotlib/Seaborn Bonus: Exposure to NLP, deep learning, or AI model deployment tools Curiosity, creativity, and a passion for solving problems with data 🎓 What You’ll Gain Hands-on experience with real datasets and applied ML projects Knowledge of industry-standard AI tools
and workflows A portfolio of AI/ML projects you can showcase to employers Internship Certificate upon successful completion Letter of Recommendation for outstanding performers Opportunity for a Full-Time Offer based on performance
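The build-train-evaluate loop described in the responsibilities above can be sketched without any ML library at all — here as a toy nearest-centroid classifier on hypothetical 1-D data (in a real project you would reach for scikit-learn or similar):

```python
# Minimal sketch of a train/evaluate loop: a nearest-centroid classifier
# on toy 1-D data. Data and labels are illustrative assumptions.

def train(points):  # points: list of (x, label)
    sums, counts = {}, {}
    for x, y in points:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}  # per-class centroid

def predict(centroids, x):
    return min(centroids, key=lambda y: abs(x - centroids[y]))

def accuracy(centroids, points):
    return sum(predict(centroids, x) == y for x, y in points) / len(points)

train_set = [(1.0, "a"), (1.2, "a"), (3.0, "b"), (3.4, "b")]
test_set = [(0.9, "a"), (3.1, "b")]
model = train(train_set)
print(accuracy(model, test_set))  # → 1.0
```

The same structure (fit on a training split, score on a held-out split) carries over directly to scikit-learn, TensorFlow or PyTorch workflows.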
Posted 1 week ago
4.0 - 9.0 years
14 - 18 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Pyramid overview A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security and Personalization rely on. Every Scientist on Target's Data Sciences team can expect to do modeling and data science, software/product development of highly performant code for model performance, and to elevate Target's culture and apply retail domain knowledge. About the role As a Lead Data Scientist, you'll influence by interacting with the Data Sciences team, Product teams, Scientist/Engineer individual contributors from other pillars, and business partners.
You will perform within the scale and scope of your role by defining solutions, beginning to identify problems to solve, and contributing to Data Sciences and Target's culture by modeling and contributing to it. You'll get the opportunity to use your expertise in one or more of the following areas: machine learning, probability theory & statistics, optimization theory, simulation, econometrics, deep learning, natural language processing or computer vision. We will look to you to own the design and implementation of an algorithmic solution (e.g., a recommendation or forecasting algorithm), including data understanding, feature engineering, model development, validation and testing, and deployment to a production environment. You'll drive development of problem statements that capture the business considerations, define metrics/measurement to validate model performance, and drive feasibility studies covering data requirements and potential solution approaches. You'll evaluate tradeoffs of simple vs. complex models/solutions in determining the right technique for a business problem, and develop and maintain a nuanced understanding of the data generated by the business, including its fundamental limitations. You'll leverage your proficiency in one or more approved programming languages (Java, Scala, Python, R) and ensure foundational programming principles (best practices, unit tests, code organization, basics of CI/CD, etc.) are followed in developing the team's products/models. You'll not only stitch together basic data pipelines for a given problem and own design and implementation of individual components within Data Science/Tech applications, but also articulate the technical strategy, the value of technology, and its impact on the business.
As you do so, you'll collaborate with engineers, scientists, and business partners/product owners to create algorithmic solutions that are performant and integrated into applications. We'll look to you to mentor and provide technical support within a team, including mentoring junior team members, and to present your work and your team's work to business partners and other Data Sciences teams. With a deeper understanding of your functional area of responsibility, you'll support agile ceremonies, collaborate with peers across multiple products, communicate and collaborate with business partners, and demonstrate an understanding of areas outside your scope of responsibility. The exciting part of retail? It's always changing! Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs. About you: 4-year degree in quantitative disciplines (Science, Technology, Engineering, Mathematics) and 6+ years of professional experience, or equivalent industry experience. Master's degree in quantitative disciplines (Science, Technology, Engineering, Mathematics). Good knowledge and experience developing optimization, simulation, and statistical models. Strong analytical thinking skills; ability to creatively solve business problems, innovating new approaches where required. Strong hands-on programming skills in Python, SQL, Hadoop/Hive; additional knowledge of Spark, Scala, R, Java desired but not mandatory. Good working knowledge of mathematical and statistical concepts, MILP, algorithms, and computational complexity. Passion for solving interesting and relevant real-world problems using a data science approach. Experience in implementing advanced statistical techniques like regression, clustering, PCA, and forecasting (time series). Able to produce reasonable documents/narratives suggesting actionable insights. Excellent communication skills.
Ability to clearly tell data-driven stories through appropriate visualizations, graphs, and narratives. Self-driven and results-oriented; able to meet tight timelines. Strong team player with the ability to collaborate effectively across geographies/time zones. Know more about us here: Life at Target - https://india.target.com/ Benefits - https://india.target.com/life-at-target/workplace/benefits Culture - https://india.target.com/life-at-target/belonging
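The "forecasting (time series)" technique listed above can be illustrated with the simplest possible baseline: forecast the next value as the mean of a trailing window. Window size and the demand series are hypothetical; real work would compare such a baseline against richer models.

```python
# Hedged sketch: a naive moving-average forecaster, the usual baseline
# for the time-series forecasting work described above.

def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

demand = [10, 12, 11, 13, 12, 14]  # illustrative weekly demand
print(moving_average_forecast(demand))  # mean of [13, 12, 14] → 13.0
```

Any candidate model (ARIMA, gradient boosting, deep nets) should beat this baseline on held-out data before it earns its complexity.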
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful. Overview about TII At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations. Pyramid overview A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security and Personalization rely on. Every Scientist on Target's Data Sciences team can expect to do modeling and data science, software/product development of highly performant code for model performance, and to elevate Target's culture and apply retail domain knowledge. Position Overview: As a Senior Data Scientist, you will be involved in end-to-end development of Ad-Tech products and capabilities that fulfil strategic priorities and power the growth of Roundel, Target's retail media business.
You will leverage your understanding of data and algorithms to build prototypes and run experiments to evaluate them against given specifications. Following agile processes, you will implement and deploy scalable data science solutions using MLOps best practices across the model development life cycle. You will collaborate with product and business partners to seek feedback on the effectiveness of solutions and identify future opportunities for enhancements. You will work with your peers to create a maintainable, well-tested codebase with relevant documentation. The exciting part of retail and media? It's always changing! Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs. About You: 4-year degree in quantitative disciplines (Science, Technology, Engineering, Mathematics) or equivalent experience. 3+ years of professional experience or equivalent industry experience. Good knowledge and experience developing optimization, simulation and statistical models. Strong analytical thinking skills; ability to creatively solve business problems, innovating new approaches where required. Strong hands-on programming skills in Python, SQL, Spark, Hadoop/Hive. Good working knowledge of mathematical and statistical concepts, MILP, algorithms and computational complexity. Passion for solving interesting and relevant real-world problems using a data science approach. Experience in implementing advanced statistical techniques like regression, clustering, PCA, and forecasting (time series). Able to produce reasonable documents/narratives suggesting actionable insights. Excellent communication skills.
Ability to clearly tell data-driven stories through appropriate visualizations, graphs and narratives. Self-driven and results-oriented; able to meet tight timelines. Strong team player with the ability to collaborate effectively across geographies/time zones. Know more about us here: Life at Target - https://india.target.com/ Benefits - https://india.target.com/life-at-target/workplace/benefits Culture - https://india.target.com/life-at-target/belonging
Posted 1 week ago
3.0 - 5.0 years
1 - 4 Lacs
Hyderabad
Work from Office
Job Information Job Opening ID ZR_1899_JOB Date Opened 29/04/2023 Industry Technology Job Type Work Experience 3-5 years Job Title Phantom/SOAR City Hyderabad Province Telangana Country India Postal Code 500081 Number of Positions 5 Phantom/SOAR and Python experience with good development skills. Good ITIS knowledge; understanding and building playbooks in an on-prem, multi-site clustered Splunk environment. Practical experience in monitoring and tuning playbooks and use cases. Good knowledge of creating custom apps with dashboards/reports/alerts, and a demonstrated understanding of Splunk apps. Ownership of delivery for small to large Splunk onboarding projects. Ability to automate repetitive tasks and reduce noise. Implementing and supporting Phantom, with good Python, Red Hat and Windows experience. Location: Pan India
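One way to picture the "reduce noise" duty above: collapse duplicate alerts sharing a signature within a time window before they trigger a playbook. This is a generic sketch — the alert fields are hypothetical and this is not the Phantom/SOAR playbook API.

```python
# Sketch of alert noise reduction: keep one alert per (rule, host)
# signature per `window` seconds. Field names are illustrative only.

def dedupe_alerts(alerts, window=300):
    """Suppress repeats of the same (rule, host) alert within `window` seconds."""
    last_seen, kept = {}, []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        sig = (alert["rule"], alert["host"])
        if sig not in last_seen or alert["ts"] - last_seen[sig] >= window:
            kept.append(alert)
            last_seen[sig] = alert["ts"]
    return kept

alerts = [
    {"ts": 0,   "rule": "brute-force", "host": "web1"},
    {"ts": 60,  "rule": "brute-force", "host": "web1"},  # inside window: suppressed
    {"ts": 400, "rule": "brute-force", "host": "web1"},  # outside window: kept
]
print(len(dedupe_alerts(alerts)))  # → 2
```

In a real SOAR deployment the same idea would run as a pre-filter stage, with the window and signature fields tuned per use case.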
Posted 1 week ago
5.0 - 8.0 years
5 - 9 Lacs
Pune
Work from Office
Job Information Job Opening ID ZR_1761_JOB Date Opened 21/03/2023 Industry Technology Job Type Work Experience 5-8 years Job Title Database Management Specialist City Pune Province Maharashtra Country India Postal Code 411013 Number of Positions 1 Mandatory skills: SQL Server 2014 through SQL Server 2019 experience. Experience in database administration, including installation, configuration, user access management, backup/recovery, monitoring and performance tuning, space utilization, DB migration, DB mirroring, and partitioning. Strong knowledge of logical and physical database design activities and DDL (data definition language) constructs. Ticketing management experience. Troubleshooting and analytical skills; attention to detail. Experience with OEM, Idera, or other database monitoring tools. Oracle 19c and Exadata experience. PowerShell or other programming experience for automating SQL installs and other processes. Azure, AWS, and cloud database management experience.
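The backup/recovery duty above usually includes a retention policy. A minimal sketch of one such rule, keeping only the newest N full backups; the file-naming convention and the (name, age) representation are assumptions for illustration (a real automation would likely be PowerShell against msdb backup history):

```python
# Sketch: decide which dated backup files to prune, keeping the newest
# `keep` full backups. Input is a list of (filename, age_in_days) pairs.

def backups_to_prune(backups, keep=7):
    """Return filenames of backups beyond the `keep` newest."""
    ordered = sorted(backups, key=lambda b: b[1])  # newest (smallest age) first
    return [name for name, _ in ordered[keep:]]

backups = [("full_%02d.bak" % d, d) for d in range(10)]  # ages 0..9 days
print(backups_to_prune(backups))  # the three oldest files
```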
Posted 1 week ago
0 years
0 Lacs
Greater Hyderabad Area
On-site
Description Role Description Our Tech and Product team is tasked with innovating and maintaining a massive distributed systems engineering platform that ships hundreds of features to production for tens of millions of users across all industries every day. Our users count on our platform to be highly reliable, lightning fast, supremely secure, and to preserve all of their customizations and integrations every time we ship. Our platform is deeply customizable to meet the differing demands of our vast user base, creating an exciting environment filled with complex challenges for our hundreds of agile engineering teams every day. Required Skills And Experience Salesforce is looking for Site Reliability Engineers to build and manage a multi-substrate Kubernetes and microservices platform which powers Core CRM and a growing set of applications across Salesforce. This platform provides the ability to develop and deploy microservices quickly and efficiently, accelerating their path to production. In this role, you are responsible for the high availability of a large fleet of clusters running various technologies such as Kubernetes, software load balancers, service mesh and so on. You'll gain valuable experience troubleshooting real production issues, which will expand your knowledge of the architecture of k8s ecosystem services and internals. You will contribute code wherever possible to drive improvement. You will drive automation efforts in Python/Golang/Terraform/Spinnaker/Puppet/Jenkins to eliminate manual work from day-to-day operations. You will help improve the visibility of the platform by implementing necessary monitoring and metrics. You'll implement self-healing mechanisms to proactively fix issues and reduce manual labor. You will get a chance to improve your communication and collaboration skills working with various other infrastructure teams across Salesforce. You will be interacting with a highly innovative and creative team of developers and architects.
You will evaluate new technologies to solve problems as needed. You are the ideal candidate if you have a passion for live-site service ownership. You have demonstrated a strong ability to manage large distributed systems. You are comfortable troubleshooting complex production issues that span multiple disciplines. You bring a solid understanding of how infrastructure software components work. You are able to automate tasks using a modern high-level language. You have good written and spoken communication skills. Required Skills: Experience operating large-scale distributed systems, especially in cloud environments. Excellent troubleshooting skills with the ability to learn new technologies in complex distributed systems. Strong working experience with Linux systems administration and good knowledge of Linux internals. Good experience in any of the scripting/programming languages: Python, Golang, etc. Basic knowledge of networking protocols and components: TCP/IP stack, switches, routers, load balancers. Experience with any of Puppet, Chef, Ansible or other DevOps tools. Experience with any of the monitoring tools like Nagios, Grafana, Zabbix, etc. Experience with Kubernetes, Docker or service mesh. Experience with AWS, Terraform, Spinnaker. A continuous learner and a critical thinker. A team player with great communication skills. Areas where you may be working include highly scalable, highly performant distributed systems with highly available and durable data storage capabilities that ensure high availability of the stack above, including databases. A thorough understanding of distributed systems, system programming, and working with system resources is required. Practical knowledge of challenges regarding clustering solutions, hands-on experience deploying your code in public cloud environments, working knowledge of Kubernetes, and working with APIs provided by various public cloud vendors to handle data are highly desired skills.
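The "self-healing mechanisms" mentioned above follow a common pattern: probe a component and take a recovery action after N consecutive failed checks. A minimal sketch — the probe and restart callables are stand-ins, not a real Kubernetes or cloud API:

```python
# Sketch of a self-healing loop: run health probes and trigger a restart
# after `max_failures` consecutive failures. `probe` and `restart` are
# hypothetical stand-ins for real checks/actions.

def self_heal(probe, restart, max_failures=3, checks=10):
    """Run `checks` probes; restart after `max_failures` consecutive fails.
    Returns the number of restarts triggered."""
    failures = restarts = 0
    for _ in range(checks):
        if probe():
            failures = 0  # healthy probe resets the failure streak
        else:
            failures += 1
            if failures >= max_failures:
                restart()
                restarts += 1
                failures = 0
    return restarts

health = iter([True, False, False, False, True, False, False, False, True, True])
restarted = []
print(self_heal(lambda: next(health), lambda: restarted.append(1)))  # → 2
```

In production this logic would typically live in a controller or operator, with backoff and alerting so a flapping service does not restart forever.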
Benefits & Perks Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training with Trailhead.com Exposure to executive thought leaders and regular 1:1 coaching with leadership Volunteer opportunities and participation in our 1:1:1 model for giving back to the community For more details, visit https://www.salesforcebenefits.com/
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role BlackRock Overview: BlackRock is one of the world’s preeminent asset management firms and a premier provider of global investment management, risk management and advisory services to institutional, intermediary and individual investors around the world. BlackRock offers a range of solutions — from rigorous fundamental and quantitative active management approaches aimed at maximizing outperformance to highly efficient indexing strategies designed to gain broad exposure to the world’s capital markets. Our clients can access our investment solutions through a variety of product structures, including individual and institutional separate accounts, mutual funds and other pooled investment vehicles, and the industry-leading iShares® ETFs. Aladdin Financial Engineering Group (AFE) AFE is a diverse and global team with a keen interest and expertise in all things related to technology and financial analytics. The group is responsible for the research and development of quantitative financial and behavioral models and tools across many different areas – single-security pricing, prepayment models, risk, return attribution, liquidity, optimization and portfolio construction, scenario analysis and simulations, etc. – and covering all asset classes. The group is also responsible for the technology platform that delivers those models to our internal partners and external clients, and their integration with Aladdin. AFE conducts leading research on the areas above, delivering state-of-the-art models. AFE publishes applied scientific research frequently, and our members present regularly at leading industry conferences. AFE engages constantly with the sales team in client visits and meetings. Job Description You can help conduct research to build quantitative financial models and portfolio analytics that help manage most of the money of the world’s largest asset manager. You can bring your whole self to the job.
From the top of the firm down, we embrace the values, identities and ideas brought by our employees. We are looking for curious people with a strong background in quantitative research, data science and machine learning, who have excellent problem-solving skills and an insatiable appetite for learning and innovating, adding to BlackRock’s vibrant research culture. If any of this excites you, we are looking to expand our team. We currently have a quant researcher role with the AFE Investment AI (IAI) Team. The securities market is undergoing a massive transformation as the industry embraces machine learning and, more broadly, AI, to help evolve the investment process. Pioneering this journey at BlackRock, the team delivers applied AI investment analytics to help both BlackRock and Aladdin clients achieve scale through automation while safeguarding alpha generation. The IAI team combines AI/ML methodology and technology skills with deep subject matter expertise in fixed income, equity, and multi-asset markets, and the buy-side investment process. We are building next-generation liquidity, security similarity and pricing models, leveraging our expertise in quantitative research, data science and machine learning. The models we build use innovative machine learning approaches and cutting-edge econometric/statistical methods and tools; they have real practical value and are used by traders, portfolio managers and risk managers representing different investment styles (fundamental vs. quantitative) and across different investment horizons. Research is conducted predominantly in Python and Scala, and implemented into production by a separate, dedicated team of developers.
These models have a huge footprint of usage across the entire Aladdin client base, so we place special emphasis on scalability and on ensuring adherence to BlackRock’s rigorous standards of model governance and control. Background And Responsibilities We are looking to hire a quant researcher with 4+ years’ experience to join the AFE Investment AI team, focusing on Trading and Liquidity, to work closely with other data scientists/researchers to support Risk Managers, Portfolio Managers and Traders. We build cutting-edge liquidity analytics using a wide range of ML algorithms and a broad array of technologies (Python, Scala, Spark/Hadoop, GCP, Azure). This role is a great opportunity to work closely with the Portfolio Managers, Risk Managers and Trading team, spanning areas such as: performing analysis of large data sets comprising market data, trading data and derived analytics; evaluating trading data, including pre-processing, feature engineering, variable selection, dimensionality reduction, etc.; leveraging machine learning to extract insights from data and working with investment managers to put those into action; designing and developing models/ML solutions for Trading & Liquidity; implementing models and integrating them into the Aladdin analytical system in accordance with BlackRock’s model governance policy. Qualifications B.Tech / B.E. / M.Sc. degree in a quantitative discipline (Mathematics, Physics, Computer Science, Finance or a similar area); M.Tech. / PhD is a plus. Strong background in Mathematics, Statistics, Probability, and Linear Algebra. Knowledgeable about data mining, data analytics, and data modeling. Confident in building models to solve problems, including time series forecasting and clustering, with hands-on experience across a range of statistical and machine learning approaches. Ability to work independently and efficiently in a fast-paced and team-oriented environment. Knowledge of fixed income and credit instruments and markets a plus.
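One concrete instance of the feature-engineering step listed above: a rolling z-score that flags when a new observation (say, a trade size) is unusual relative to its recent history. The window and the data are illustrative assumptions; production pipelines would use numpy/pandas over large datasets.

```python
# Sketch: rolling z-score feature for a market-data series — each point
# scored against the mean/stdev of the preceding `window` observations.

import statistics

def rolling_zscore(series, window=4):
    """z-score of each point vs. the trailing `window` points before it."""
    out = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sd = statistics.mean(past), statistics.stdev(past)
        out.append((series[i] - mu) / sd if sd else 0.0)
    return out

sizes = [100, 102, 98, 100, 200]  # a spike at the end
print(rolling_zscore(sizes))  # large positive z-score for the spike
```

Features like this feed directly into the liquidity and anomaly models the role describes, where "how unusual is this print?" is often the first question.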
Previous experience or knowledge in market liquidity is not required but a big plus. For professionals with no prior financial industry experience, this position is a unique opportunity to gain in-depth knowledge of the asset management process in a world-class organization. Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. 
It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description The Kaleris IT Infrastructure Engineer is responsible for providing IT infrastructure support to customers using N4 software, both onsite and cloud-hosted. This role involves working with a variety of customers to assist with strategic planning, technical design, and implementation, including on-premises/cloud hosting design, firewall setup and configuration, security audits, health checks, network hardware and software, server sizing and maintenance (such as patch management and antivirus), and WAN/communication links. Responsibilities Maintain and troubleshoot customer IT infrastructure via Managed Services, including network hardware and software, servers, disaster recovery, storage, WAN/communication links, and cloud hosting. Design and maintain N4 on-premises and cloud environments for hosting N4 TOS software. Monitor and diagnose N4 TOS infrastructure incidents impacting software and underlying systems. Consult and troubleshoot customer-reported issues with N4 TOS software and infrastructure environment. Review and administer customer hardware/cloud setups and configurations. Respond to and resolve customer issues in accordance with Service Level Agreements (SLAs). Be on standby for critical P1 incidents, with availability to work weekends or shifts as required to support customers 24/7. Requirements Min 3 years of experience Experience with server centralization, consolidation, and virtualization of servers, storage, and overall IT architecture. Deep technical knowledge of current network hardware, protocols, and internet standards. Strong understanding of underlying operating systems and their configurations. Good understanding of database technologies, including scaling, redundancy, and backup. Experience with network capacity planning, network security principles, and best practices. Ability to conduct research into networking issues and products as required. Excellent hardware troubleshooting experience. 
Expertise/qualification in: load balancers; clustering; Tomcat; Oracle 11g or 11g RAC, SQL, and MySQL databases; Red Hat Linux 5; RAID; Microsoft Server 2008; ActiveMQ; Microsoft SQL Server 2012; JMS; firewalls. Knowledge, Skills, And Abilities Experience in the maritime or logistics industry is a significant plus. Familiarity with Navis TOS is a big advantage. Experience working in distributed virtual teams. Demonstrates a positive attitude and strong work ethic. Meticulous organizational and multitasking skills. Excellent customer service and follow-up skills. Ability to work well with others and follow instructions. Multilingual capabilities are a plus. Kaleris is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Posted 1 week ago
4.0 - 7.0 years
6 - 9 Lacs
Mumbai
Work from Office
Key Responsibilities: Design, implement, and manage Azure Kubernetes Service (AKS) clusters. Monitor and optimize the performance of AKS clusters. Troubleshoot and resolve issues related to AKS and containerized applications. Implement security measures to protect AKS clusters and containerized applications. Collaborate with development teams to support application deployment and maintenance. Maintain documentation for AKS configurations, processes, and procedures. Automate deployment, scaling, and management of containerized applications using AKS. Participate in on-call rotation for after-hours support. Upgrade Kubernetes nodes.
Posted 1 week ago
50.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Who is ERM? ERM is a leading global sustainability consulting firm, committed for nearly 50 years to helping organizations navigate complex environmental, social, and governance (ESG) challenges. We bring together a diverse and inclusive community of experts across regions and disciplines, providing a truly multicultural environment that fosters collaboration, professional growth, and meaningful global exposure. As a people-first organization, ERM values well-being, career development, and the power of collective expertise to drive sustainable impact for our clients—and the planet. Introducing our new Global Delivery Centre (GDC) Our Global Delivery Centre (GDC) in India is a unified platform designed to deliver high-value services and solutions to ERM’s global clientele. By centralizing key business and consulting functions, we streamline operations, optimize service delivery, and enable our teams to focus on what matters most—advising clients on sustainability challenges with agility and innovation. Through the GDC, you will collaborate with international teams, leverage emerging technologies, and further enhance ERM’s commitment to excellence—amplifying our shared mission to make a lasting, positive impact. Job Objective ERM is seeking a Modelling & Data Analyst to develop algorithms, financial models, and analytical tools that link non-financial ESG data with financial outcomes. This role is ideal for a professional with a strong quantitative background, proficient in statistical modelling, machine learning, and financial analysis. The candidate will work on transforming sustainability materiality and maturity frameworks into automated, scalable models that assess performance and valuation impacts. This is a non-client-facing offshore role focused on data-driven ESG research and tool development. The Ideal Candidate You bring a robust background in financial modelling and valuation with a deep passion for sustainability (e.g. 
climate, nature, employee wellbeing, sustainable revenue). You have demonstrated success in integrating ESG factors into transaction analysis and investment decision-making. With experience in investment banking, strategy consulting, or transaction advisory—and preferably exposure to private equity—you are adept at turning complex and qualitative ESG concepts into actionable financial insights. You will be able to communicate with senior stakeholders and provide thought leadership in this evolving space. RESPONSIBILITIES: Quantitative Research & Algorithm Development Design data-driven models that quantify the impact of ESG factors on financial performance. Develop statistical algorithms that integrate materiality and maturity definitions into predictive financial models. Leverage machine learning techniques (e.g., regression analysis, clustering, time-series forecasting) to identify trends in ESG data. Data Analysis & Model Development Build automated financial modelling tools that incorporate non-financial (ESG) data and financial metrics. Develop custom ESG performance indicators that can be used in due diligence, exit readiness, and investment decision-making. Standardize ESG data inputs and apply weightings/scoring methodologies to determine financial relevance. Tool Development & Automation Work with developers to code ESG models into dashboards or automated financial tools. Implement AI/ML techniques to enhance model predictive capabilities. Ensure models are scalable and adaptable across multiple industries and investment types. Data Management & Validation Collect, clean, and structure large datasets from financial reports, ESG databases, and regulatory filings. Conduct sensitivity analyses to validate model accuracy and effectiveness. Ensure consistency in ESG metrics and definitions across all analytical frameworks.
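The weighting/scoring step described in the responsibilities above can be sketched in a few lines of Python. The metric names, bounds, and weights below are illustrative assumptions only, not ERM's actual framework; a production model would also invert metrics where lower raw values are better (e.g. emissions).

```python
# Hypothetical sketch of an ESG weighting/scoring step: min-max normalize
# raw metric values into [0, 1], then combine them with materiality weights.
# Assumes higher normalized value = better; inversion of "bad" metrics omitted.

def normalize(value, lo, hi):
    """Min-max scale a raw metric into [0, 1]; clamp out-of-range inputs."""
    if hi == lo:
        return 0.0
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def esg_score(metrics, weights, bounds):
    """Weighted sum of normalized metrics; weights are assumed to sum to 1."""
    total = 0.0
    for name, weight in weights.items():
        lo, hi = bounds[name]
        total += weight * normalize(metrics[name], lo, hi)
    return round(total, 4)

# Illustrative inputs (invented metric names and ranges)
metrics = {"emissions_intensity": 120.0, "employee_turnover": 0.12}
bounds = {"emissions_intensity": (0.0, 400.0), "employee_turnover": (0.0, 0.5)}
weights = {"emissions_intensity": 0.6, "employee_turnover": 0.4}
print(esg_score(metrics, weights, bounds))
```

The weighted score can then feed a valuation adjustment or a due-diligence scorecard, which is the "financial relevance" step the role describes.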
REQUIRED SKILLS & EXPERIENCE: Educational Background Master’s in Finance, Econometrics, Data Science, Quantitative Economics, Mathematics, Statistics, or a related field. CFA, FRM, or other financial analysis certifications are a plus. Technical & Analytical Proficiency Financial & Statistical Modelling: Advanced Excel, Python, R, or MATLAB for quantitative research and financial modelling. Machine Learning & AI: Proficiency in ML algorithms for forecasting, clustering, and risk modelling. Data Analysis & Automation: Experience with SQL, Power BI, or other data visualization tools. ESG & Financial Integration: Understanding of ESG materiality frameworks (SASB, MSCI, S&P, etc.) and their impact on valuations. Professional Experience Minimum 3-5 years in quantitative research, financial modelling, or ESG data analysis. Experience in building proprietary financial tools/models for investment firms or financial consultancies. Strong background in factor modelling, risk assessment, and alternative data analysis. Personal Attributes Highly analytical, structured thinker with attention to detail. Ability to work independently in an offshore role, managing multiple datasets and models. Passion for quantifying ESG impact in financial terms.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Juniper, we believe the network is the single greatest vehicle for knowledge, understanding, and human advancement the world has ever known. To achieve real outcomes, we know that experience is the most important requirement for networking teams and the people they serve. Delivering an experience-first, AI-Native Network pivots on the creativity and commitment of our people. It requires a consistent and committed practice, something we call the Juniper Way. Job Title: Software Engineer III Experience: 2+ yrs of experience The AIOps team’s mission is to use advanced analytics, including AI/ML, to develop end-to-end solutions to automate (detect, remediate) networking workflows for our customers, and help extend AI/ML across the Juniper portfolio. We are looking for an experienced engineer to join our growing data science team of AI/ML and data-at-scale engineers. Our ideal candidate brings skills and experience from developing performant inferencing implementations, practices sound data science hygiene when developing ML models, and is a team player. As a data scientist, you will collaborate with product managers and domain specialists to identify real customer problems and use your background in NLP/ML to develop solutions that scale to terabytes of data.
Qualifications/Requirements: BS/MS in Computer Science or Data Science, Electrical Engineering, Statistics, Applied Math or equivalent fields with a strong mathematical background General understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, graph ML, etc. Experience building data science-driven solutions including data collection, feature selection, model training, post-deployment validation Strong hands-on coding skills (preferably in Python) processing large-scale data sets and developing machine learning models Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow Works well in a team setting and is self-driven Desired Experience: Experience with some/equivalent: AWS, Flink, Spark, Kafka, Elastic Search, Kubeflow Knowledge of NLP technology Demonstrable problem-solving ability Conceptual understanding of system design concepts Responsibilities: Collaborate with the team to understand features, work with domain experts to identify relevant “signals” during feature engineering, deliver generic and performant ML solutions Keep up to date with the newest technology trends Communicate results and ideas to key decision makers Implement new statistical or other mathematical methodologies as needed for specific models or analysis Optimize joint development efforts through appropriate database use and project design About Juniper Networks Juniper Networks challenges the inherent complexity that comes with networking and security in the multicloud era. We do this with products, solutions and services that transform the way people connect, work and live. We simplify the process of transitioning to a secure and automated multicloud environment to enable secure, AI-driven networks that connect the world. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter, LinkedIn and Facebook.
WHERE WILL YOU DO YOUR BEST WORK? Wherever you are in the world, whether it's downtown Sunnyvale or London, Westford or Bengaluru, Juniper is a place that was founded on disruptive thinking - where colleague innovation is not only valued, but expected. We believe that the great task of delivering a new network for the next decade is delivered through the creativity and commitment of our people. The Juniper Way is the commitment to all our colleagues that the culture and company inspire their best work - their life's work. At Juniper we believe this is more than a job - it's an opportunity to help change the world. At Juniper Networks, we are committed to elevating talent by creating a trust-based environment where we can all thrive together. If you think you have what it takes, but do not necessarily check every single box, please consider applying. We’d love to speak with you. Additional Information for United States jobs: ELIGIBILITY TO WORK AND E-VERIFY In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire. Juniper Networks participates in the E-Verify program. E-Verify is an Internet-based system operated by the Department of Homeland Security (DHS) in partnership with the Social Security Administration (SSA) that allows participating employers to electronically verify the employment eligibility of new hires and the validity of their Social Security Numbers. Information for applicants about E-Verify / E-Verify Información en español: This Company Participates in E-Verify / Este Empleador Participa en E-Verify Immigrant and Employee Rights Section (IER) - The Right to Work / El Derecho a Trabajar E-Verify® is a registered trademark of the U.S. Department of Homeland Security. Juniper is an Equal Opportunity workplace.
We do not discriminate in employment decisions on the basis of race, color, religion, gender (including pregnancy), national origin, political affiliation, sexual orientation, gender identity or expression, marital status, disability, genetic information, age, veteran status, or any other applicable legally protected characteristic. All employment decisions are made on the basis of individual qualifications, merit, and business need.
Posted 1 week ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Find your future at United! We’re reinventing what our industry looks like, and what an airline can be – from the planes we fly to the people who fly them. When you join us, you’re joining a global team of 100,000+ connected by a shared passion with a wide spectrum of experience and skills to lead the way forward. Achieving our ambitions starts with supporting yours. Evolve your career and find your next opportunity. Get the care you need with industry-leading health plans and best-in-class programs to support your emotional, physical, and financial wellness. Expand your horizons with travel across the world’s biggest route network. Connect outside your team through employee-led Business Resource Groups. Create what’s next with us. Let’s define tomorrow together. Job Overview And Responsibilities The United Offshore SQL DBA Team supports critical after-hours work for timely releases and overnight patching activities, along with an 8pm-8am rotational on-call for critical DB operations monitoring and incident support. The SQL DBA team in India works with offshore development teams on code review and troubleshooting of performance issues, essential for United’s 24x7 technology support structure. They are actively engaged in migration projects for SQL desupported-version remediation and supporting upgrades. The team also works on AWS setup and support across all areas of cloud migration and production support.
SQL Server Production Support Off-hours support for all Tier1 – Tier5 SQL Databases and Instances. Create physical database structures based on physical design for development, test, and production environments Coordinate with systems engineers to configure servers for DBMS product installation and database creation Install, configure, and maintain DBMS product software on database and application servers Assist in the consultation to application development teams on DBMS product technical issues and techniques Implement monitoring procedures to maximize availability and performance of the database, while meeting defined SLA's Investigate, troubleshoot, and resolve database problems Communicate the required downtime with the application development teams and systems engineers to implement approved changes Identify, define and implement database backup / recovery and security strategies Install and support of DBMS (Database Management Systems) software and tools Perform various database activities which include monitoring, tuning, and troubleshooting, with appropriate supervision, if required Review deployment for all SQL database changes Complete pre-deployment code reviews with application teams as requested Review and provide feedback on all SQL code updates Work with deployment managers on dates and times for releases, including assignments Performance tuning and code review Migrations and DB setup (Cloud-AWS, SQL) Patching of all SQL Server and some Couchbase Work with application teams to create schedule Send advance and timely notifications for database instances to be patched Conduct database patching including any troubleshooting and validation post patching Code release and Technical Documentation Backup Recovery and DR This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt.
Ltd - a wholly owned subsidiary of United Airlines Inc. Qualifications What’s needed to succeed (Minimum Qualifications): Bachelor's degree or 4 years of relevant work experience in Computer Science, Engineering, or related discipline Microsoft SQL Server Certification 5 Years of related experience Proficient in SQL development and administration disciplines with current hands-on experience with the latest SQL Server releases including SQL 2019, 2017, 2016 Strong background and experience with all BC and DR capabilities of Microsoft SQL Server including Always-On, Mirroring, Log Shipping, and Clustering with a practical understanding of other Infrastructure BC/DR capabilities Leverage metrics to drive capacity planning and trending to proactively identify potential problems and mitigate before they result in customer impact Understand the place of automation and standardization when delivering stable, maintainable, and performant database services at scale Perform platform, database, and query optimization Must be legally authorized to work in India for any employer without sponsorship Must be fluent in English (written and spoken) Successful completion of interview required to meet job qualification Reliable, punctual attendance is an essential function of the position What will help you propel from the pack (Preferred Qualifications): Master's degree in Computer Science, Engineering, or related discipline Microsoft/AWS certifications on DB track preferred Hands-On experience with AWS native databases, compute, storage, monitoring technologies, and continuous integration pipelines Experience implementing automation of Microsoft SQL Server deployment and maintenance, and support activities preferred Collaborate both vertically and horizontally to evolve overall database services and technology strategies Experience supporting SSAS, SSIS, and SSRS Very large Database (10+ TB) experience preferred Experience with PowerShell or other scripting languages a plus
Experience with PCI, SOX, GDPR, and SQL Auditing a plus. Ability to support 24x7 United operations databases. Quick learner of new technologies and guidelines, with a flexible, positive attitude; a team player capable of independent decision making. GGN00001993
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
What We Offer At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects. Because we believe that your career path should be as unique as you are. Group Summary Transforming mobility. Making automotive technology that is smarter, cleaner, safer and lighter. That’s what we’re passionate about at Magna Powertrain, and we do it by creating world-class powertrain systems. We are a premier supplier for the global automotive industry with full capabilities in design, development, testing and manufacturing of complex powertrain systems. Our name stands for quality, environmental consciousness, and safety. Innovation is what drives us and we drive innovation. Dream big and create the future of mobility at Magna Powertrain. Job Responsibilities Job Introduction In this challenging and interesting position, you are the expert for all topics related to databases. You will be part of an international team that ensures the smooth and efficient operation of various database systems, including Microsoft SQL Server, Azure SQL, Oracle, DB2, MariaDB, and PostgreSQL. Your responsibilities include providing expert support for database-related issues, troubleshooting problems promptly, and collaborating with users and business stakeholders to achieve high customer satisfaction. Your expertise in cloud database services and general IT infrastructure will be crucial in supporting the development of the future data environment at Magna Powertrain. Major Responsibilities Responsible for ensuring the smooth and efficient operation of all database systems, including but not limited to Microsoft SQL Server, Azure SQL, Oracle, DB2, MariaDB, PostgreSQL.
Provide expert support for database-related issues, troubleshoot and resolve problems quickly as they arise to ensure minimal disruption. Deliver professional assistance for database-related requests, working collaboratively with users and business stakeholders to achieve high customer satisfaction. Manage the installation, implementation, configuration, administration, and decommissioning of database systems. Plan and execute database upgrades, updates, migrations, and implement changes, new patches and versions when required. Monitor database systems, database activities and overall database performance proactively, to identify issues and implement solutions to optimize performance. Develop and implement backup and recovery strategies, execute backups and restores to ensure data integrity and availability across all database systems. Perform database tuning and optimization, including indexing, query optimization, and storage management. Implement and maintain database security measures, including user access controls, encryption, and regular security audits to protect sensitive data from unauthorized access and breaches. Create and maintain proper documentation for all database systems and processes. Ensure constant evaluation, analysis and modernization of the database systems. Knowledge and Education Bachelor’s degree in computer science / information technology, or equivalent (Master’s preferred). Work Experience Minimum 5-8 years of proven experience as a database administrator in a similar position. Excellent verbal and written communication skills in English. German language skills are optional but an advantage. Skills And Competencies We are looking for a qualified person with: In-depth expertise in database concepts, theory and best practices, including but not limited to high availability/clustering, replication, indexing, backup and recovery, performance tuning, database security, data integrity, data modeling and query optimization.
Expert knowledge of Microsoft SQL Server and its components, including but not limited to Failover Clustering, SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), and SQL Server Analysis Services (SSAS). Excellent knowledge of various database management systems, including but not limited to Oracle, IBM DB2, MariaDB and PostgreSQL. Familiarity with further database management systems (e.g. MySQL, MongoDB, Redis, etc.) is an advantage. Extensive expertise in Microsoft Azure database services (Azure SQL Databases, Azure SQL Managed Instances, SQL Server on Azure VMs). Proficiency with other major cloud platforms such as AWS or Google Cloud, as well as experience with their cloud database services (e.g. Amazon RDS, Google Cloud SQL), is an advantage. Comprehensive understanding of cloud technologies, including but not limited to cloud architecture, cloud service models and cloud security best practices. Good general knowledge about IT infrastructure, networking, firewalls and storage systems. High proficiency in T-SQL and other query languages. Knowledge of other scripting languages (e.g. Python, PowerShell, Visual Basic, etc.) is an advantage. Experience with Databricks and similar data engineering tools for big data processing, analytics, and machine learning is an advantage. A working knowledge of Microsoft Power Platform tools including PowerApps, Power Automate, and Power BI is an advantage. Excellent analytical and problem-solving skills and strong attention to detail. Ability to work effectively in an intercultural team, strong organizational skills, and high self-motivation. Work Environment Regular overnight travel 10-25% of the time For dedicated and motivated employees, we offer an interesting and diversified job within a dynamic global team together with the individual and functional development in a professional environment of a global acting business.
Fair treatment and a sense of responsibility towards employees are core principles of the Magna culture. We strive to offer an inspiring and motivating work environment. Awareness, Unity, Empowerment At Magna, we believe that a diverse workforce is critical to our success. That’s why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability or gender identity. Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email or traditional mail to comply with GDPR requirements and your local Data Privacy Law. Worker Type Regular / Permanent Group Magna Powertrain
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Years of Experience: Candidates with 4+ years of hands on experience Position: Senior Associate Industry: Telecom / Network Analytics / Customer Analytics Required Skills: Successful candidates will have demonstrated the following skills and characteristics: Must Have Proven experience with telco data including call detail records (CDRs), customer churn models, and network analytics Deep understanding of predictive modeling for customer lifetime value and usage behavior Experience working with telco clients or telco data platforms (like Amdocs, Ericsson, Nokia, AT&T etc) Proficiency in machine learning techniques, including classification, regression, clustering, and time-series forecasting Strong command of statistical techniques (e.g., logistic regression, hypothesis testing, segmentation models) Strong programming in Python or R, and SQL with telco-focused data wrangling Exposure to big data technologies used in telco environments (e.g., Hadoop, Spark) Experience working in the telecom industry across domains such as customer churn prediction, ARPU modeling, pricing optimization, and network performance analytics Strong communication skills to interface with technical and business teams Nice To Have Exposure to cloud platforms (Azure ML, AWS SageMaker, GCP Vertex AI) Experience working with telecom OSS/BSS systems or customer segmentation tools Familiarity with network performance analytics, anomaly detection, or real-time 
data processing Strong client communication and presentation skills Roles And Responsibilities Assist in analytics projects within the telecom domain, driving design, development, and delivery of data science solutions Develop and execute on project & analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize data sources to be used, datasets to be acquired, and data & use case clarifications that are needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach the junior team members Implement necessary quality control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability for deployments Validate analysis outcomes and recommendations with all stakeholders including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge and firm-building activities Professional And Educational Background BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
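As one concrete instance of the churn-prediction and logistic-regression skills the posting above asks for, a churn scorer can be sketched from first principles. The features (normalized monthly spend, normalized support calls) and the toy data are invented for illustration; this is not PwC's or any client's model.

```python
# Minimal sketch of a churn-propensity model: logistic regression trained
# with per-sample gradient descent on toy data. Feature names and data
# are illustrative assumptions only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit weights and bias by gradient descent on the log loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of log loss wrt z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Churn probability for one customer feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)

# Toy data: [normalized monthly spend, normalized support calls] -> churned?
X = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.3, 0.8], [0.7, 0.3], [0.1, 0.7]]
y = [0, 0, 1, 1, 0, 1]
w, b = train(X, y)
print(predict(w, b, [0.2, 0.85]))   # many support calls -> high churn score
```

In practice the same shape of model would be fit with scikit-learn or Spark MLlib on CDR-derived features rather than hand-rolled gradient descent; the sketch only shows the mechanics an interviewer would probe.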
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
RabbitMQ Administrator - Prog Leasing1 Job Title: RabbitMQ Cluster Migration Engineer Job Summary We are seeking an experienced RabbitMQ Cluster Migration Engineer to lead and execute the seamless migration of our existing RabbitMQ infrastructure to a new AWS-hosted high-availability cluster environment. This role requires deep expertise in RabbitMQ, clustering, messaging architecture, and production-grade migrations with minimal downtime. Key Responsibilities Design and implement a migration plan to move existing RabbitMQ instances to a new clustered setup. Evaluate the current messaging architecture, performance bottlenecks, and limitations. Configure, deploy, and test RabbitMQ clusters (with or without federation/mirroring as needed). Ensure high availability, fault tolerance, and disaster recovery configurations. Collaborate with development, DevOps, and SRE teams to ensure smooth cutover and rollback plans. Automate setup and configuration using tools such as Ansible, Terraform, or Helm (for Kubernetes). Monitor message queues during migration to ensure message durability and delivery guarantees. Document all aspects of the architecture, configurations, and migration process. Required Qualifications Strong experience with RabbitMQ, especially in clustered and high-availability environments. Deep understanding of RabbitMQ internals: queues, exchanges, bindings, vhosts, federation, mirrored queues. Experience with RabbitMQ management plugins, monitoring, and performance tuning. Proficiency with scripting languages (e.g., Bash, Python) for automation. Hands-on experience with infrastructure-as-code tools (e.g., Ansible, Terraform, Helm). Familiarity with containerization and orchestration (e.g., Docker, Kubernetes). Strong understanding of messaging patterns and guarantees (at-least-once, exactly-once, etc.). Experience with zero-downtime migration and rollback strategies. Preferred Qualifications Experience migrating RabbitMQ clusters in production environments.
Working knowledge of cloud platforms (AWS, Azure, or GCP) and managed RabbitMQ services. Understanding of security in messaging systems (TLS, authentication, access control). Familiarity with alternative messaging systems (Kafka, NATS, ActiveMQ) is a plus.
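The "at-least-once vs exactly-once" guarantees mentioned in this posting are commonly bridged with an idempotent consumer: a broker like RabbitMQ may redeliver a message after a connection failure, so the consumer deduplicates by message ID to get effectively-once processing. A broker-agnostic sketch (the class and message IDs are hypothetical, not part of any RabbitMQ API):

```python
# Sketch of an idempotent consumer: under at-least-once delivery the broker
# may redeliver a message, so we record processed message IDs and skip
# duplicates. Toy in-memory version; production would use a durable store.
class IdempotentConsumer:
    def __init__(self):
        self.seen = set()        # processed message IDs (durable in practice)
        self.processed = []      # side effects applied exactly once

    def handle(self, msg_id, payload):
        if msg_id in self.seen:
            return False         # duplicate redelivery: skip the side effect
        self.seen.add(msg_id)
        self.processed.append(payload)
        return True

consumer = IdempotentConsumer()
deliveries = [("m1", "create order"), ("m2", "charge card"),
              ("m1", "create order")]   # m1 redelivered by the broker
for msg_id, payload in deliveries:
    consumer.handle(msg_id, payload)
print(consumer.processed)   # each logical message applied exactly once
```

During a cluster migration, the same pattern also makes a cutover safer: if messages are replayed from the old cluster into the new one, duplicates are absorbed rather than double-processed.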
Posted 1 week ago
The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.
Here are 5 major cities in India actively hiring for clustering roles: 1. Bangalore 2. Pune 3. Hyderabad 4. Mumbai 5. Delhi
The average salary range for clustering professionals in India varies based on experience levels. Entry-level positions may start at around INR 3-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-20 lakhs per annum.
In the field of clustering, a typical career path may look like: - Junior Data Analyst - Data Scientist - Senior Data Scientist - Tech Lead
Apart from expertise in clustering, professionals in this field are often expected to have skills in: - Machine Learning - Data Analysis - Python/R programming - Statistics
Here are 25 interview questions for clustering roles: - What is clustering and how does it differ from classification? (basic) - Explain the K-means clustering algorithm. (medium) - What are the different types of distance metrics used in clustering? (medium) - How do you determine the optimal number of clusters in K-means clustering? (medium) - What is the Elbow method in clustering? (basic) - Define hierarchical clustering. (medium) - What is the purpose of clustering in machine learning? (basic) - Can you explain the difference between supervised and unsupervised learning? (basic) - What are the advantages of hierarchical clustering over K-means clustering? (advanced) - How does DBSCAN clustering algorithm work? (medium) - What is the curse of dimensionality in clustering? (advanced) - Explain the concept of silhouette score in clustering. (medium) - How do you handle missing values in clustering algorithms? (medium) - What is the difference between agglomerative and divisive clustering? (advanced) - How would you handle outliers in clustering analysis? (medium) - Can you explain the concept of cluster centroids? (basic) - What are the limitations of K-means clustering? (medium) - How do you evaluate the performance of a clustering algorithm? (medium) - What is the role of inertia in K-means clustering? (basic) - Describe the process of feature scaling in clustering. (basic) - How does the GMM algorithm differ from K-means clustering? (advanced) - What is the importance of feature selection in clustering? (medium) - How can you assess the quality of clustering results? (medium) - Explain the concept of cluster density in DBSCAN. (advanced) - How do you handle high-dimensional data in clustering? (medium)
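Several of the questions above (the K-means mechanics, inertia, and the Elbow method) can be grounded with a from-scratch sketch. The 1-D toy data below is invented for illustration:

```python
# Lloyd's algorithm for K-means on toy 1-D data, plus the inertia value
# the Elbow method plots against k. Pure Python, illustrative only.
def kmeans(points, k, iters=20):
    centroids = points[:k]                     # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assignment step
            nearest = min(range(k), key=lambda c: (p - centroids[c]) ** 2)
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]   # update step
                     for i, c in enumerate(clusters)]
    return centroids

def inertia(points, centroids):
    """Sum of squared distances to the nearest centroid."""
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

data = [1.0, 1.2, 0.8, 8.0, 8.2, 7.8]          # two well-separated groups
for k in (1, 2, 3):
    print(k, round(inertia(data, kmeans(data, k)), 3))
# inertia drops sharply from k=1 to k=2, then flattens: the "elbow" is at k=2
```

Real interviews would expect you to mention the weaknesses this sketch exposes: sensitivity to initialization (hence k-means++), the need for feature scaling, and that inertia alone always decreases with k, which is exactly why the elbow heuristic or silhouette score is used.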
As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!