
1261 Clustering Jobs - Page 22

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

LinkedIn logo

JD: Windows & VMware Specialist

We are looking for a highly skilled Windows & VMware Specialist – L4 (Lead/Admin) to join our IT Infrastructure team. This is a customer-facing role that requires strong interpersonal communication, technical leadership, and advanced troubleshooting and analytical skills. The ideal candidate will lead complex support scenarios, drive operational excellence, and ensure high availability across Windows and VMware platforms.

Key Responsibilities:
- Lead the administration and lifecycle management of Windows Server infrastructure and VMware vSphere environments.
- Serve as the technical lead in critical incidents, ensuring timely resolution and customer satisfaction.
- Act as a primary technical point of contact in customer-facing discussions for system performance, upgrades, and issue resolution.
- Mentor and guide junior engineers, ensuring best practices are followed in operations and incident handling.
- Plan, implement, and support Windows Server (2012/2016/2019/2022) and VMware (vCenter, ESXi, DRS, HA, vMotion) environments.
- Perform root cause analysis (RCA) for major incidents and lead the development of preventive measures.
- Ensure patching, upgrades, backups, and monitoring are carried out with minimal impact to business operations.
- Develop and maintain technical documentation, SOPs, and architectural diagrams.
- Ensure compliance with security policies, hardening guidelines, and internal audit requirements.

Required Skills & Qualifications:
- 12+ years of enterprise IT experience, with 8+ years in a lead or senior-level role in Windows and VMware administration.
- Deep hands-on expertise in Windows Server administration (AD, GPO, DNS, DHCP, Failover Clustering) and VMware vSphere, including ESXi, vCenter, snapshots, DRS, and HA.
- Strong scripting and automation skills using PowerShell or equivalent.
- Experience with monitoring, backup, and disaster recovery tools like Veeam, SolarWinds, vRealize, or equivalent.
- Solid understanding of networking fundamentals (TCP/IP, VLANs, firewalls, VPN).
- Excellent customer-facing communication, problem-solving, and collaboration skills.
- Familiarity with ITIL practices, especially incident, change, and problem management.

Preferred Skills & Certifications:
- VMware Certified Professional (VCP) or Microsoft Windows Server certification (e.g., AZ-800/AZ-801 or MCSA).
- Experience in hybrid environments with cloud integration (Azure/AWS).
- Exposure to infrastructure automation or infrastructure-as-code (IaC) tools like Ansible and Terraform.
- Knowledge of compliance frameworks such as ISO 27001 or NIST is an added advantage.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

LinkedIn logo

Working as an AI/ML Engineer at Navtech, you will:
- Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
- Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
- Work closely with data engineers to develop scalable and reliable data pipelines.
- Experiment with different algorithms and techniques to improve model performance.
- Monitor and maintain production ML models, including retraining and model drift detection.
- Collaborate with software engineers to integrate ML models into applications and services.
- Document processes, experiments, and decisions for reproducibility and transparency.
- Stay current with the latest research and trends in machine learning and AI.

Who are we looking for, exactly?
- 2–4 years of hands-on experience building and deploying ML models in real-world applications.
- Strong knowledge of Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, and regularization.
- Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
- Familiarity with version control (Git) and basic software engineering practices.
- Consistently strong verbal and written communication skills, as well as strong analytical and problem-solving abilities.
- A master's or bachelor's degree (BS) in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education in English medium throughout.

We'll REALLY love you if you:
- Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
- Have knowledge of GenAI prompting and hosting of LLMs.
- Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
- Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
- Have exposure to deep learning and neural network architectures.
- Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
- Performance review and appraisal twice a year.
- Competitive pay package with additional bonus and benefits.
- Work with US, UK, and Europe based industry-renowned clients for exponential technical growth.
- Medical insurance cover for self and immediate family.
- Work with a culturally diverse team from different geographies.

About Us
Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US and Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe, and a certified AWS partner. You will join a team of talented developers, quality engineers, and product managers whose mission is to impact more than 100 million people across the world with technological services by the year 2030.

Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.

Posted 1 week ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Pune

Work from Office

Naukri logo

Job ID: 199776 | Required Travel: Minimal | Managerial: No | Location: India - Pune (Amdocs Site)

Who are we?
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both individual end users and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com.

In one sentence
The DB specialist has ultimate responsibility for physical and applicative database administration activities, supporting the end-to-end life cycle of the product from a database perspective.

What will your job look like?
- You will support the following levels: Physical - responsible for the physical and technically oriented aspects, e.g. storage, security, networking, and more; Application - you will handle all application-related issues (e.g. queries, users, embedded SQL, etc.).
- You will ensure database resources are sized accurately and a design strategy is developed to make sure that the database is maintained at a healthy size.
- You will ensure availability and performance of multi-database and application environments with very large volumes and sizes.
- You will perform routine DBA tasks like database maintenance, backups, recovery, tablespace management, upgrades, etc.
- You will execute periodic health checks for databases and recommend changes that should be executed in the production environment to ensure efficient performance.
- You will interact and work with multiple infra and IT teams as part of environment setup, maintenance, and support.
- You will work closely with developers, assisting them with database structure design according to business needs (e.g. indexes, constraints, integrity).

All you need is...
DB2 DBA on Mainframe (z/OS):
- DB objects - experience in creating and modifying DB2 objects: database, storage group, tablespace, tables, views, indexes, etc.
- Experience with DB2 commands: stop/start databases, the Display command, Cancel Thread, Term and Display Utility, and others.
- Hands-on experience with DB2 utilities: Reorg and Runstats, backup and recovery of tablespaces (copy and recover), Repair, and Load/Unload utilities.
- Ability to bind/rebind plans/packages and grant required accesses to packages/plans/collections.
- Experience in database monitoring.
- Experience in database replication - IBM DB2 Data Propagator.
- DB2 administration - strong troubleshooting skills.
- DB2 privileges and RACF.
- z/OS - TSO, ISPF, JCL.
- Tools - DB2 Admin Tool, QMF, File Manager, ChangeMan, ESP.

Why you will love this job:
- You will have the opportunity to work in a growing organization, with ever-growing opportunities for personal growth and one of the highest scores of employee engagement in Amdocs.
- You will be able to apply your specific insights to a variety of projects, overcoming technical challenges while continuing to deepen your area of knowledge.
- You will have the opportunity to work in a multinational environment for the global market leader in its field.

Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.

Posted 1 week ago

Apply

7.0 - 8.0 years

6 - 10 Lacs

Gurugram

Work from Office

Naukri logo

Job ID: 199536 | Required Travel: Minimal | Managerial: No | Location: India - Gurgaon (Amdocs Site)

Who are we?
Amdocs helps those who build the future to make it amazing. With our market-leading portfolio of software products and services, we unlock our customers' innovative potential, empowering them to provide next-generation communication and media experiences for both individual end users and enterprise customers. Our employees around the globe are here to accelerate service providers' migration to the cloud, enable them to differentiate in the 5G era, and digitalize and automate their operations. Listed on the NASDAQ Global Select Market, Amdocs had revenue of $5.00 billion in fiscal 2024. For more information, visit www.amdocs.com.

In one sentence
The DB specialist has ultimate responsibility for physical and applicative database administration activities, supporting the end-to-end life cycle of the product from a database perspective.

What will your job look like?
- You will provide supreme-level database problem-solving services to all stakeholders during the project lifecycle (from development through post-production), clarifying and delivering permanent and efficient solutions to complex and unusual problems.
- You will provide professional DB guidance and coaching to project management and development teams, implementation groups, and customer architects - internal and external Amdocs customers, including stakeholders without database understanding.
- You will design, develop, configure, and administer large and critical database systems, ensuring high performance, and improve highly complex code where conventional approaches do not help.
- You will lead and define tuning of database parameters (physical layout and memory buffers) and promote standardization per product across customers worldwide.
- You will design strategies for DB working processes (e.g. backup and recovery) during the project life cycle and guide them through to implementation; you will build team knowledge capabilities, versatility, and personal skills.
- You will serve as an authority in Amdocs, with an advanced degree of competence and deep knowledge built on rich experience and high aptitude in DB areas, defining DB vision and strategy in the organization and serving as the ultimate level of technical escalation for critical showstopper incidents in customers' production systems with direct business impact.
- You will be a consistent professional (you are the bridge builder; you set the example for others to follow in communication, managing expectations, and building and growing partnerships).

All you need is...
- Bachelor's / B.Sc. in computer science or equivalent.
- 7-8 years' experience as a DBA and SQL knowledge in a software company.
- Significant experience and solid knowledge of RDBMS.
- Experience with UNIX or UNIX variants and scripting.
- Relevant database certifications are required.
- Experience operating within a complex, multi-interface environment.

Why you will love this job:
- You will have the opportunity to work in a growing organization, with ever-growing opportunities for personal growth and one of the highest scores of employee engagement in Amdocs.
- You will be able to apply your specific insights to a variety of projects, overcoming technical challenges while continuing to deepen your area of knowledge.
- You will have the opportunity to work in a multinational environment for the global market leader in its field.

Amdocs is an equal opportunity employer. We welcome applicants from all backgrounds and are committed to fostering a diverse and inclusive workforce.

Posted 1 week ago

Apply

10.0 - 20.0 years

22 - 32 Lacs

Noida, Greater Noida

Hybrid

Naukri logo

Skills Required: SQL DBA, Clustering, Performance Tuning, Azure

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

LinkedIn logo

AI & Machine Learning Intern
📍 Location: Remote (100% Virtual)
📅 Duration: 3 Months
💸 Stipend for Top Interns: ₹15,000
🎁 Perks: Certificate | Letter of Recommendation | Full-Time Offer (Based on Performance)

About INLIGHN TECH
INLIGHN TECH is focused on delivering practical, project-driven learning experiences to help students and graduates build careers in emerging technologies. Our AI & Machine Learning Internship is designed to offer hands-on experience in building intelligent systems and solving real-world problems using data.

🚀 Internship Overview
As an AI & ML Intern, you will work on projects involving machine learning models, data preprocessing, and algorithm development. This internship will equip you with the skills to apply AI techniques in various domains, including natural language processing, computer vision, and predictive analytics.

🔧 Key Responsibilities
- Clean and preprocess datasets for training and testing machine learning models
- Build, train, and evaluate ML models using Python libraries like scikit-learn, TensorFlow, PyTorch, and Keras
- Work on projects involving classification, regression, clustering, NLP, or image processing
- Analyze model performance and optimize results through hyperparameter tuning
- Collaborate with team members to implement AI solutions for real-world scenarios
- Present findings through visualizations, reports, and presentations

✅ Qualifications
- Pursuing or recently completed a degree in Computer Science, Data Science, Engineering, or related fields
- Strong foundation in Python programming and statistics
- Understanding of machine learning algorithms and AI concepts
- Familiarity with Jupyter Notebook, pandas, NumPy, and visualization libraries like Matplotlib/Seaborn
- Bonus: exposure to NLP, deep learning, or AI model deployment tools
- Curiosity, creativity, and a passion for solving problems with data

🎓 What You'll Gain
- Hands-on experience with real datasets and applied ML projects
- Knowledge of industry-standard AI tools and workflows
- A portfolio of AI/ML projects you can showcase to employers
- Internship Certificate upon successful completion
- Letter of Recommendation for outstanding performers
- Opportunity for a Full-Time Offer based on performance

Posted 1 week ago

Apply

4.0 - 9.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Naukri logo

About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Pyramid overview
A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization, or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security, and Personalization rely on. Every Scientist on Target's Data Sciences team can expect to do modeling and data science, software/product development of highly performant code for model performance, and to elevate Target's culture and apply retail domain knowledge.

About the role
As a Lead Data Scientist, you'll influence by interacting with the Data Sciences team, Product teams, Scientist/Engineer individual contributors from other pillars, and business partners. You will perform within the scale and scope of your role by defining solutions, beginning to identify problems to solve, and contributing to Data Sciences and Target's culture by modeling and contributing to that culture. You'll get the opportunity to use your expertise in one or more of the following areas: machine learning, probability theory and statistics, optimization theory, simulation, econometrics, deep learning, natural language processing, or computer vision.

We will look to you to own design and implementation of an algorithmic solution (e.g., a recommendation or forecasting algorithm), including data understanding, feature engineering, model development, validation and testing, and deployment to a production environment. You'll drive development of problem statements that capture the business considerations, define metrics/measurement to validate model performance, and drive feasibility studies covering data requirements and potential solution approaches. You'll evaluate tradeoffs of simple vs. complex models/solutions in determining the right technique to employ for a business problem, and develop and maintain a nuanced understanding of the data generated by the business, including fundamental limitations of the data. You'll leverage your proficiency in one or more approved programming languages (Java, Scala, Python, R), and ensure foundational programming principles (best practices, unit tests, code organization, basics of CI/CD, etc.) are followed in developing the team's products/models. You'll not only stitch together basic data pipelines for a given problem and own design and implementation of individual components within Data Science/Tech applications, but also articulate the technical strategy, the value of technology, and its impact on the business. As you do so, you'll collaborate with engineers, scientists, and business partners/product owners to create algorithmic solutions that are performant and integrated into applications. We'll look to you to mentor and provide technical support within a team, including mentoring junior team members, and to present your work and your team's work to business partners and other Data Sciences teams. With a deeper understanding of your functional area of responsibility, you'll support agile ceremonies, collaborate with peers across multiple products, communicate and collaborate with business partners, and demonstrate an understanding of areas outside your scope of responsibility.

The exciting part of retail? It's always changing! Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs.

About you:
- 4-year degree in a quantitative discipline (Science, Technology, Engineering, Mathematics) and 6+ years of professional experience, or equivalent industry experience
- Master's degree in a quantitative discipline (Science, Technology, Engineering, Mathematics)
- Good knowledge and experience developing optimization, simulation, and statistical models
- Strong analytical thinking skills; ability to creatively solve business problems, innovating new approaches where required
- Strong hands-on programming skills in Python, SQL, Hadoop/Hive; additional knowledge of Spark, Scala, R, Java desired but not mandatory
- Good working knowledge of mathematical and statistical concepts, MILP, algorithms, and computational complexity
- Passion for solving interesting and relevant real-world problems using a data science approach
- Experience in implementing advanced statistical techniques like regression, clustering, PCA, forecasting (time series), etc.
- Able to produce reasonable documents/narratives suggesting actionable insights
- Excellent communication skills; ability to clearly tell data-driven stories through appropriate visualizations, graphs, and narratives
- Self-driven and results-oriented; able to meet tight timelines
- Strong team player with the ability to collaborate effectively across geographies/time zones

Know more about us here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Naukri logo

About us:
As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about TII
At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

Pyramid overview
A role with Target Data Science & Engineering means the chance to help develop and manage state-of-the-art predictive algorithms that use data at scale to automate and optimize decisions. Whether you join our Statistics, Optimization, or Machine Learning teams, you'll be challenged to harness Target's impressive data breadth to build the algorithms that power solutions our partners in Marketing, Supply Chain Optimization, Network Security, and Personalization rely on. Every Scientist on Target's Data Sciences team can expect to do modeling and data science, software/product development of highly performant code for model performance, and to elevate Target's culture and apply retail domain knowledge.

Position Overview:
As a Senior Data Scientist, you will be involved in end-to-end development of Ad-Tech products and capabilities that fulfil strategic priorities and power the growth of Roundel, Target's retail media business. You will leverage your understanding of data and algorithms to build prototypes and run experiments to evaluate them against given specifications. Following agile processes, you will implement and deploy scalable data science solutions using MLOps best practices across the model development life cycle. You will collaborate with product and business partners to seek feedback on the effectiveness of solutions and identify future opportunities for enhancement. You will work with your peers to create a well-maintainable and tested codebase with relevant documentation.

The exciting part of retail and media? It's always changing! Core responsibilities of this job are described within this job description. Job duties may change at any time due to business needs.

About You:
- 4-year degree in a quantitative discipline (Science, Technology, Engineering, Mathematics) or equivalent experience
- 3+ years of professional experience or equivalent industry experience
- Good knowledge and experience developing optimization, simulation, and statistical models
- Strong analytical thinking skills; ability to creatively solve business problems, innovating new approaches where required
- Strong hands-on programming skills in Python, SQL, Spark, Hadoop/Hive
- Good working knowledge of mathematical and statistical concepts, MILP, algorithms, and computational complexity
- Passion for solving interesting and relevant real-world problems using a data science approach
- Experience in implementing advanced statistical techniques like regression, clustering, PCA, forecasting (time series), etc.
- Able to produce reasonable documents/narratives suggesting actionable insights
- Excellent communication skills; ability to clearly tell data-driven stories through appropriate visualizations, graphs, and narratives
- Self-driven and results-oriented; able to meet tight timelines
- Strong team player with the ability to collaborate effectively across geographies/time zones

Know more about us here:
Life at Target - https://india.target.com/
Benefits - https://india.target.com/life-at-target/workplace/benefits
Culture - https://india.target.com/life-at-target/belonging

Posted 1 week ago

Apply

3.0 - 5.0 years

1 - 4 Lacs

Hyderabad

Work from Office

Naukri logo

Job Information
Job Opening ID: ZR_1899_JOB
Date Opened: 29/04/2023
Industry: Technology
Work Experience: 3-5 years
Job Title: Phantom/SOAR
City: Hyderabad
Province: Telangana
Country: India
Postal Code: 500081
Number of Positions: 5

- Phantom/SOAR and Python experience with good development skills
- Good in ITIS; understanding and building playbooks within an on-prem, multi-site clustered Splunk environment
- Practical experience in monitoring and tuning playbooks and use cases
- Good knowledge of creating custom apps with dashboards/reports/alerts, and a demonstrated understanding of Splunk apps
- Ownership of delivery for small to large Splunk onboarding projects
- Ability to automate repetitive tasks and reduce noise
- Implementing and supporting Phantom, with good Python, Red Hat, and Windows experience

Location: Pan India

I'm interested

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Pune

Work from Office

Naukri logo

Job Information
Job Opening ID: ZR_1761_JOB
Date Opened: 21/03/2023
Industry: Technology
Work Experience: 5-8 years
Job Title: Database Management Specialist
City: Pune
Province: Maharashtra
Country: India
Postal Code: 411013
Number of Positions: 1

Mandatory skills:
- SQL Server 2014 - SQL Server 2019 experience
- Experience in database administration, including installation, configuration, user access management, backup/recovery, monitoring and performance tuning, space utilization, DB migration, DB mirroring, and partitioning
- Strong knowledge of logical and physical database design activities and DDL constructs (data definition language constructs)
- Ticketing management experience
- Troubleshooting and analytical skills, attention to detail
- Experience with OEM, Idera, or other database monitoring tools
- Oracle version 19 and Exadata experience
- PowerShell or other programming experience for automation of SQL installs and other processes
- Azure, AWS, and cloud database management experience

I'm interested

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Hyderabad Area

On-site

LinkedIn logo

Description Role Description Our Tech and Product team is tasked with innovating and maintaining a massive distributed systems engineering platform that ships hundreds of features to production for tens of millions of users across all industries every day. Our users count on our platform to be highly reliable, lightning fast, supremely secure, and to preserve all of their customizations and integrations every time we ship. Our platform is deeply customizable to meet the differing demands of our vast user base, creating an exciting environment filled with complex challenges for our hundreds of agile engineering teams every day. Required Skills And Experience Salesforce is looking for Site Reliability Engineers to build and manage a multi-substrate kubernetes and microservices platform which powers Core CRM and a growing set of applications across Salesforce. This platform provides the ability to develop and deploy microservices quickly and efficiently, accelerating their path to production.In this role, You are responsible for the high availability of a large fleet of clusters running various technologies like Kubernetes, software load balancers, service mesh and so on. You’ll gain valuable experience troubleshooting real production issues which will expand your knowledge on the architecture of k8s ecosystem services and internals. You will contribute code wherever possible to drive improvement You will drive automation efforts in Python/Golang/Terraform/Spinnaker/Puppet/Jenkins to eliminate manual work with day-to-day operations. You will help improve the visibility of the platform by implementing necessary monitoring and metrics. You’ll implement self-healing mechanisms to proactively fix issues to reduce manual labor. You will get a chance to improve your communication and collaboration skills working with various other Infrastructure teams across Salesforce. You will be interacting with a highly innovative and creative team of developers and architects. 
You will evaluate new technologies to solve problems as neededYou are the ideal candidate if you have a passion for live site service ownership. You have demonstrated a strong ability to manage large distributed systems. You are comfortable with troubleshooting complex production issues that span multiple disciplines. You bring a solid understanding of how infrastructure software components work. You are able to automate tasks using a modern high-level language. You have good written and spoken communication skills.Required Skills:Experience operating large-scale distributed systems, especially in cloud environments Excellent troubleshooting skills with the ability to learn new technologies in complex distributed systems Strong working experience with Linux Systems Administration. Good knowledge of linux internals. Good experience in any of the scripting/programming languages: Python, GoLang etc ., Basic knowledge of Networking protocols and components: TCP/IP Stack, Switches, Routers, Load Balancers. Experience in any of Puppet, Chef, Ansible or other devops tools. Experience in any of the monitoring tools like Nagios, grafana, Zabbix etc., Experience with Kubernetes, Docker or Service Mesh Experience with AWS, Terraform, Spinnaker A continuous learner and a critical thinker A team player with great communication skills Areas where you may be working on include highly scalable, highly performant distributed systems with highly available and durable data storage capabilities that ensure high availability of the stack above that includes databases. A thorough understanding of distributed systems, system programming, working with system resources is required. Practical knowledge for challenges regarding clustering solutions, hands-on experience in deploying your code in the public cloud environments, working knowledge of Kubernetes and working with APIs provided by various public cloud vendors to handle data are highly desired skills. 
Benefits & Perks Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training with Trailhead.com Exposure to executive thought leaders and regular 1:1 coaching with leadership Volunteer opportunities and participation in our 1:1:1 model for giving back to the community For more details, visit https://www.salesforcebenefits.com/

Posted 1 week ago


4.0 years

0 Lacs

Gurgaon, Haryana, India

Remote


About This Role BlackRock Overview: BlackRock is one of the world’s preeminent asset management firms and a premier provider of global investment management, risk management and advisory services to institutional, intermediary and individual investors around the world. BlackRock offers a range of solutions — from rigorous fundamental and quantitative active management approaches aimed at maximizing outperformance to highly efficient indexing strategies designed to gain broad exposure to the world’s capital markets. Our clients can access our investment solutions through a variety of product structures, including individual and institutional separate accounts, mutual funds and other pooled investment vehicles, and the industry-leading iShares® ETFs. Aladdin Financial Engineering Group (AFE) AFE is a diverse and global team with a keen interest and expertise in all things related to technology and financial analytics. The group is responsible for the research and development of quantitative financial and behavioral models and tools across many different areas – single-security pricing, prepayment models, risk, return attribution, liquidity, optimization and portfolio construction, scenario analysis and simulations, etc. – and covering all asset classes. The group is also responsible for the technology platform that delivers those models to our internal partners and external clients, and for their integration with Aladdin. AFE conducts leading research in the areas above, delivering state-of-the-art models. AFE publishes applied scientific research frequently, and our members present regularly at leading industry conferences. AFE engages constantly with the sales team in client visits and meetings. Job Description You will help conduct research to build quantitative financial models and portfolio analytics that help manage the money of the world’s largest asset manager. You can bring your whole self to the job.
From the top of the firm down, we embrace the values, identities and ideas brought by our employees. We are looking for curious people with a strong background in quantitative research, data science and machine learning, who have excellent problem-solving skills and an insatiable appetite for learning and innovating, adding to BlackRock’s vibrant research culture. If any of this excites you, we are looking to expand our team. We currently have a quant researcher role with the AFE Investment AI (IAI) Team. The securities market is undergoing a massive transformation as the industry embraces machine learning and, more broadly, AI, to help evolve the investment process. Pioneering this journey at BlackRock, the team delivers applied AI investment analytics to help both BlackRock and Aladdin clients achieve scale through automation while safeguarding alpha generation. The IAI team combines AI/ML methodology and technology skills with deep subject matter expertise in fixed income, equity, and multi-asset markets, and the buy-side investment process. We are building next-generation liquidity, security similarity and pricing models leveraging our expertise in quantitative research, data science and machine learning. The models we build use innovative machine learning approaches and cutting-edge econometric/statistical methods and tools; they have real practical value and are used by traders, portfolio managers and risk managers representing different investment styles (fundamental vs. quantitative) and across different investment horizons. Research is conducted predominantly in Python and Scala, and implemented into production by a separate, dedicated team of developers.
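To give a flavor of the time-series side of such liquidity modelling (a toy baseline, not BlackRock's actual methodology), a naive one-step-ahead forecast in Python might look like this; the series values are invented:

```python
# Toy baseline forecast of the kind real liquidity models are measured
# against: predict the next value as the mean of the last `window`
# observations. Production models add features, regularisation, and
# rigorous cross-validation.

def moving_average_forecast(series, window=3):
    """One-step-ahead forecast: mean of the trailing `window` points."""
    if len(series) < window:
        raise ValueError("need at least `window` observations")
    tail = series[-window:]
    return sum(tail) / window

daily_volume = [100.0, 120.0, 110.0, 130.0, 125.0]  # illustrative trade volumes
print(moving_average_forecast(daily_volume))  # (110 + 130 + 125) / 3
```

Beating a baseline like this is the usual first sanity check before deploying anything more elaborate.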
These models have a huge footprint of usage across the entire Aladdin client base, and so we place special emphasis on scalability and ensuring adherence to BlackRock’s rigorous standards of model governance and control. Background And Responsibilities We are looking to hire a quant researcher with 4+ years’ experience to join the AFE Investment AI team focusing on Trading and Liquidity, working closely with other data scientists/researchers to support Risk Managers, Portfolio Managers and Traders. We build cutting-edge liquidity analytics using a wide range of ML algorithms and a broad array of technologies (Python, Scala, Spark/Hadoop, GCP, Azure). This role is a great opportunity to work closely with the Portfolio Managers, Risk Managers and Trading teams, spanning areas such as: Perform analysis of large data sets comprising market data, trading data and derived analytics. Evaluate trading data, including pre-processing, feature engineering, variable selection, dimensionality reduction, etc. Leverage machine learning to extract insights from data and work with investment managers to put those into action. Design and develop models/ML solutions for Trading & Liquidity. Implement models and integrate them into the Aladdin analytical system in accordance with BlackRock’s model governance policy. Qualifications B.Tech / B.E. / M.Sc. degree in a quantitative discipline (Mathematics, Physics, Computer Science, Finance or a similar area). M.Tech. / PhD is a plus. Strong background in Mathematics, Statistics, Probability, Linear Algebra. Knowledgeable about data mining, data analytics and data modeling. Confident in building models to solve problems including time series forecasting and clustering, with hands-on experience with a range of statistical and machine learning approaches. Ability to work independently and efficiently in a fast-paced and team-oriented environment. Knowledge of fixed income and credit instruments and markets is a plus.
Previous experience or knowledge in market liquidity is not required but a big plus. For professionals with no prior financial industry experience, this position is a unique opportunity to gain in-depth knowledge of the asset management process in a world-class organization. Our Benefits To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about. Our hybrid work model BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock. About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. 
It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.

Posted 1 week ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description The Kaleris IT Infrastructure Engineer is responsible for providing IT infrastructure support to customers using N4 software, both onsite and cloud-hosted. This role involves working with a variety of customers to assist with strategic planning, technical design, and implementation, including on-premises/cloud hosting design, firewall setup and configuration, security audits, health checks, network hardware and software, server sizing and maintenance (such as patch management and antivirus), and WAN/communication links. Responsibilities Maintain and troubleshoot customer IT infrastructure via Managed Services, including network hardware and software, servers, disaster recovery, storage, WAN/communication links, and cloud hosting. Design and maintain N4 on-premises and cloud environments for hosting N4 TOS software. Monitor and diagnose N4 TOS infrastructure incidents impacting software and underlying systems. Consult and troubleshoot customer-reported issues with N4 TOS software and infrastructure environment. Review and administer customer hardware/cloud setups and configurations. Respond to and resolve customer issues in accordance with Service Level Agreements (SLAs). Be on standby for critical P1 incidents, with availability to work weekends or shifts as required to support customers 24/7. Requirements Min 3 years of experience Experience with server centralization, consolidation, and virtualization of servers, storage, and overall IT architecture. Deep technical knowledge of current network hardware, protocols, and internet standards. Strong understanding of underlying operating systems and their configurations. Good understanding of database technologies, including scaling, redundancy, and backup. Experience with network capacity planning, network security principles, and best practices. Ability to conduct research into networking issues and products as required. Excellent hardware troubleshooting experience. 
Expertise/qualification in: load balancers, clustering, Tomcat, Oracle 11g or 11g RAC, SQL and MySQL databases, Red Hat Linux 5, RAID, Microsoft Server 2008, ActiveMQ, Microsoft SQL Server 2012, JMS, and firewalls. Knowledge, Skills, And Abilities Experience in the maritime or logistics industry is a significant plus. Familiarity with Navis TOS is a big advantage. Experience working in distributed virtual teams. Demonstrates a positive attitude and strong work ethic. Meticulous organizational and multitasking skills. Excellent customer service and follow-up skills. Ability to work well with others and follow instructions. Multilingual capabilities are a plus. Kaleris is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
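For the server-sizing and RAID topics this role touches on, a small Python helper illustrates the standard usable-capacity formulas; the disk counts and sizes are examples, and real arrays lose additional space to metadata and hot spares:

```python
# Usable capacity of common RAID layouts given N identical disks of
# `disk_tb` terabytes each. Standard textbook formulas only; vendor
# implementations reserve extra space.

def usable_capacity_tb(level, n_disks, disk_tb):
    if level == 0:
        return n_disks * disk_tb            # striping, no redundancy
    if level == 1:
        return (n_disks // 2) * disk_tb     # mirrored pairs
    if level == 5:
        if n_disks < 3:
            raise ValueError("RAID 5 needs >= 3 disks")
        return (n_disks - 1) * disk_tb      # one disk's worth of parity
    if level == 6:
        if n_disks < 4:
            raise ValueError("RAID 6 needs >= 4 disks")
        return (n_disks - 2) * disk_tb      # two disks' worth of parity
    raise ValueError("unsupported RAID level")

print(usable_capacity_tb(5, 6, 4.0))  # 20.0 TB usable from six 4 TB disks
```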

Posted 1 week ago


4.0 - 7.0 years

6 - 9 Lacs

Mumbai

Work from Office


Key Responsibilities: Design, implement, and manage Azure Kubernetes Service (AKS) clusters. Monitor and optimize the performance of AKS clusters. Troubleshoot and resolve issues related to AKS and containerized applications. Implement security measures to protect AKS clusters and containerized applications. Collaborate with development teams to support application deployment and maintenance. Maintain documentation for AKS configurations, processes, and procedures. Automate deployment, scaling, and management of containerized applications using AKS. Participate in on-call rotation for after-hours support. Upgrade Kubernetes nodes.
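The node-upgrade responsibility above can be sketched as a planning step in Python. The node names and versions are illustrative; in practice this inventory would come from `az aks` or the Kubernetes API:

```python
# Plan a rolling node upgrade: list nodes whose Kubernetes version is
# still below the target. Version strings are compared numerically.

def parse_version(v):
    """'1.28.3' -> (1, 28, 3) so versions compare numerically, not lexically."""
    return tuple(int(part) for part in v.split("."))

def nodes_needing_upgrade(nodes, target):
    tgt = parse_version(target)
    return [name for name, ver in nodes.items() if parse_version(ver) < tgt]

pool = {"aks-np1-0": "1.27.9", "aks-np1-1": "1.28.3", "aks-np1-2": "1.27.9"}
print(nodes_needing_upgrade(pool, "1.28.3"))  # ['aks-np1-0', 'aks-np1-2']
```

Numeric parsing matters: as plain strings, "1.9.0" would wrongly sort after "1.28.0".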

Posted 1 week ago


50.0 years

0 Lacs

New Delhi, Delhi, India

On-site


Who is ERM? ERM is a leading global sustainability consulting firm, committed for nearly 50 years to helping organizations navigate complex environmental, social, and governance (ESG) challenges. We bring together a diverse and inclusive community of experts across regions and disciplines, providing a truly multicultural environment that fosters collaboration, professional growth, and meaningful global exposure. As a people-first organization, ERM values well-being, career development, and the power of collective expertise to drive sustainable impact for our clients—and the planet. Introducing our new Global Delivery Centre (GDC) Our Global Delivery Centre (GDC) in India is a unified platform designed to deliver high-value services and solutions to ERM’s global clientele. By centralizing key business and consulting functions, we streamline operations, optimize service delivery, and enable our teams to focus on what matters most—advising clients on sustainability challenges with agility and innovation. Through the GDC, you will collaborate with international teams, leverage emerging technologies, and further enhance ERM’s commitment to excellence—amplifying our shared mission to make a lasting, positive impact. Job Objective ERM is seeking a Modelling & Data Analyst to develop algorithms, financial models, and analytical tools that link non-financial ESG data with financial outcomes. This role is ideal for a professional with a strong quantitative background, proficient in statistical modelling, machine learning, and financial analysis. The candidate will work on transforming sustainability materiality and maturity frameworks into automated, scalable models that assess performance and valuation impacts. This is a non-client-facing offshore role focused on data-driven ESG research and tool development. The Ideal Candidate You bring a robust background in financial modelling and valuation with a deep passion for sustainability (e.g. 
climate, nature, employee wellbeing, sustainable revenue). You have demonstrated success in integrating ESG factors into transaction analysis and investment decision-making. With experience in investment banking, strategy consulting, or transaction advisory—and preferably exposure to private equity—you are adept at turning complex and qualitative ESG concepts into actionable financial insights. You will be able to communicate with senior stakeholders and provide thought leadership in this evolving space. RESPONSIBILITIES: Quantitative Research & Algorithm Development Design data-driven models that quantify the impact of ESG factors on financial performance. Develop statistical algorithms that integrate materiality and maturity definitions into predictive financial models. Leverage machine learning techniques (e.g., regression analysis, clustering, time-series forecasting) to identify trends in ESG data. Data Analysis & Model Development Build automated financial modelling tools that incorporate non-financial (ESG) data and financial metrics. Develop custom ESG performance indicators that can be used in due diligence, exit readiness, and investment decision-making. Standardize ESG data inputs and apply weightings/scoring methodologies to determine financial relevance. Tool Development & Automation Work with developers to code ESG models into dashboards or automated financial tools. Implement AI/ML techniques to enhance model predictive capabilities. Ensure models are scalable and adaptable across multiple industries and investment types. Data Management & Validation Collect, clean, and structure large datasets from financial reports, ESG databases, and regulatory filings. Conduct sensitivity analyses to validate model accuracy and effectiveness. Ensure consistency in ESG metrics and definitions across all analytical frameworks.
REQUIRED SKILLS & EXPERIENCE: Educational Background Master’s in Finance, Econometrics, Data Science, Quantitative Economics, Mathematics, Statistics, or a related field. CFA, FRM, or other financial analysis certifications are a plus. Technical & Analytical Proficiency Financial & Statistical Modelling: Advanced Excel, Python, R, or MATLAB for quantitative research and financial modelling. Machine Learning & AI: Proficiency in ML algorithms for forecasting, clustering, and risk modelling. Data Analysis & Automation: Experience with SQL, Power BI, or other data visualization tools. ESG & Financial Integration: Understanding of ESG materiality frameworks (SASB, MSCI, S&P, etc.) and their impact on valuations. Professional Experience Minimum 3-5 years in quantitative research, financial modelling, or ESG data analysis. Experience in building proprietary financial tools/models for investment firms or financial consultancies. Strong background in factor modelling, risk assessment, and alternative data analysis. Personal Attributes Highly analytical, structured thinker with attention to detail. Ability to work independently in an offshore role, managing multiple datasets and models. Passion for quantifying ESG impact in financial terms.
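The weightings/scoring step mentioned in the responsibilities above can be sketched in a few lines of Python. The indicator names and materiality weights are invented for illustration; a real framework would derive them from a materiality standard such as SASB:

```python
# Combine normalised ESG indicator scores (0..1) into one
# materiality-weighted score. Indicators and weights are hypothetical.

def weighted_esg_score(scores, weights):
    """scores, weights: dicts keyed by indicator; weights need not sum to 1."""
    total_w = sum(weights[k] for k in scores)
    if total_w == 0:
        raise ValueError("weights sum to zero")
    return sum(scores[k] * weights[k] for k in scores) / total_w

indicators = {"emissions": 0.8, "governance": 0.6, "labour": 0.9}
materiality = {"emissions": 0.5, "governance": 0.3, "labour": 0.2}
print(round(weighted_esg_score(indicators, materiality), 3))  # 0.76
```

Normalising by the weight total keeps scores comparable across companies that report different indicator subsets.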

Posted 1 week ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At Juniper, we believe the network is the single greatest vehicle for knowledge, understanding, and human advancement the world has ever known. To achieve real outcomes, we know that experience is the most important requirement for networking teams and the people they serve. Delivering an experience-first, AI-Native Network pivots on the creativity and commitment of our people. It requires a consistent and committed practice, something we call the Juniper Way. Job Title: Software Engineer III Experience: 2+ years of experience The AIOps team’s mission is to use advanced analytics, including AI/ML, to develop end-to-end solutions to automate (detect, remediate) networking workflows for our customers, and help extend AI/ML across the Juniper portfolio. We are looking for an experienced engineer to join our growing data science team of AI/ML and data-at-scale engineers. Our ideal candidate brings skills and experience in developing performant inferencing implementations, practices good data science hygiene when developing ML models, and is a team player. As a data scientist, you will collaborate with product managers and domain specialists to identify real customer problems and use your background in NLP/ML to develop solutions that scale with terabytes of data.
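To give a flavor of the anomaly-detection baselines common in AIOps pipelines (a sketch, not Juniper's implementation), here is a z-score detector over an invented latency series; production systems would use richer models with seasonal baselines or tree-based detectors:

```python
# Flag metric samples whose z-score exceeds a threshold. The threshold
# of 2.5 is an assumed policy; for small samples the maximum attainable
# z-score is bounded, so very high thresholds never fire.
import statistics

def anomalies(samples, threshold=2.5):
    """Return indices of samples more than `threshold` std devs from the mean."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)
    if sigma == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

latency_ms = [10, 11, 9, 10, 12, 11, 10, 95]  # one obvious spike at the end
print(anomalies(latency_ms))  # [7]
```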
Qualifications/Requirements: BS/MS in Computer Science, Data Science, Electrical Engineering, Statistics, Applied Math or equivalent fields with a strong mathematical background General understanding of machine learning techniques and algorithms, including clustering, anomaly detection, optimization, neural networks, graph ML, etc. Experience building data-science-driven solutions including data collection, feature selection, model training, post-deployment validation Strong hands-on coding skills (preferably in Python) processing large-scale data sets and developing machine learning models Familiarity with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow Works well in a team setting and is self-driven Desired Experience: Experience with some of the following or equivalents: AWS, Flink, Spark, Kafka, Elasticsearch, Kubeflow Knowledge of NLP technology Demonstrable problem-solving ability Conceptual understanding of system design concepts Responsibilities: Collaborate with the team to understand features, work with domain experts to identify relevant “signals” during feature engineering, and deliver generic and performant ML solutions Keep up to date with the newest technology trends Communicate results and ideas to key decision makers Implement new statistical or other mathematical methodologies as needed for specific models or analysis Optimize joint development efforts through appropriate database use and project design About Juniper Networks Juniper Networks challenges the inherent complexity that comes with networking and security in the multicloud era. We do this with products, solutions and services that transform the way people connect, work and live. We simplify the process of transitioning to a secure and automated multicloud environment to enable secure, AI-driven networks that connect the world. Additional information can be found at Juniper Networks (www.juniper.net) or connect with Juniper on Twitter, LinkedIn and Facebook.
WHERE WILL YOU DO YOUR BEST WORK? Wherever you are in the world, whether it's downtown Sunnyvale or London, Westford or Bengaluru, Juniper is a place that was founded on disruptive thinking - where colleague innovation is not only valued, but expected. We believe that the great task of delivering a new network for the next decade is delivered through the creativity and commitment of our people. The Juniper Way is the commitment to all our colleagues that the culture and company inspire their best work-their life's work. At Juniper we believe this is more than a job - it's an opportunity to help change the world. At Juniper Networks, we are committed to elevating talent by creating a trust-based environment where we can all thrive together. If you think you have what it takes, but do not necessarily check every single box, please consider applying. We’d love to speak with you. Additional Information for United States jobs: ELIGIBILITY TO WORK AND E-VERIFY In compliance with federal law, all persons hired will be required to verify identity and eligibility to work in the United States and to complete the required employment eligibility verification form upon hire. Juniper Networks participates in the E-Verify program. E-Verify is an Internet-based system operated by the Department of Homeland Security (DHS) in partnership with the Social Security Administration (SSA) that allows participating employers to electronically verify the employment eligibility of new hires and the validity of their Social Security Numbers. Information for applicants about E-Verify / E-Verify Información en español: This Company Participates in E-Verify / Este Empleador Participa en E-Verify Immigrant and Employee Rights Section (IER) - The Right to Work / El Derecho a Trabajar E-Verify® is a registered trademark of the U.S. Department of Homeland Security. Juniper is an Equal Opportunity workplace. 
We do not discriminate in employment decisions on the basis of race, color, religion, gender (including pregnancy), national origin, political affiliation, sexual orientation, gender identity or expression, marital status, disability, genetic information, age, veteran status, or any other applicable legally protected characteristic. All employment decisions are made on the basis of individual qualifications, merit, and business need.

Posted 1 week ago


4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Find your future at United! We’re reinventing what our industry looks like, and what an airline can be – from the planes we fly to the people who fly them. When you join us, you’re joining a global team of 100,000+ connected by a shared passion with a wide spectrum of experience and skills to lead the way forward. Achieving our ambitions starts with supporting yours. Evolve your career and find your next opportunity. Get the care you need with industry-leading health plans and best-in-class programs to support your emotional, physical, and financial wellness. Expand your horizons with travel across the world’s biggest route network. Connect outside your team through employee-led Business Resource Groups. Create what’s next with us. Let’s define tomorrow together. Job Overview And Responsibilities The United offshore SQL DBA team supports critical after-hours work for timely releases and overnight patching activities, along with an 8pm-8am rotational on-call for critical DB operations monitoring and incident support. The SQL DBA team in India works with offshore development teams on code review and troubleshooting of performance issues, essential to United’s 24x7 technology support structure. The team is actively engaged in migration projects for SQL desupported-version remediation and supporting upgrades. The team also works on AWS setup and support across all areas of cloud migrations and production support.
SQL Server Production Support: Off-hours support for all Tier 1 – Tier 5 SQL databases and instances. Create physical database structures based on physical design for development, test, and production environments. Coordinate with systems engineers to configure servers for DBMS product installation and database creation. Install, configure, and maintain DBMS product software on database and application servers. Assist in consultation to application development teams on DBMS product technical issues and techniques. Implement monitoring procedures to maximize availability and performance of the database, while meeting defined SLAs. Investigate, troubleshoot, and resolve database problems. Communicate the required downtime with the application development teams and systems engineers to implement approved changes. Identify, define and implement database backup/recovery and security strategies. Install and support DBMS (Database Management System) software and tools. Perform various database activities including monitoring, tuning, and troubleshooting, with appropriate supervision if required. Review deployment for all SQL database changes. Complete pre-deployment code reviews with application teams as requested. Review and provide feedback on all SQL code updates. Work with deployment managers on dates and times for releases, including assignments. Performance tuning and code review. Migrations and DB setup (Cloud-AWS, SQL). Patching of all SQL Server and some Couchbase instances. Work with application teams to create schedules. Send advance and timely notifications for database instances to be patched. Conduct database patching, including any troubleshooting and validation post-patching. Code release and technical documentation. Backup, recovery and DR. This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt.
Ltd - a wholly owned subsidiary of United Airlines Inc. Qualifications What’s needed to succeed (Minimum Qualifications): Bachelor's degree or 4 years of relevant work experience in Computer Science, Engineering, or a related discipline Microsoft SQL Server Certification 5 years of related experience Proficient in SQL development and administration disciplines with current hands-on experience with the latest SQL Server releases including SQL 2019, 2017, 2016 Strong background and experience with all BC and DR capabilities of Microsoft SQL Server including Always On, Mirroring, Log Shipping, and Clustering, with a practical understanding of other infrastructure BC/DR capabilities Leverage metrics to drive capacity planning and trending to proactively identify potential problems and mitigate them before they result in customer impact Understand the place of automation and standardization when delivering stable, maintainable, and performant database services at scale Perform platform, database, and query optimization Must be legally authorized to work in India for any employer without sponsorship Must be fluent in English (written and spoken) Successful completion of interview required to meet job qualification Reliable, punctual attendance is an essential function of the position What will help you propel from the pack (Preferred Qualifications): Master's degree in Computer Science, Engineering, or a related discipline Microsoft/AWS certifications on the DB track preferred Hands-on experience with AWS native databases, compute, storage, monitoring technologies, and continuous integration pipelines Experience implementing automation of Microsoft SQL Server deployment, maintenance, and support activities preferred Collaborate both vertically and horizontally to evolve overall database services and technology strategies Experience supporting SSAS, SSIS, and SSRS Very large database (10+ TB) experience preferred Experience with PowerShell or other scripting languages a plus
Experience with PCI, SOX, GDPR, and SQL auditing a plus. Ability to support United's 24x7 operations databases. Quick learner of new technology and guidelines, with a flexible, positive attitude; a team player capable of independent decision-making. GGN00001993
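The advance-notification step for database patching described in this listing can be sketched in Python. The reminder lead times below are assumptions for illustration, not United policy:

```python
# Given a patch date, compute the dates on which advance reminders
# should be sent to application teams.
from datetime import date, timedelta

LEAD_DAYS = (14, 7, 1)  # illustrative reminder schedule (assumed policy)

def notification_dates(patch_date, lead_days=LEAD_DAYS):
    """Dates to send reminders ahead of `patch_date`."""
    return [patch_date - timedelta(days=d) for d in lead_days]

patch_window = date(2024, 6, 21)
for d in notification_dates(patch_window):
    print(d.isoformat())  # 2024-06-07, 2024-06-14, 2024-06-20
```

A scheduler would compare each computed date against today and enqueue the matching notifications.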

Posted 1 week ago


5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


What We Offer At Magna, you can expect an engaging and dynamic environment where you can help to develop industry-leading automotive technologies. We invest in our employees, providing them with the support and resources they need to succeed. As a member of our global team, you can expect exciting, varied responsibilities as well as a wide range of development prospects. Because we believe that your career path should be as unique as you are. Group Summary Transforming mobility. Making automotive technology that is smarter, cleaner, safer and lighter. That’s what we’re passionate about at Magna Powertrain, and we do it by creating world-class powertrain systems. We are a premier supplier for the global automotive industry with full capabilities in design, development, testing and manufacturing of complex powertrain systems. Our name stands for quality, environmental consciousness, and safety. Innovation is what drives us and we drive innovation. Dream big and create the future of mobility at Magna Powertrain. Job Responsibilities Job Introduction In this challenging and interesting position, you are the expert for all topics related to databases. You will be part of an international team, that ensures the smooth and efficient operation of various database systems, including Microsoft SQL Server, Azure SQL, Oracle, DB2, MariaDB, and PostgreSQL. Your responsibilities include providing expert support for database-related issues, troubleshooting problems promptly, and collaborating with users and business stakeholders to achieve high customer satisfaction. Your expertise in cloud database services and general IT infrastructure will be crucial in supporting the development of the future data environment at Magna Powertrain. Major Responsibilities Responsible for ensuring the smooth and efficient operation of all database systems, including but not limited to Microsoft SQL Server, Azure SQL, Oracle, DB2, MariaDB, PostgreSQL. 
Provide expert support for database-related issues, troubleshoot and resolve problems quickly as they arise to ensure minimal disruption. Deliver professional assistance for database-related requests, working collaboratively with users and business stakeholders to achieve high customer satisfaction. Manage the installation, implementation, configuration, administration and decommission of database systems. Plan and execute database upgrades, updates, migrations, and implement changes, new patches and versions when required. Monitor database systems, database activities and overall database performance proactively, to identify issues and implement solutions to optimize performance. Develop and implement backup and recovery strategies, execute backups and restores to ensure data integrity and availability across all database systems. Perform database tuning and optimization, including indexing, query optimization, and storage management. Implement and maintain database security measures, including user access controls, encryption, and regular security audits to protect sensitive data from unauthorized access and breaches. Create and maintain proper documentation for all database systems and processes. Ensure constant evaluation, analysis and modernization of the database systems. Knowledge and Education Bachelor’s degree in computer science / information technology, or equivalent (Master’s preferred). Work Experience Minimum 5-8 years of proven experience as a database administrator in a similar position. Excellent verbal and written communication skills in English. German language skills are optional, but of advantage. Skills And Competencies We are looking for a qualified person with: In-depth expertise of database concepts, theory and best practices including but not limited to high availability/clustering, replication, indexing, backup and recovery, performance tuning, database security, data integrity, data modeling and query optimization. 
Expert knowledge of Microsoft SQL Server and its components, including but not limited to Failover Clustering, SQL Server Integration Services (SSIS), SQL Server Reporting Services (SSRS), and SQL Server Analysis Services (SSAS). Excellent knowledge of various database management systems, including but not limited to Oracle, IBM DB2, MariaDB and PostgreSQL. Familiarity with further database management systems (e.g. MySQL, MongoDB, Redis, etc.) is an advantage. Extensive expertise in Microsoft Azure database services (Azure SQL Databases, Azure SQL Managed Instances, SQL Server on Azure VMs). Proficiency with other major cloud platforms such as AWS or Google Cloud, as well as experience with their cloud database services (e.g. Amazon RDS, Google Cloud SQL), is an advantage. Comprehensive understanding of cloud technologies, including but not limited to cloud architecture, cloud service models and cloud security best practices. Good general knowledge of IT infrastructure, networking, firewalls and storage systems. High proficiency in T-SQL and other query languages. Knowledge of other scripting languages (e.g. Python, PowerShell, Visual Basic, etc.) is an advantage. Experience with Databricks and similar data engineering tools for big data processing, analytics, and machine learning is an advantage. A working knowledge of Microsoft Power Platform tools including PowerApps, Power Automate, and Power BI is an advantage. Excellent analytical and problem-solving skills and strong attention to detail. Ability to work effectively in an intercultural team, strong organizational skills, and high self-motivation. Work Environment Regular overnight travel 10-25% of the time For dedicated and motivated employees, we offer an interesting and diversified job within a dynamic global team together with individual and functional development in the professional environment of a globally acting business.
Fair treatment and a sense of responsibility towards employees are core principles of the Magna culture. We strive to offer an inspiring and motivating work environment. Awareness, Unity, Empowerment At Magna, we believe that a diverse workforce is critical to our success. That’s why we are proud to be an equal opportunity employer. We hire on the basis of experience and qualifications, and in consideration of job requirements, regardless of, in particular, color, ancestry, religion, gender, origin, sexual orientation, age, citizenship, marital status, disability or gender identity. Magna takes the privacy of your personal information seriously. We discourage you from sending applications via email or traditional mail to comply with GDPR requirements and your local Data Privacy Law. Worker Type Regular / Permanent Group Magna Powertrain

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Years of Experience: Candidates with 4+ years of hands-on experience Position: Senior Associate Industry: Telecom / Network Analytics / Customer Analytics Required Skills: Successful candidates will have demonstrated the following skills and characteristics: Must Have Proven experience with telco data including call detail records (CDRs), customer churn models, and network analytics Deep understanding of predictive modeling for customer lifetime value and usage behavior Experience working with telco clients or telco data platforms (e.g., Amdocs, Ericsson, Nokia, AT&T) Proficiency in machine learning techniques, including classification, regression, clustering, and time-series forecasting Strong command of statistical techniques (e.g., logistic regression, hypothesis testing, segmentation models) Strong programming in Python or R, and SQL with telco-focused data wrangling Exposure to big data technologies used in telco environments (e.g., Hadoop, Spark) Experience working in the telecom industry across domains such as customer churn prediction, ARPU modeling, pricing optimization, and network performance analytics Strong communication skills to interface with technical and business teams Nice To Have Exposure to cloud platforms (Azure ML, AWS SageMaker, GCP Vertex AI) Experience working with telecom OSS/BSS systems or customer segmentation tools Familiarity with network performance analytics, anomaly detection, or real-time
data processing Strong client communication and presentation skills Roles And Responsibilities Assist in analytics projects within the telecom domain, driving design, development, and delivery of data science solutions Develop and execute project and analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize data sources to be used, datasets to be acquired, and data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement necessary quality control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability for deployments Validate analysis outcomes and recommendations with all stakeholders including the client team Build storylines and make presentations to the client team and/or PwC project leadership team Contribute to knowledge and firm building activities Professional And Educational Background BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
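The churn-prediction and ARPU-modeling skills named in this listing can be sketched in miniature. A hedged toy example (all field names, numbers, and the 50% decline threshold are invented for illustration; a real churn model would use logistic regression or survival analysis on CDR-derived features):

```python
# Toy telco analytics sketch: ARPU plus a naive usage-decline churn flag.
# Thresholds and data are hypothetical, not a production methodology.

def arpu(revenue: float, subscribers: int) -> float:
    """Average revenue per user for one billing period."""
    return revenue / subscribers

def churn_risk(usage_by_month, decline_threshold=0.5) -> bool:
    """Flag a subscriber whose latest usage fell below
    `decline_threshold` times their trailing average."""
    *history, latest = usage_by_month
    baseline = sum(history) / len(history)
    return latest < decline_threshold * baseline

print(arpu(120_000.0, 400))            # revenue / subscriber count -> 300.0
print(churn_risk([100, 110, 90, 20]))  # sharp usage drop: at risk
print(churn_risk([100, 110, 90, 95]))  # stable usage: not at risk
```

In practice the decline signal would be one feature among many (tenure, complaints, recharge patterns) feeding a trained classifier.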

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


RabbitMQ Administrator - Prog Leasing1 Job Title: RabbitMQ Cluster Migration Engineer Job Summary We are seeking an experienced RabbitMQ Cluster Migration Engineer to lead and execute the seamless migration of our existing RabbitMQ infrastructure to a new AWS-hosted high-availability cluster environment. This role requires deep expertise in RabbitMQ, clustering, messaging architecture, and production-grade migrations with minimal downtime. Key Responsibilities Design and implement a migration plan to move existing RabbitMQ instances to a new clustered setup. Evaluate the current messaging architecture, performance bottlenecks, and limitations. Configure, deploy, and test RabbitMQ clusters (with or without federation/mirroring as needed). Ensure high availability, fault tolerance, and disaster recovery configurations. Collaborate with development, DevOps, and SRE teams to ensure smooth cutover and rollback plans. Automate setup and configuration using tools such as Ansible, Terraform, or Helm (for Kubernetes). Monitor message queues during migration to ensure message durability and delivery guarantees. Document all aspects of the architecture, configurations, and migration process. Required Qualifications Strong experience with RabbitMQ, especially in clustered and high-availability environments. Deep understanding of RabbitMQ internals: queues, exchanges, bindings, vhosts, federation, mirrored queues. Experience with RabbitMQ management plugins, monitoring, and performance tuning. Proficiency with scripting languages (e.g., Bash, Python) for automation. Hands-on experience with infrastructure-as-code tools (e.g., Ansible, Terraform, Helm). Familiarity with containerization and orchestration (e.g., Docker, Kubernetes). Strong understanding of messaging patterns and guarantees (at-least-once, exactly-once, etc.). Experience with zero-downtime migration and rollback strategies. Preferred Qualifications Experience migrating RabbitMQ clusters in production environments.
Working knowledge of cloud platforms (AWS, Azure, or GCP) and managed RabbitMQ services. Understanding of security in messaging systems (TLS, authentication, access control). Familiarity with alternative messaging systems (Kafka, NATS, ActiveMQ) is a plus.
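The at-least-once guarantee this listing asks about has a standard consumer-side consequence worth illustrating: a broker like RabbitMQ may redeliver a message after a crash or requeue, so consumers deduplicate by message id to get effectively-once processing. A minimal sketch (the message shape and id field are invented; a real consumer would receive deliveries and send acks through a client library such as pika):

```python
# Idempotent-consumer sketch for an at-least-once broker: redeliveries
# of the same message id must not repeat side effects.

processed_ids = set()
ledger = []  # stands in for the real side effect (DB write, charge, ...)

def handle(message) -> bool:
    """Process a delivery; return True if it was new (then ack),
    False if it was a duplicate redelivery (still ack, but no-op)."""
    if message["id"] in processed_ids:
        return False              # duplicate: skip side effects
    ledger.append(message["body"])
    processed_ids.add(message["id"])
    return True

# A redelivery scenario: message 1 arrives twice after a consumer crash.
deliveries = [{"id": 1, "body": "charge"},
              {"id": 1, "body": "charge"},   # broker redelivery
              {"id": 2, "body": "refund"}]
for m in deliveries:
    handle(m)
print(ledger)  # each unique message applied exactly once
```

True exactly-once delivery is not achievable end to end; at-least-once delivery plus idempotent handling is the usual practical substitute.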

Posted 2 weeks ago

Apply

170.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Job Summary: The Windows and VMware Architect is responsible for the design, implementation, administration, and support of enterprise-grade Microsoft Windows Server and VMware environments. This role plays a critical part in ensuring infrastructure stability, performance, and scalability, with a strong focus on migration projects, virtualization, and automation. ________________________________________ Job Description: 1. Windows Server Architecture & Design • Architect and oversee the deployment, configuration, and lifecycle management of Windows Server environments (2012–2022). • Design and lead in-place and parallel upgrade strategies to minimize downtime and risk. • Define standards for Active Directory, DNS, DHCP, Group Policy, and system hardening. • Architect and implement Windows Server Clustering for high availability of application and database workloads. • Establish performance baselines and ensure system reliability through proactive monitoring and tuning. • Define patching, backup, and security policies aligned with enterprise standards. 2. VMware Infrastructure Strategy • Architect and manage enterprise-grade VMware environments including vSphere, ESXi, vCenter, NSX, and SRM. • Design and optimize HA, DRS, vMotion, and Storage vMotion configurations for performance and availability. • Lead VMware infrastructure upgrades, patching cycles, and capacity planning. • Provide L4-L5-level support and root cause analysis for complex virtualization issues. 3. 
Infrastructure Modernization & Migration • Lead end-to-end planning and execution of legacy system migrations, hardware refreshes, and data center builds. • Design and execute P2V and V2V migrations using tools like VMware Converter and PlateSpin. • Collaborate on cloud migration strategies (Azure, AWS, hybrid models) and integration with on-prem infrastructure. 4. Business Continuity, Security & Automation • Define and implement backup and disaster recovery architectures. • Ensure compliance with regulatory and security frameworks (PCI-DSS, ISO, DISA STIGs). • Collaborate with InfoSec teams to apply baselines, perform vulnerability remediation, and enforce access controls. • Develop and maintain automation scripts using PowerShell and PowerCLI to streamline operations. 5. Documentation, Governance & Collaboration • Produce and maintain high-level and low-level design documents, runbooks, and operational procedures. • Participate in architectural reviews, change advisory boards, and incident response planning. • Act as a technical liaison between infrastructure, application, network, and database teams. ________________________________________ Qualifications: • Bachelor’s degree in computer science, Information Technology, or a related field. • 15–25 years of experience in enterprise Windows Server and VMware environments. • Proven track record in infrastructure architecture, modernization, and migration projects. • Strong scripting and automation skills (PowerShell, PowerCLI). • Preferred Certifications: VMware VCP-DCV / VCAP-DCV, Microsoft MCSE / Azure Architect, ITIL Foundation

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

On-site


Introduction We are looking for candidates with 8+ years of experience for this role. Project Duration: 6+ months, based on performance and productivity. Max Budget: 12 - 12.2 LPA fixed. Job Description/ Primary job role The primary job role of a Senior Database Administrator (DBA) includes overseeing the management, maintenance, and optimization of databases within the organization. Works on strategic initiatives to align the database infrastructure with long-term business goals and ensures that best practices in database management are consistently followed. Main duties/responsibilities • Optimize database queries to ensure fast and efficient data retrieval, particularly for complex or high-volume operations. • Design and implement effective indexing strategies to reduce query execution times and improve overall database performance. • Monitor and profile slow or inefficient queries and recommend best practices for rewriting or re-architecting queries. • Continuously analyze execution plans for SQL queries to identify bottlenecks and optimize them. • Database Maintenance: Schedule and execute regular maintenance tasks, including backups, consistency checks, and index rebuilding. • Health Monitoring: Implement automated monitoring systems to track database performance, availability, and critical parameters such as CPU usage, memory, disk I/O, and replication status. • Proactive Issue Resolution: Diagnose and resolve database issues (e.g., locking, deadlocks, data corruption) proactively, before they impact users or operations. • High Availability: Implement and manage database clustering, replication, and failover strategies to ensure high availability and disaster recovery (e.g., using tools like SQL Server Always On, Oracle RAC, MySQL Group Replication). • Capacity Planning: Monitor resource consumption and plan for growth to ensure the database can scale effectively with increasing data volume and transaction load.
• Resource Optimization: Analyze and optimize resource usage (CPU, memory, disk, network) to reduce operational costs. • Licensing Management: Ensure that database licensing models are correctly adhered to and identify opportunities for reducing licensing costs. • Cloud Cost Management: Use cost analysis tools (e.g., AWS Cost Explorer, Azure Cost Management) to monitor and optimize cloud database spend, identifying opportunities for rightsizing or reserving instances. Primary skills • 8-10 years of experience in Microsoft SQL Server administration Qualifications • Bachelor's degree in computer science, software engineering or a related field • Microsoft SQL certifications (MTA Database, MCSA: SQL Server, MCSE: Data Management and Analytics) will be an advantage. Secondary Skills • Experience in MySQL, PostgreSQL, and Oracle database administration. • Exposure to Data Lake, Hadoop, and Azure technologies • Exposure to DevOps or ITIL Behavioral competencies • Communication • Teamwork • Digital Mindset • Operational Excellence • Analytical Ability • Customer Centricity • Business and Market Acumen • Empathy • Growth Mindset
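The indexing strategies this listing emphasizes come down to replacing O(n) table scans with O(log n) lookups over sorted keys. A hedged, engine-agnostic sketch (table contents and column names are invented; real databases maintain B-tree indexes rather than a sorted Python list):

```python
# Why an index speeds up point queries: a sorted key list allows binary
# search instead of scanning every row. Toy data, not a real engine.
import bisect

table = [{"id": i, "name": f"user{i}"} for i in (7, 3, 9, 1, 5)]

def full_scan(table, key):
    """No index: every row is examined (O(n))."""
    return [row for row in table if row["id"] == key]

# "CREATE INDEX" analogue: sorted (key, row position) pairs.
index = sorted((row["id"], pos) for pos, row in enumerate(table))
keys = [k for k, _ in index]

def index_lookup(table, key):
    """Indexed: binary search on sorted keys (O(log n))."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return [table[index[i][1]]]
    return []

assert full_scan(table, 9) == index_lookup(table, 9)  # same result, fewer rows touched
print(index_lookup(table, 9))
```

The same trade-off the listing implies also shows up here: the index must be maintained on every write, which is why over-indexing hurts insert-heavy workloads.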

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About The Role Grade Level (for internal use): 10 The Team As a member of the Data Transformation team you will work on building ML powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading-by-example in a highly engaging work environment. You will work in a (truly) global team and are encouraged to exercise thoughtful risk-taking and self-initiative. The Impact The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aimed at solving high-impact business problems. What’s In It For You Be a part of a global company and build solutions at enterprise scale Collaborate with a highly skilled and technically strong team Contribute to solving high complexity, high impact problems Key Responsibilities Design, Develop and Deploy ML powered products and pipelines Play a central role in all stages of the data science project life cycle, including: Identification of suitable data science project opportunities Partnering with business leaders, domain experts, and end-users to gain business understanding, data understanding, and collect requirements Evaluation/interpretation of results and presentation to business leaders Performing exploratory data analysis, proof-of-concept modelling, model benchmarking and setting up model validation experiments Training large models both for experimentation and production Develop production ready pipelines for enterprise scale projects Perform code reviews & optimization for your projects and team Spearhead deployment and model scaling strategies Stakeholder management and representing the team in front of our leadership Leading and mentoring by example including
project scrums What We’re Looking For 3+ years of professional experience in the Data Science domain Expertise in Python (Numpy, Pandas, Spacy, Sklearn, Pytorch/TF2, HuggingFace etc.) Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures Expertise in probabilistic machine learning models for classification, regression & clustering Strong experience in feature engineering, data preprocessing, and building machine learning models for large datasets. Exposure to Information Retrieval, Web scraping and Data Extraction at scale OOP Design patterns, Test-Driven Development and Enterprise System design SQL (any variant, bonus if this is a big data variant) Linux OS (e.g. bash toolset and other utilities) Version control system experience with Git, GitHub, or Azure DevOps. Problem-solving and debugging skills Software craftsmanship, adherence to Agile principles and taking pride in writing good code Techniques to communicate change to non-technical people Nice to have Prior work to show on GitHub, Kaggle, StackOverflow etc. Cloud expertise (AWS and GCP preferably) Expertise in deploying machine learning models in cloud environments Familiarity with LLMs What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective.
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. 
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315681 Posted On: 2025-05-14 Location: Gurgaon, Haryana, India
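The similarity measures named in this listing's skills section rest on simple vector arithmetic. A minimal bag-of-words cosine similarity sketch (real text-matching pipelines would use TF-IDF weighting or sentence embeddings; the example strings are invented):

```python
# Cosine similarity over bag-of-words counts: the arithmetic underneath
# the fancier embedding-based similarity measures.
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)           # shared-term products
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

print(cosine_sim("acme corp ltd", "acme corporation"))  # partial overlap, between 0 and 1
print(cosine_sim("acme corp", "acme corp"))             # identical texts score 1.0
```

Note the weakness this toy version shares with all exact-token methods: "corp" and "corporation" do not match, which is exactly what sentence transformers and word embeddings are meant to fix.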

Posted 2 weeks ago

Apply

0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Skills Needed Problem-Solving and Analytical Thinking: Strong ability to deeply understand complex business challenges, translate them into structured data science problems, and deliver actionable insights and solutions Communication and Stakeholder Management Skills: Skilled in conveying technical concepts to non-technical stakeholders and collaborating across cross-functional teams Programming & Cloud Proficiency: Highly skilled in Python and PySpark for data manipulation, modeling, and pipeline development. Experience working within cloud-based environments, particularly the Google Cloud Platform (GCP) including BigQuery Machine Learning Expertise: Hands-on experience with a broad range of ML techniques including supervised and unsupervised learning, ensemble methods (e.g., bagging, boosting, stacking), and model evaluation techniques Marketing and Customer Analytics: Practical experience applying data science to marketing problems such as churn prediction, customer lifetime value (LTV) forecasting, customer segmentation, campaign targeting, and behavioral clustering MLOps and Productionization Exposure: Understanding of MLOps principles, including model versioning, CI/CD pipelines, automated retraining, monitoring, and deployment in production environments Domain Knowledge in Media & Publishing (Preferred): Familiarity with the Media & Publishing industry, including content consumption patterns, audience engagement metrics, and monetization strategies Industry Experience and Relevance: Background in industries such as digital media, content platforms, marketing analytics, or customer intelligence
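The LTV forecasting mentioned above is often introduced with a back-of-the-envelope formula: LTV is roughly ARPU times gross margin divided by churn rate, since 1/churn approximates the expected customer lifetime in periods. A toy sketch (all figures are invented; production forecasts use cohort-based or probabilistic models such as BG/NBD):

```python
# Simplest-possible customer lifetime value estimate.
# LTV ~= ARPU * gross_margin / churn, because a constant monthly churn
# rate c implies an expected lifetime of 1/c months.

def simple_ltv(arpu: float, gross_margin: float, monthly_churn: float) -> float:
    return arpu * gross_margin / monthly_churn

# e.g. 300 ARPU, 40% margin, 2% monthly churn: lifetime ~ 50 months
print(simple_ltv(300.0, 0.40, 0.02))  # approximately 6000
```

Even this crude version makes the levers visible: halving churn doubles LTV, which is why churn prediction and LTV forecasting appear together in the listing.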

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderābād

On-site


Job requisition ID :: 82489 Date: Jun 4, 2025 Location: Hyderabad Designation: Associate Director Entity: Job Overview: We are seeking a skilled and motivated Cloud Database Administrator (Cloud DBA) to join our growing team. The ideal candidate will be responsible for the administration, optimization, and management of cloud-based database platforms across major providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. As a Cloud DBA, you will work closely with the infrastructure, development, and DevOps teams to ensure that the cloud databases are secure, scalable, and performant. You will have the opportunity to implement cutting-edge cloud database solutions and work with large-scale, mission-critical systems. Key Responsibilities: Cloud Database Administration: Administer and manage cloud-based databases across AWS, Azure, and Google Cloud, including databases like Amazon RDS, Azure SQL Database, Google Cloud SQL, and NoSQL databases like DynamoDB, Cosmos DB, and MongoDB. Manage database instances in cloud environments, including provisioning, scaling, patching, and backups. Collaborate with cloud architecture teams to ensure the database infrastructure is aligned with best practices for scalability, security, and performance. Performance Tuning & Optimization: Continuously monitor cloud database performance, including query optimization, resource usage, and storage optimization. Implement proactive tuning techniques to ensure high availability and efficient performance of cloud-based databases. Review and optimize SQL queries, indexes, and database structures for optimal performance in the cloud. High Availability & Disaster Recovery: Design and implement high-availability solutions in the cloud, such as automated failover, replication, and clustering for cloud-based databases. Implement robust disaster recovery plans with real-time backup and restore procedures to minimize downtime. 
Regularly test and update disaster recovery strategies to ensure quick and efficient data restoration during emergencies. Cloud Database Security & Compliance: Enforce cloud database security policies, including data encryption, secure user access management, and access control configurations. Implement secure cloud database environments by applying cloud provider security tools and best practices (e.g., IAM policies, encryption, VPC configurations). Ensure compliance with regulations and industry standards (e.g., GDPR, HIPAA, PCI-DSS) within the cloud databases and perform regular security audits. Automation & Infrastructure as Code: Leverage cloud-native tools and frameworks (e.g., AWS CloudFormation, Azure Resource Manager, Terraform) to automate database provisioning, scaling, and management. Develop and maintain automation scripts for routine tasks, such as backups, health checks, and monitoring. Implement infrastructure as code (IaC) practices for reproducible and scalable cloud database deployments. Database Monitoring & Reporting: Utilize cloud-native monitoring tools such as Amazon CloudWatch, Azure Monitor, or Google Stackdriver to track database performance, availability, and health. Design and maintain custom dashboards and alerting systems for proactive database management. Generate regular performance and health reports to provide insights into database operations and guide improvements. Collaboration & Support: Collaborate with application development teams to design, deploy, and maintain cloud-native database solutions that meet business needs. Provide second- and third-level support for database-related issues, troubleshooting performance or security problems in the cloud environment. Assist with cloud database migrations from on-premises systems to cloud environments, including hybrid and multi-cloud strategies. 
Innovation & Continuous Improvement: Stay current with the latest trends, tools, and technologies in cloud computing and database management. Propose and implement database innovations that improve performance, security, and cost-efficiency in the cloud. Continuously optimize cloud database costs by monitoring and optimizing resource usage and storage. Experience: At least 3-5 years of experience as a Database Administrator, with a strong focus on cloud-based databases. Hands-on experience with major cloud platforms such as AWS, Microsoft Azure, or Google Cloud, and their database services (Amazon RDS, Azure SQL Database, Google Cloud SQL, etc.). Experience with database migration to the cloud and managing hybrid cloud environments. Familiarity with cloud-based NoSQL databases, including DynamoDB, Cosmos DB, or Firebase. Technical Skills: Strong expertise in managing relational databases (SQL Server, MySQL, PostgreSQL) and NoSQL databases (MongoDB, DynamoDB, Cosmos DB). Familiarity with cloud-native database tools and services for high availability, backups, security, and scaling. Proficient with SQL, database performance tuning, and optimization techniques. Strong scripting skills (e.g., Python, PowerShell, Bash) and experience with cloud automation tools (Terraform, CloudFormation, Ansible). Understanding of cloud security principles (encryption, VPCs, IAM, security groups, access control). Experience with cloud monitoring and logging services (Amazon CloudWatch, Azure Monitor, Google Stackdriver). Certifications (Preferred): AWS Certified Database - Specialty. Microsoft Certified: Azure Database Administrator Associate (DP-300). Google Professional Cloud Database Engineer. AWS Certified Solutions Architect – Associate. Microsoft Certified: Azure Solutions Architect Expert. Key Attributes: Problem-Solving: Strong troubleshooting skills and the ability to analyze complex database problems and implement efficient solutions. 
Adaptability: Comfort with cloud technologies, willingness to learn new tools and platforms. Collaboration: Ability to work effectively with cross-functional teams, including development, infrastructure, and security teams. Attention to Detail: Commitment to database security, data integrity, and performance monitoring. Communication: Clear communication skills for liaising with technical and non-technical stakeholders.
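The custom alerting this listing describes (CloudWatch/Azure Monitor style dashboards) typically fires only after several consecutive threshold breaches, to avoid alert flapping on transient spikes. A hedged sketch of that evaluation rule (metric values, the threshold, and the window size are invented):

```python
# Consecutive-breach alert rule, the same shape as a CloudWatch alarm
# with "datapoints to alarm" = 3. Toy values, not a real monitor.

def breached(samples, threshold, consecutive=3) -> bool:
    """Alert only when the last `consecutive` samples all exceed threshold."""
    if len(samples) < consecutive:
        return False
    return all(s > threshold for s in samples[-consecutive:])

cpu = [62, 71, 93, 95, 97]     # percent CPU over five periods
print(breached(cpu, 90))       # last three samples all above 90: alert
print(breached(cpu, 96))       # only one sample above 96: no alert
```

The single transient spike that a naive `max(samples) > threshold` check would page on is deliberately ignored here.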

Posted 2 weeks ago

Apply

Exploring Clustering Jobs in India

The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for clustering roles:

1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi

Average Salary Range

The average salary range for clustering professionals in India varies based on experience levels. Entry-level positions may start at around INR 3-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-20 lakhs per annum.

Career Path

In the field of clustering, a typical career path may look like:
- Junior Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead

Related Skills

Apart from expertise in clustering, professionals in this field are often expected to have skills in:
- Machine Learning
- Data Analysis
- Python/R programming
- Statistics

Interview Questions

Here are 25 interview questions for clustering roles:
- What is clustering and how does it differ from classification? (basic)
- Explain the K-means clustering algorithm. (medium)
- What are the different types of distance metrics used in clustering? (medium)
- How do you determine the optimal number of clusters in K-means clustering? (medium)
- What is the Elbow method in clustering? (basic)
- Define hierarchical clustering. (medium)
- What is the purpose of clustering in machine learning? (basic)
- Can you explain the difference between supervised and unsupervised learning? (basic)
- What are the advantages of hierarchical clustering over K-means clustering? (advanced)
- How does the DBSCAN clustering algorithm work? (medium)
- What is the curse of dimensionality in clustering? (advanced)
- Explain the concept of silhouette score in clustering. (medium)
- How do you handle missing values in clustering algorithms? (medium)
- What is the difference between agglomerative and divisive clustering? (advanced)
- How would you handle outliers in clustering analysis? (medium)
- Can you explain the concept of cluster centroids? (basic)
- What are the limitations of K-means clustering? (medium)
- How do you evaluate the performance of a clustering algorithm? (medium)
- What is the role of inertia in K-means clustering? (basic)
- Describe the process of feature scaling in clustering. (basic)
- How does the GMM algorithm differ from K-means clustering? (advanced)
- What is the importance of feature selection in clustering? (medium)
- How can you assess the quality of clustering results? (medium)
- Explain the concept of cluster density in DBSCAN. (advanced)
- How do you handle high-dimensional data in clustering? (medium)
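Several of the questions above (the K-means algorithm, centroids, inertia) can be grounded with a tiny from-scratch example. The sketch below is illustrative only — the function and the toy dataset are our own, not from any particular interview or library:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means: repeatedly assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # random initialisation
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:                        # guard against empty clusters
                centroids[i] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return centroids, clusters

# Two well-separated toy groups in 2-D
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
       (5.0, 5.0), (5.1, 5.2), (5.2, 4.9)]
centroids, clusters = kmeans(pts, k=2)

# Inertia: total squared distance of points to their assigned centroid
inertia = sum(math.dist(p, centroids[i]) ** 2
              for i, members in enumerate(clusters) for p in members)
```

This also illustrates the Elbow method in one sentence: run `kmeans` for k = 1, 2, 3, … and plot the resulting inertia; the "elbow" where the curve stops dropping sharply suggests a reasonable number of clusters.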

Closing Remark

As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Featured Companies