3.0 years
3 - 7 Lacs
Chennai
Remote
We are seeking a highly skilled and motivated AI/ML Developer to join our dynamic team. The ideal candidate will have a strong background in machine learning, natural language processing (NLP), and deep learning, with a proven ability to develop and deploy AI/ML solutions. This role requires a deep understanding of AI/ML concepts, excellent programming skills, and the ability to work collaboratively in a fast-paced environment.
Responsibilities: Design, develop, and implement AI/ML models and solutions. Collaborate with cross-functional teams to identify and solve complex business problems using AI/ML techniques. Develop and maintain machine learning pipelines, including data preprocessing, feature engineering, model training, evaluation, and deployment. Conduct experiments, analyze results, and iterate on models to improve performance. Stay up to date with the latest advancements in AI/ML, including new algorithms, techniques, and tools. Write clean, well-documented, and testable code. Deploy and monitor AI/ML models in production environments. Contribute to the development of AI/ML best practices and standards.
Skills: Minimum 3 years of experience in AI/ML development. Proficiency in Python and Go (Golang). Strong experience with machine learning frameworks: PyTorch, TensorFlow (optional), Keras (optional). Solid understanding of NLP concepts and techniques. Experience with data manipulation and analysis tools (e.g., Pandas, NumPy). Experience with cloud platforms (e.g., AWS, Azure, GCP) is a plus. Experience with machine learning algorithms (e.g., regression, classification, clustering). Version control: Git. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field.
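The pipeline stages this posting lists (preprocessing, training, evaluation) can be sketched end-to-end in a few lines. This is an illustrative toy, not the employer's stack: a nearest-centroid classifier with stdlib-only standardization, standing in for the PyTorch/TensorFlow pipelines the role would actually involve.

```python
# Toy ML pipeline: preprocess (standardize) -> train -> evaluate.
# Nearest-centroid classifier, standard library only; illustrative names.
from statistics import mean, pstdev

def standardize(rows):
    """Scale each feature column to zero mean and unit variance."""
    cols = list(zip(*rows))
    stats = [(mean(c), pstdev(c) or 1.0) for c in cols]
    return [[(v - m) / s for v, (m, s) in zip(row, stats)] for row in rows]

def train(rows, labels):
    """'Training' here is just computing one centroid per class."""
    centroids = {}
    for label in set(labels):
        members = [r for r, l in zip(rows, labels) if l == label]
        centroids[label] = [mean(c) for c in zip(*members)]
    return centroids

def predict(centroids, row):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], row))

def accuracy(centroids, rows, labels):
    hits = sum(predict(centroids, r) == l for r, l in zip(rows, labels))
    return hits / len(rows)

X = [[1.0, 1.0], [1.2, 0.9], [8.0, 8.0], [7.8, 8.2]]
y = ["a", "a", "b", "b"]
Xs = standardize(X)
model = train(Xs, y)
print(accuracy(model, Xs, y))  # cleanly separable toy data -> 1.0
```

In a real role the same stage boundaries hold; only the components change (Pandas for preprocessing, a PyTorch model for training, held-out metrics for evaluation).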
Job Type: Full-time
Pay: ₹300,000.00 - ₹700,000.00 per year
Benefits: Paid sick time, Paid time off, Work from home
Location Type: In-person
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 9176760030
Posted 1 week ago
8.0 - 11.0 years
6 - 9 Lacs
Noida
On-site
Snowflake - Senior Technical Lead (Full-time)
Company Description: About Sopra Steria. Sopra Steria, a major tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.
Job Description
Position: Snowflake - Senior Technical Lead
Experience: 8-11 years
Location: Noida/ Bangalore
Education: B.E./ B.Tech./ MCA
Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security
Good to have Skills: Snowpark, Data Build Tool, Finance Domain
Preferred Skills: Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing. Experience in data warehousing, with at least 2 years focused on Snowflake. Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration. Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks. Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning. Familiarity with data security, compliance requirements, and governance best practices. Experience in Python, Scala, or Java for Snowpark development. Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM).
Key Responsibilities: Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost.
Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe). Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion). Monitor query performance and resource utilization; tune warehouses, caching, and clustering. Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads. Define and enforce role-based access control (RBAC), masking policies, and object tagging. Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured. Establish best practices for dimensional modeling, data vault architecture, and data quality. Create and maintain data dictionaries, lineage documentation, and governance standards. Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets. Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies. Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives.
Qualifications: BTech/MCA
Additional Information: At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
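The Streams-and-Tasks pattern this role centers on can be sketched in plain Python: a stream exposes the row changes captured since the last offset, and a scheduled task merges them into a target table. This is a hedged simulation with made-up helper names (apply_stream), not Snowflake's API; it borrows the real METADATA$ACTION column name, and note that real streams represent an update as a DELETE/INSERT pair rather than a single row.

```python
# Simulated stream consumption: merge captured row changes into a target
# table keyed by 'id', mimicking the MERGE a Snowflake Task would run.
def apply_stream(target, stream_rows):
    """Apply INSERT/DELETE change records (MERGE-on-key semantics)."""
    for row in stream_rows:
        action, data = row["metadata$action"], row["data"]
        if action == "INSERT":
            target[data["id"]] = data        # insert, or overwrite on key match
        elif action == "DELETE":
            target.pop(data["id"], None)     # ignore deletes for unknown keys
    return target

target = {1: {"id": 1, "amount": 100}}
stream = [
    {"metadata$action": "INSERT", "data": {"id": 1, "amount": 150}},  # changed row
    {"metadata$action": "INSERT", "data": {"id": 2, "amount": 75}},   # new row
]
print(apply_stream(target, stream))
```

The design point the pattern buys you: the pipeline only touches changed rows, which is what makes Snowpipe-fed incremental ELT cheap compared with full-table reloads.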
Posted 1 week ago
15.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a passionate and curious AI/ML Engineer (Fresher) to join our growing engineering team. This is a unique opportunity to work on real-world machine learning applications and contribute to building cutting-edge AI solutions.
Your Responsibilities:
• Assist in designing, developing, and training machine learning models using structured and unstructured data
• Collect, clean, and preprocess large datasets for model building
• Perform exploratory data analysis and statistical modeling
• Collaborate with senior data scientists and engineers to build scalable AI systems
• Run experiments, tune hyperparameters, and evaluate model performance using industry-standard metrics
• Document models, processes, and experiment results clearly and consistently
• Support integrating AI/ML models into production environments
• Stay updated with the latest trends and techniques in machine learning, deep learning, and AI
• Participate in code reviews, sprint planning, and product discussions
• Follow best practices in software development, version control, and model reproducibility
Skill Sets / Experience We Require:
• Strong understanding of machine learning fundamentals (regression, classification, clustering, etc.)
• Hands-on experience with Python and ML libraries such as scikit-learn, pandas, NumPy
• Basic familiarity with deep learning frameworks like TensorFlow, PyTorch, or Keras
• Knowledge of data preprocessing, feature engineering, and model validation techniques
• Understanding of probability, statistics, and linear algebra
• Familiarity with tools like Jupyter, Git, and cloud-based notebooks
• Problem-solving mindset and eagerness to learn
• Good communication skills and the ability to work in a team
• Internship/project experience in AI/ML is a plus
Education:
• B.Tech / M.Tech / M.Sc in Computer Science, Data Science, Artificial Intelligence, or related field
• Relevant certifications in AI/ML (Coursera, edX, etc.) are a plus
About Us: TechAhead is a global digital transformation company with a strong presence in the USA and India. We specialize in AI-first product design and bespoke development solutions. With over 15 years of expertise, we've partnered with Fortune 500 companies and global brands to drive innovation and deliver excellence. Join us to shape the future of intelligent technology and contribute to impactful, world-class AI solutions.
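The "tune hyperparameters and evaluate model performance" responsibility above reduces, at its simplest, to a grid search: try each candidate value, score it with a standard metric, keep the best. A minimal stdlib sketch with made-up toy data (the hyperparameter here is just a decision threshold):

```python
# Minimal hyperparameter grid search: score each candidate threshold by
# accuracy and keep the best. Data and grid values are illustrative.
def evaluate(threshold, data):
    """Accuracy of the rule 'predict True when x > threshold'."""
    return sum((x > threshold) == label for x, label in data) / len(data)

train_data = [(0.1, False), (0.4, False), (0.6, True), (0.9, True)]
grid = [0.2, 0.5, 0.8]

best = max(grid, key=lambda t: evaluate(t, train_data))
print(best, evaluate(best, train_data))  # 0.5 separates the classes perfectly
```

Real tuning follows the same loop, just with model hyperparameters (learning rate, tree depth) and cross-validated metrics instead of a single split.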
Posted 1 week ago
16.0 years
0 Lacs
Calcutta
On-site
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work.
Must have skills: SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 16 years full time education
Cloud Database Engineer HANA
Required Skills:
• SAP HANA database administration - knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability
• Proficiency in monitoring and maintaining the health and performance of high availability systems
• Experience with public cloud platforms such as GCP, AWS, or Azure
• Strong troubleshooting skills and the ability to provide effective resolutions for technical issues
Desired Skills:
• Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres.
• Growth and product mindset and a strong focus on automation.
• Working knowledge of Kubernetes for container orchestration and scalability.
Activities:
• Collaborate closely with cross-functional teams to gather requirements and support SAP teams to execute database initiatives.
• Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments.
• Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and previously agreed.
• Act as the point of contact for escalated technical issues with our Engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions to unblock our partners.
Requirements:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems.
• Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes.
• Strong troubleshooting skills and the ability to provide effective resolutions for technical issues.
• Familiarity with public cloud platforms such as GCP, AWS, or Azure.
• Understanding of Agile principles and methodologies.
16 years full time education
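The "monitor the health and performance of high availability systems" skill above boils down to comparing measurements against thresholds and raising alerts. A hedged sketch with invented node names and an assumed replication-lag threshold, standing in for the HANA system-replication checks a real monitor would query:

```python
# Toy HA health check: flag any replica whose replication lag exceeds a
# threshold. Node names and the 30s threshold are illustrative only.
def check_replication(nodes, max_lag_seconds=30):
    """Return (healthy, alerts) given {node: lag_in_seconds} measurements."""
    alerts = [f"{node}: lag {lag}s exceeds {max_lag_seconds}s"
              for node, lag in sorted(nodes.items()) if lag > max_lag_seconds]
    return not alerts, alerts

healthy, alerts = check_replication({"hana-replica-1": 4, "hana-replica-2": 95})
print(healthy, alerts)  # one replica is lagging, so healthy is False
```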
Posted 1 week ago
0 years
0 - 0 Lacs
Indore
On-site
1. Python Backend Development:
Flask (preferred) or Django frameworks
RESTful API design and development (FastAPI)
Database interactions (SQL/NoSQL)
Vector DB - Chroma DB
Authentication and authorization
2. AI/ML Basics:
Understanding of basic machine learning algorithms (e.g. linear regression, logistic regression, classification, decision trees, clustering, KNN)
Deep learning (DL) - neural networks, ANN, CNN; frameworks like TensorFlow or PyTorch
Model training and evaluation
3. Project Experience:
Any AI/ML projects they've worked on
Challenges faced and how they were overcome
Job Types: Full-time, Permanent
Pay: ₹9,271.17 - ₹12,000.00 per month
Benefits: Health insurance, Provident Fund
Location Type: In-person
Schedule: Day shift, Fixed shift, Morning shift
Work Location: In person
Speak with the employer: +91 9685896876
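The "authentication and authorization" item above, reduced to its smallest testable core: an HMAC-signed API token that the server can verify without storing session state. This is a stdlib-only stand-in for the JWT or session schemes a Flask/FastAPI service would actually use; the secret and user names are invented for the demo.

```python
# Minimal token auth: sign the user name with a server secret, verify on
# each request. Illustrative only; real services use JWTs with expiry.
import hmac
import hashlib

SECRET = b"demo-secret"  # placeholder; load from config/env in real code

def issue_token(user: str) -> str:
    sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}.{sig}"

def verify_token(token: str):
    """Return the user name if the signature checks out, else None."""
    user, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return user if hmac.compare_digest(sig, expected) else None

token = issue_token("alice")
print(verify_token(token))        # valid token -> "alice"
print(verify_token(token + "x"))  # tampered token -> None
```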
Posted 1 week ago
4.0 years
0 Lacs
Indore
On-site
Indore, Madhya Pradesh, India
Qualification: Deploying various open-source network security solutions and integrating relevant components. Performance optimization and tuning of rule sets. Event-driven process flow and actions - customization of IPC and enrichments. System engineering for reliability and system performance improvement. Research on new approaches and IP creation.
Skills Required: IP networks, Linux internals, scripting, Lua, event-driven scripting, YARA, Sigma
Role: Rich experience in working on network security products such as IDS/IPS and next-generation firewalls, with product development / solution engineering experience. Experience in working on IP networking, IP networking protocols, computer system internals, and IPCs. Good understanding and knowledge of TCP/IP networking, including L2/L3/L4/L7 protocols (SIP, RTP, SMTP, HTTP, POP3, FTP, STP, VLAN, VTP, TCP/IP, BGP, OSPF, GTP, GRE, DHCP, DNS, HTTP/S and SNMP). Strong understanding of PCAP and DPI (deep packet inspection). Deployment and performance optimization of Suricata / Snort / Zeek. Creating and adapting rules for IDS/IPS. Experience in working with large networks (~10G/100G/400G): network clustering, parallel processing, virtual appliances. Working on Linux, cloud environments, network processing cards (NICs), off-loading, and acceleration.
Qualifications: Postgraduate in Computer Science / Engineering with specialization in IP networking. Programming skills in C/C++, Python. Operating systems: Linux.
Experience: 4 to 6 years
Job Reference Number: 11592
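The rule-authoring skill this role asks for (Suricata/Snort signatures) has a fixed header shape: action, protocol, source address/port, direction, destination address/port, then options in parentheses. A hedged sketch that parses only that simplified subset; the real rule grammar (negation, lists, bidirectional arrows) is much richer.

```python
# Parse the header of a simplified Suricata/Snort-style rule.
# Handles only the seven fixed header fields plus ';'-separated options.
def parse_rule(rule: str) -> dict:
    header, _, options = rule.partition("(")
    action, proto, src, sport, arrow, dst, dport = header.split()
    return {
        "action": action,                      # alert / drop / pass ...
        "proto": proto,
        "src": (src, sport),
        "dst": (dst, dport),
        "options": [o.strip() for o in options.rstrip(") ").split(";") if o.strip()],
    }

rule = 'alert tcp $HOME_NET any -> $EXTERNAL_NET 80 (msg:"demo"; sid:1000001;)'
parsed = parse_rule(rule)
print(parsed["action"], parsed["dst"], parsed["options"])
```

At 100G+ line rates the engine compiles thousands of such rules into shared match automata, which is why rule-set optimization is called out as its own skill.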
Posted 1 week ago
1.0 years
0 - 0 Lacs
Chittoor
On-site
About the Role: We are looking for skilled and passionate women developers who are proficient in either AI/ML or Python with Robotics, and can contribute to both development projects and training programs. You will be responsible for building real-world solutions, guiding student projects, and helping shape the future of tech education and innovation. This opportunity is open to women developers and engineers interested in hands-on development as well as mentoring students.
Open Positions:
AI/ML Developer & Trainer (Women Only)
Python with Robotics Developer & Trainer (Women Only)
Key Responsibilities:
Development: Design, develop, and test real-time AI/ML models or robotics-based applications. Collaborate on R&D projects related to smart systems, IoT, or automation. Build and document reusable components and tools for ongoing projects. Work on project-based modules that can be implemented in educational use cases. Support deployment, debugging, and enhancement of tech solutions.
Training: Deliver 2-hour sessions (online or offline) to students and interns. Guide students through projects, coding tasks, and tech challenges. Assist in developing educational content, tutorials, and assessments. Provide mentorship to junior developers or trainees. Support students with code reviews and conceptual clarity.
Required Skills:
For AI/ML Role: Strong Python skills. Experience with data analysis and machine learning algorithms. Familiarity with libraries: NumPy, Pandas, scikit-learn, TensorFlow/Keras. Knowledge of regression, classification, clustering, model deployment.
For Python with Robotics Role: Solid Python programming experience. Exposure to microcontrollers, sensors, and hardware integration. Familiarity with Raspberry Pi, Arduino, or robotics simulators. Understanding of control systems, IoT protocols, or real-world interfacing.
Preferred Qualifications: Any degree. 1+ year of experience in development or training roles (freshers with projects may apply). Strong communication and problem-solving skills. Passion for hands-on development and mentoring. Previous experience in ed-tech, R&D, or robotics projects is a plus.
Salary Range: ₹15,000 - ₹30,000 per month (based on experience and role). Additional incentives for performance and project delivery.
Application Deadline: Rolling applications - immediate joiners preferred.
How to Apply: Send your resume and a brief note about your experience and interest in development + mentoring to: shiva.a@ignitewave.in
Note: This is a diversity hiring initiative aimed at supporting women in technology. Only female candidates will be considered for these roles.
Job Types: Full-time, Permanent
Pay: ₹15,000.00 - ₹25,000.00 per month
Schedule: Day shift
Supplemental Pay: Performance bonus, Yearly bonus
Work Location: In person
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
Grade Level (for internal use): 11
The Team: As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative.
The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aiming at solving high-impact business problems.
What's In It For You: Be a part of a global company and build solutions at enterprise scale. Collaborate with a highly skilled and technically strong team. Contribute to solving high-complexity, high-impact problems.
Key Responsibilities: Design, develop and deploy ML-powered products and pipelines. Play a central role in all stages of the data science project life cycle, including: identification of suitable data science project opportunities; partnering with business leaders, domain experts, and end-users to gain business understanding, data understanding, and collect requirements; evaluation/interpretation of results and presentation to business leaders; performing exploratory data analysis, proof-of-concept modelling, model benchmarking and setting up model validation experiments; training large models both for experimentation and production. Develop production-ready pipelines for enterprise-scale projects. Perform code reviews and optimization for your projects and team. Spearhead deployment and model scaling strategies. Stakeholder management and representing the team in front of our leadership. Leading and mentoring by example, including project scrums.
What We're Looking For: 7+ years of professional experience in the data science domain. Expertise in Python (NumPy, Pandas, spaCy, scikit-learn, PyTorch/TF2, Hugging Face, etc.). Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures. Expertise in probabilistic machine learning models for classification, regression and clustering. Strong experience in feature engineering, data preprocessing, and building machine learning models for large datasets. Exposure to information retrieval, web scraping and data extraction at scale. OOP design patterns, test-driven development and enterprise system design. SQL (any variant, bonus if this is a big data variant). Linux OS (e.g. bash toolset and other utilities). Version control experience with Git, GitHub, or Azure DevOps. Problem-solving and debugging skills. Software craftsmanship, adherence to Agile principles and taking pride in writing good code. Techniques to communicate change to non-technical people.
Nice to have: Prior work to show on GitHub, Kaggle, StackOverflow etc. Cloud expertise (AWS and GCP preferably). Expertise in deploying machine learning models in cloud environments. Familiarity with LLMs.
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide - so we're able to understand nuances while having a broad perspective.
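The text-matching techniques this posting asks for (embeddings plus similarity measures) all rest on the same primitive: cosine similarity between vector representations. A deliberately minimal sketch using bag-of-words counts instead of the sentence-transformer embeddings a production system would use; the example strings are invented.

```python
# Cosine similarity over bag-of-words vectors: the simplest instance of
# the "similarity measures" used in text matching. Illustrative only.
from collections import Counter
from math import sqrt

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(round(cosine("credit risk model", "credit risk rating model"), 3))  # high overlap
print(cosine("credit risk model", "football match"))                      # no overlap -> 0.0
```

Swapping the `Counter` vectors for dense sentence embeddings leaves the similarity computation itself unchanged, which is why the primitive is worth knowing cold.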
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. 
Beyond the Basics: From retail discounts to referral incentive awards - small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries
Global Hiring And Opportunity At S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)
Job ID: 315680
Posted On: 2025-05-20
Location: Gurgaon, Haryana, India
Posted 1 week ago
12.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description
Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformations, Teradata data warehouse, Hadoop, Analytics). Responsible for architecture for small/mid-size projects.
Outcomes: Implement data extraction and transformation for a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), a data analysis solution, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP). Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data or design data fetching and dashboards. Design information structure, work- and dataflow navigation. Define backup, recovery and security specifications. Enforce and maintain naming standards and data dictionary for data models. Provide or guide the team to perform estimates. Help the team to develop proofs of concept (POC) and solutions relevant to customer problems.
Able to troubleshoot problems while developing POCs. Architect/Big Data specialty certification in AWS/Azure/GCP (general, e.g. Coursera or a similar learning platform, or any ML).
Measures Of Outcomes: Percentage of billable time spent in a year developing and implementing data transformation or data storage. Number of best practices documented for any new tool or technology emerging in the market. Number of associates trained on the data service practice.
Outputs Expected
Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders including standards organizations, regulatory bodies, operators and scientific research communities, or attend conferences with respect to data in cloud.
Operational Management: Help architects to establish governance, stewardship and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications and systems to support data technology goals. Collaborate with project managers and business teams for all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility and multi-platform integration.
Project Control And Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics.
Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards and other knowledge articles for data management. Conduct and facilitate knowledge sharing and learning sessions across the team. Gain industry-standard certifications on technology or area of expertise. Support technical skill building (including hiring and training) for the team based on inputs from the project manager/RTEs. Mentor new members of the team in technical areas. Gain and cultivate domain expertise to provide the best and optimized solution to the customer (delivery).
Requirement Gathering And Analysis: Work with customer business owners and other teams to collect, analyze and understand the requirements, including NFRs/define NFRs. Analyze gaps/trade-offs based on current system context and industry practices; clarify the requirements by working with the customer. Define the systems and sub-systems that define the programs.
People Management: Set goals and manage performance of team engineers. Provide career guidance to technical specialists and mentor them.
Alliance Management: Identify alliance partners based on the understanding of service offerings and client requirements. In collaboration with the architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and relevance to the program.
Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes and tools to arrive at the architecture options best fit for the client program. Analyze cost vs. benefits of solution options. Support Architects II and III to create a technology/architecture roadmap for the client. Define architecture strategy for the program.
Innovation And Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices.
Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.
Stakeholder Management: Monitor the concerns of internal stakeholders like Product Managers & RTEs and external stakeholders like client architects on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand professional network in the client organization at team and program levels.
New Service Design: Identify potential opportunities for new service offerings based on customer voice/partner inputs. Conduct beta testing/POC as applicable. Develop collaterals and guides for GTM.
Skill Examples: Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under guidance of architects. Use technology knowledge to create proofs of concept (POC) and (reusable) assets under the guidance of the specialist. Apply best practices in own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide and defend the technology choices made; review solutions under guidance. Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST. Use independent knowledge of design patterns, tools and principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by specialists for efficiency (consumption of hardware, memory and memory leaks, etc.). Use knowledge of software development processes, tools & techniques to identify and assess incremental improvements for the software development process, methodology and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with a clear understanding of memory leakage and related impact.
Implement global standards and guidelines relevant to programming and development; come up with 'points of view' and new technological ideas. Use knowledge of project management & Agile tools and techniques to support, plan and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies. Use knowledge of project metrics to understand relevance in the project; collect and collate project metrics and share with the relevant stakeholders. Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place. Strong proficiency in understanding data workflows and dataflow. Attention to detail. High analytical capabilities.
Knowledge Examples: Data visualization. Data migration. RDBMSs (relational database management systems). SQL. Hadoop technologies like MapReduce, Hive and Pig. Programming languages, especially Python and Java. Operating systems like UNIX and MS Windows. Backup/archival software.
Additional Comments: Snowflake Architect
Key Responsibilities:
Solution Design: Designing the overall data architecture within Snowflake, including database/schema structures, data flow patterns (ELT/ETL strategies involving Snowflake), and integration points with other systems (source systems, BI tools, data science platforms).
Data Modeling: Designing efficient and scalable physical data models within Snowflake. Defining table structures, distribution/clustering keys, data types, and constraints to optimize storage and query performance.
Security Architecture: Designing the overall security framework, including the RBAC strategy, data masking policies, encryption standards, and how Snowflake security integrates with broader enterprise security policies.
Performance and Scalability Strategy: Designing solutions with performance and scalability in mind.
Defining warehouse sizing strategies, query optimization patterns, and best practices for development teams. Ensuring the architecture can handle future growth in data volume and user concurrency. Cost Optimization Strategy: Designing architectures that are inherently cost-effective. Making strategic choices about data storage, warehouse usage patterns, and feature utilization (e.g., when to use materialized views, streams, tasks). Technology Evaluation and Selection: Evaluating and recommending specific Snowflake features (e.g., Snowpark, Streams, Tasks, External Functions, Snowpipe) and third-party tools (ETL/ELT, BI, governance) that best fit the requirements. Standards and Governance: Defining best practices, naming conventions, development guidelines, and governance policies for using Snowflake effectively and consistently across the organization. Roadmap and Strategy: Aligning the Snowflake data architecture with overall business intelligence and data strategy goals. Planning for future enhancements and platform evolution. Technical Leadership: Providing guidance and mentorship to developers, data engineers, and administrators working with Snowflake. Key Skills: Deep understanding of Snowflake's advanced features and architecture. Strong data warehousing concepts and data modeling expertise. Solution architecture and system design skills. Experience with cloud platforms (AWS, Azure, GCP) and how Snowflake integrates. Expertise in performance tuning principles and techniques at an architectural level. Strong understanding of data security principles and implementation patterns. Knowledge of various data integration patterns (ETL, ELT, Streaming). Excellent communication and presentation skills to articulate designs to technical and non-technical audiences. Strategic thinking and planning abilities. Looking for candidates with 12+ years of experience to join our team. Skills: Snowflake, Data modeling, Cloud platforms, Solution architecture
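The warehouse-sizing and cost-optimization responsibilities above come down to simple credit arithmetic: Snowflake bills compute in credits per hour, with the rate roughly doubling at each warehouse size step (XS = 1 credit/hour). The sketch below illustrates that trade-off; the price per credit varies by contract and is an assumed placeholder here.

```python
# Hypothetical cost-estimation sketch for Snowflake warehouse sizing.
# Credit rates follow the doubling-per-size pattern (XS = 1 credit/hour);
# the $3/credit default is an assumption, not a quoted price.

CREDITS_PER_HOUR = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16, "2XL": 32}

def monthly_compute_cost(size: str, hours_per_day: float,
                         days: int = 30, price_per_credit: float = 3.0) -> float:
    """Estimated monthly dollar cost for one warehouse running a fixed schedule."""
    credits = CREDITS_PER_HOUR[size] * hours_per_day * days
    return credits * price_per_credit

# A Medium warehouse running 8 hours/day for 30 days at $3/credit:
# 4 credits/hr * 8 * 30 = 960 credits -> $2880.0
```

This is why an architect might prefer two Small warehouses with aggressive auto-suspend over one always-on Large: the bill tracks hours-at-size, not data volume.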
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Arctera Arctera keeps the world’s IT systems working. We can trust that our credit cards will work at the store, that power will be routed to our homes and that factories will produce our medications because those companies themselves trust Arctera. Arctera is behind the scenes making sure that many of the biggest organizations in the world – and many of the smallest too – can face down ransomware attacks, natural disasters, and compliance challenges without missing a beat. We do this through the power of data and our flagship products, Insight, InfoScale and Backup Exec. Illuminating data also helps our customers maintain personal privacy, reduce the environmental impact of data storage, and defend against illegal or immoral use of information. It’s a task that continues to get more complex as data volumes surge. Every day, the world produces more data than it ever has before. And global digital transformation – and the arrival of the age of AI – has set the course for a new explosion in data creation. Joining the Arctera team, you’ll be part of a group innovating to harness the opportunity of the latest technologies to protect the world’s critical infrastructure and to keep all our data safe. Job Summary Respond to customer inquiries (voice or other digital communications) for an assigned product(s) within a Technical Support Engineer environment. Provide technical support to Arctera customers, partners, and field support staff with varying levels of support maintenance entitlements i.e., entry level, through to premier level entitlements, with focus on diagnosing, troubleshooting, and debugging Arctera software and hardware, including cloud platforms. Position provides an opportunity to continuously develop technical skills through learning and supporting a platform that brings together availability, protection, and insights for our customers. Position requires a motivated, self-starter and self-learner with a customer-first attitude. 
Primary Accountabilities Provide technical support for Arctera products, on-premises and in cloud platforms. Answer technical questions from customers, partners, and field reps. Resolve cases per productivity, performance and SLA standards and support goals. Document and diagnose system issues resulting in production outages. Research, document, and collaborate on cases as required. Author or update technical documents in Arctera Knowledge Base. Address multiple issues simultaneously, with a case for each issue raised. Establish close interactions with team members. Determine when necessary to engage team members to enable timely case resolution. Participate in weekly meetings and forums with other technical support engineers. Participate in evaluation of new products and features. Knowledge Core Technologies: Experience across one or more of the following. Operating Systems: Windows Servers. System Administration: Server Hardware, Software, maintenance, and troubleshooting. Networking: TCP/IP, TLS, PKI, Firewalls, Routing, VLANs, Link Aggregation (802.3ad, balanced-alb), Authentication (LDAP, Active Directory), DNS, NFS, CIFS. Storage: LVM, RAID, DAS, SAN, NAS, Software-Defined Storage, SAS, Fibre Channel. Diagnostics: Log Analysis, Process Tracing, Debugging, Kernel Panic, Root Cause Analysis. Observability: Application Performance Management, reliability, availability, and serviceability. Infrastructure: Data Center Operations / Management. Arctera product offerings. Additional knowledge: Working knowledge in one or more of the following. Enterprise Information Systems, Application Servers, and Hardware Infrastructure. Virtualization: VMware, Hyper-V, RHV, Nutanix, and Containers (Docker, Podman). Databases: Microsoft SQL Server / MySQL / PostgreSQL. Oracle Database. IBM DB2. Microsoft Exchange / Microsoft 365. Storage: DAS/NAS/SAN: Switches, Zoning, HBA, SFP, WWN, WWPN. Cloud: Object Storage (AWS, Azure, GCP) and on-premises disaster recovery solutions. 
Basic familiarity with SaaS, PaaS, IaaS, and APIs. Clustering and High Availability systems. Experience with scripting languages (e.g., Python, Perl, and PHP) is beneficial. Skills & Competencies Customer Service Positive attitude and customer-centric mindset. Commitment to delivering customer value. Assist customers on live calls via remote assistance. Collaboration Engagement with peers in an open and collaborative environment. Ability to work with multiple stakeholders: Sales, Engineering, Development. Demonstrate a strong sense of willingness to learn, share, and work together as a team. Communication Skills Effective customer relationship management. Capable of navigating customer expectations with empathy. Active and reflective listening, problem solving and troubleshooting techniques. Clear and concise technical documentation: Problem Statement, Case Notes, Knowledge Articles. Ability to simplify technical topics in common terms. Time management Plan and prioritize activities effectively. Ability to pivot swiftly to meet customer needs. Apply flexibility and adapt to changing priorities in a dynamic working environment. Maximize engagement with team members to effectively drive case resolution. Troubleshooting Apply decision making and problem-solving techniques. Use systems knowledge to formulate a clear problem statement. Ability to trace application faults at a process level in distributed system environments. Think quickly and react to situations with customer impact. Ability to break down complex problems into simple components. Preferred Certifications CompTIA: Network+, Server+. Cloud Certifications: Amazon, Microsoft, Google. Job Complexity Works on problems of moderate scope where analysis of situations or data requires a review of a variety of factors. Exercises judgment within defined procedures and practices to determine appropriate action. Builds productive internal/external working relationships. 
Supervision Normally receives general instructions on routine work, detailed instructions on new assignments under general supervision. Follows established directions. Work is reviewed for accuracy and overall adequacy. Experience / Education / Qualifications Diploma holders / Graduates / Postgraduates in Engineering / Science. 3+ years of Sys Admin or related enterprise Technical Support experience. Certification in one’s product area. 3+ years’ experience providing 2nd/3rd level support in an enterprise-class product company, or 3+ years’ experience working in a LIVE production environment or datacenter with heterogeneous IT infrastructure. 2+ years’ experience with public and/or private cloud platforms preferred.
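The Log Analysis and scripting skills this posting lists can be illustrated with a minimal triage sketch: count ERROR lines per component to find the noisiest subsystem before deep-diving. The log format here (`<timestamp> <LEVEL> <component>: <message>`) and the sample lines are assumptions for illustration only.

```python
import re
from collections import Counter

# Assumed log format: "<timestamp> <LEVEL> <component>: <message>"
LOG_LINE = re.compile(r"^\S+ (?P<level>[A-Z]+) (?P<component>[\w.-]+):")

def error_hotspots(lines):
    """Return (component, error_count) pairs, noisiest first."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            counts[m.group("component")] += 1
    return counts.most_common()

logs = [
    "2024-01-01T10:00:00 INFO backup: job started",
    "2024-01-01T10:01:00 ERROR storage: RAID degraded",
    "2024-01-01T10:02:00 ERROR storage: disk timeout",
    "2024-01-01T10:03:00 ERROR network: DNS lookup failed",
]
# error_hotspots(logs) -> [('storage', 2), ('network', 1)]
```

A support engineer would run something like this against a collected log bundle to form the initial problem statement before root-cause analysis.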
Posted 1 week ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Role Grade Level (for internal use): 11 The Team As a member of the Data Transformation team you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged to take thoughtful risks and show self-initiative. The Impact The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of products while enhancing existing ones, aimed at solving high-impact business problems. What’s In It For You Be a part of a global company and build solutions at enterprise scale Collaborate with a highly skilled and technically strong team Contribute to solving high-complexity, high-impact problems Key Responsibilities Design, develop and deploy ML-powered products and pipelines Play a central role in all stages of the data science project life cycle, including: Identification of suitable data science project opportunities Partnering with business leaders, domain experts, and end-users to gain business understanding, data understanding, and collect requirements Evaluation/interpretation of results and presentation to business leaders Performing exploratory data analysis, proof-of-concept modelling, model benchmarking and setting up model validation experiments Training large models both for experimentation and production Develop production-ready pipelines for enterprise-scale projects Perform code reviews & optimization for your projects and team Spearhead deployment and model scaling strategies Stakeholder management and representing the team in front of our leadership Leading and mentoring by example including 
project scrums What We’re Looking For 7+ years of professional experience in the Data Science domain Expertise in Python (NumPy, Pandas, spaCy, sklearn, PyTorch/TF2, HuggingFace etc.) Experience with SOTA models related to NLP and expertise in text matching techniques, including sentence transformers, word embeddings, and similarity measures Expertise in probabilistic machine learning models for classification, regression & clustering Strong experience in feature engineering, data preprocessing, and building machine learning models for large datasets Exposure to Information Retrieval, Web scraping and Data Extraction at scale OOP design patterns, Test-Driven Development and Enterprise System design SQL (any variant, bonus if this is a big data variant) Linux OS (e.g. bash toolset and other utilities) Version control system experience with Git, GitHub, or Azure DevOps Problem-solving and debugging skills Software craftsmanship, adherence to Agile principles and taking pride in writing good code Techniques to communicate change to non-technical people Nice to have Prior work to show on GitHub, Kaggle, StackOverflow etc. Cloud expertise (AWS and GCP preferably) Expertise in deploying machine learning models in cloud environments Familiarity with LLMs What’s In It For You? Our Purpose Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. 
Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our Benefits Include Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. 
Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring And Opportunity At S&P Global At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315680 Posted On: 2025-05-20 Location: Gurgaon, Haryana, India
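The text-matching stack this posting asks for (sentence embeddings plus similarity measures) reduces, at its core, to comparing vectors. The sketch below shows the similarity half with plain cosine similarity; a real system would obtain the vectors from a sentence-transformer model, and the 3-dimensional vectors here are made up purely for illustration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" -- in practice these come from an encoder model.
query = [0.9, 0.1, 0.0]
candidates = {
    "quarterly revenue rose": [0.8, 0.2, 0.1],
    "the cat sat on the mat": [0.0, 0.1, 0.9],
}
best = max(candidates, key=lambda k: cosine_similarity(query, candidates[k]))
# best == "quarterly revenue rose"
```

Ranking candidates by this score is the basic retrieval step behind semantic search and text-matching pipelines.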
Posted 1 week ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 16 years full time education Cloud Database Engineer HANA Required Skills: SAP HANA Database Administration - Knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability Proficiency in monitoring and maintaining the health and performance of high availability systems Experience with public cloud platforms such as GCP, AWS, or Azure Strong troubleshooting skills and the ability to provide effective resolutions for technical issues Desired Skills: Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres. Growth and product mindset and a strong focus on automation. Working knowledge of Kubernetes for container orchestration and scalability. Activities: Collaborate closely with cross-functional teams to gather requirements and support SAP teams to execute database initiatives. Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments. Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and previously agreed. Act as the point of contact for escalated technical issues with our Engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions to unblock our partners. Requirements: Bachelor’s degree in computer science, Engineering, or a related field. Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems. 
Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes. Strong troubleshooting skills and the ability to provide effective resolutions for technical issues. Familiarity with public cloud platforms such as GCP, AWS, or Azure. Understands Agile principles and methodologies.
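The posting's emphasis on monitoring the health of high-availability, replicated databases can be sketched as a lag check: compare the primary's and replica's log positions against a threshold. The threshold, the status strings, and the integer log-sequence numbers here are all assumptions for illustration; a real check would query the DBMS's replication views.

```python
# Hypothetical health-check sketch for a replicated database pair.

def replication_status(primary_lsn: int, replica_lsn: int,
                       max_lag: int = 100) -> str:
    """Classify replica health from log-sequence-number lag."""
    lag = primary_lsn - replica_lsn
    if lag < 0:
        return "error: replica ahead of primary"
    if lag <= max_lag:
        return "healthy"
    return f"degraded: lag={lag}"

# replication_status(5000, 4990) -> "healthy"
# replication_status(5000, 4000) -> "degraded: lag=1000"
```

A monitoring loop would run this on a schedule and raise an alert when the status leaves "healthy", which is the kind of proactive check the role describes.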
Posted 1 week ago
1.0 - 2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This is NewCold NewCold is a service provider in cold chain logistics with a focus on the development and operation of large, highly automated cold stores. NewCold strives to be crucial in the cold chain of leading food companies by offering advanced logistics services worldwide. NewCold is one of the fastest-growing companies (over 2,000 employees) in cold chain logistics and is expanding its teams to support this growth. It uses the latest technology that empowers people to handle food responsibly and guarantee food safety in a sustainable way. NewCold challenges the industry, believes in long-term partnerships, and delivers solid investment opportunities that enable next-generation logistics solutions. NewCold has market-leading in-house expertise in designing, engineering, developing and operating state-of-the-art automated cold stores: a result of the successful development and operation of over 15 automated warehouses across three continents. With the prospect of many new construction projects around the world in the very near future, this vacancy offers an interesting opportunity to join an internationally growing and ambitious organization. Job Title: AI Associate - Machine Learning Location: Bengaluru Experience: 1-2 Years Compensation: Up to 15 Lakhs PA Position: AI Associate – Machine Learning Join our growing AI team as an AI Associate (ML) and build real-world machine learning solutions that directly impact business performance. This is an exciting opportunity to work with experienced professionals, apply your skills to live data problems, and grow your career in a fast-paced, collaborative environment. Your Role: You’ll help design, train, and deploy ML models that power applications in supply chain, logistics, finance, and operations. From predicting delivery times to detecting anomalies in large datasets, your work will drive smarter decision-making. 
What You’ll Do: Build ML models for forecasting, classification, clustering, and anomaly detection using real business data. Work on the full ML lifecycle: data prep, feature engineering, model selection, evaluation, and deployment. Collaborate with cross-functional teams (engineering, data, operations) to understand business needs and build ML-driven solutions. Deploy and monitor models in production environments using APIs and MLOps tools. Document experiments and contribute to reusable ML components and workflows. What We’re Looking For: B.Tech / M.Tech in Computer Science, Engineering, or related fields. 1-2 years of experience applying machine learning in real-world scenarios. Strong programming skills in Python and familiarity with ML libraries (scikit-learn, XGBoost, LightGBM, PyTorch, etc.). Hands-on experience working with structured and semi-structured data. Familiarity with cloud platforms (Azure preferred) and tools for ML deployment. Bonus: Experience with time series, anomaly detection, or working in supply chain/logistics. Why Join Us? Be part of a growing AI team solving real industry challenges. Work on high-impact projects with supportive mentors and a strong learning culture. Gain experience in production ML pipelines and cloud deployment at scale. Access opportunities to grow in computer vision, LLMs, or advanced MLOps.
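The anomaly-detection work described above has a classic stdlib-only baseline: flag observations more than a chosen number of standard deviations from the mean. The delivery-time numbers below are invented for illustration, and production pipelines would use richer models than a z-score, but this is the usual starting point.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Return values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

delivery_minutes = [31, 29, 30, 32, 28, 30, 31, 29, 120]  # one late outlier
# zscore_anomalies(delivery_minutes, threshold=2.0) -> [120]
```

Note that a single extreme value inflates the standard deviation itself, so the threshold matters: robust variants swap the mean/stdev for the median and MAD.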
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Position Summary Analyst – NOC (Network Operations Center) Engineer - Deloitte Support Services India Private Limited The Monitoring & Control function is accountable for the delivery of end-to-end infrastructure monitoring for the NL member firm – including Server Monitoring, Application Monitoring, Network Monitoring and Storage/Backup Monitoring services, as well as the Azure platforms underpinning SAP and Enterprise (IT) Services. It also provides technology support to the multiple service lines in NL IT, delivering monitoring solutions. Work you’ll do The Network Operations Center Engineer is responsible for monitoring all NL IT datacenters and infrastructure devices & services in a 24x7 working environment with rotational week offs. This includes working with multiple NL IT teams such as Application Services, Platform, Storage, Network, SAP, and ITSM, who are responsible for delivering several services to clients. The Network Operations Center Engineer’s remit encompasses core monitoring services with the help of SCOM. Reporting to the Monitoring & Control team manager, you will be expected to handle day-to-day operations during the allocated shift with sound knowledge of IT infrastructure. Responsibilities Strategic Exploring ways of improving processes and procedures to support our infrastructure NL IT Service Management function engagements for quick resolutions Problem solving, critical & analytical skills while handling daily operational activities Operational Monitoring the health of the infrastructure and ICT services of the NL IT infrastructure Monitoring the performance and capacity of systems. Proactive and reactive escalation of potential disruptions in our IT services Detect, analyze and log events, errors and alarms concerning the infrastructure Investigate and resolve 1st level of Alerts and Incidents Analyze, identify trends, and come up with possible solutions for bulk alerts. 
Registration of incidents/changes/requests in the SNOW ticketing system and assigning them to respective teams for resolution Coordinating with Infra and other technical teams for alert resolutions and Major Incidents/outages Capable of taking ownership of day-to-day operational activities and issues Coordinating and communicating during monthly maintenance window activities Contribute to the creation of technical working documents and procedures Share knowledge of new solutions with the Monitoring and Control team Regularly and efficiently handle the requests and incidents which enter our ticketing system The team At Deloitte, we’re all about collaboration. And nowhere is this more apparent than among our 2,000-strong internal services team. With our combined specialist skills, we provide all the essential support and advice our client-facing colleagues need, right across the firm. This enables them to focus all their efforts on delivering the best service possible to their clients. Covering seven distinct areas: Human Resources, Clients & Industries, Finance & Legal, Practice Support Services, Quality & Risk Services, IT Services, and Workplace Services & Real Estate, together we live, breathe, and deliver the Deloitte experience. About Deloitte Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee (“DTTL”), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as “Deloitte Global”) does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the “Deloitte” name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms. 
Copyright © 2017 Deloitte Development LLC. All rights reserved. Location: Hyderabad Experience: 2 to 4 years Work Shift Timings: 24x7, 3 shifts, rotational working environment Morning shift (5:30 a.m. to 2:30 p.m. IST) Afternoon shift (02:00 p.m. to 11:00 p.m. IST) Night shift (09:00 p.m. to 06:00 a.m. IST) Rotational week offs depending upon business requirements Qualifications Bachelor of Engineering / Bachelor of Technology 2+ years’ experience in a similar role in an enterprise organisation. Essential Exceptional communication skills, both written and verbal A strong track record of delivering continual service improvement Be able to communicate technical issues effectively to technical and non-technical audiences Knowledge of IT infrastructures, infra networks and associated protocols such as TCP/IP and SNMP, and System Center suite components (SCCM, SCVMM, SCOM, DPM, FCM) Understanding of clustering, failover and high availability concepts. Knowledge and hands-on support skills on Microsoft Windows 2016/2019/2022 server. Well acquainted with server/application monitoring through SCOM 2019/2022. Hands-on knowledge of infra backup monitoring through System Center DPM. Knowledge of IIS, SharePoint, SQL DB/Availability Groups and a basic understanding of SAN and data center infrastructure. Basic understanding of cloud infrastructure such as Azure, AWS. Hands-on knowledge of the ServiceNow ticketing system for creating and managing Incidents, Problems, and Changes A solid understanding of the ITIL framework Experience recording and maintaining incidents & requests within a ticketing system Good to have Understanding/experience of Dynatrace, SolarWinds ITIL certification Azure AZ-900 certification MCSE certification, CCNA knowledge All-round experience of infrastructure areas. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. 
Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300874
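The bulk-alert triage responsibility in this NOC role (identifying trends across many alerts rather than raising a ticket per event) can be sketched as simple grouping: collapse raw alerts by (host, alert type) and rank by volume. The alert dictionary shape and field names below are assumptions for illustration.

```python
from collections import Counter

def group_alerts(alerts):
    """Return (host, alert_type) pairs ordered by alert volume, noisiest first."""
    return Counter((a["host"], a["type"]) for a in alerts).most_common()

alerts = [
    {"host": "sql01", "type": "cpu_high"},
    {"host": "sql01", "type": "cpu_high"},
    {"host": "web02", "type": "disk_full"},
    {"host": "sql01", "type": "cpu_high"},
]
# group_alerts(alerts) -> [(('sql01', 'cpu_high'), 3), (('web02', 'disk_full'), 1)]
```

One flapping source then becomes a single trend to investigate instead of dozens of duplicate incidents in the ticketing queue.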
Posted 1 week ago
18.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business. Brief About The Team & Fractal Fractal Analytics is a strategic analytics partner to the most admired Fortune 500 companies globally and helps them power every human decision in the enterprise by bringing analytics & AI to the decision-making process. We deliver insight, innovation, and impact to them by leveraging Big Data, analytics and technology and help them drive smarter, faster, and more accurate decisions in every aspect of their business. Role Brief The Client Partner is responsible for providing best-in-class analytics delivery services for the cluster of clients s/he is responsible for. The individual will lead delivery and operations for the consulting and analytics engagements with the clear responsibility of managing and helping grow revenues profitably. S/he will manage cross-functional teams and should have a client service orientation with the ability to influence the client's decision-making process, improving the overall client engagement profitably. Key Expectations As a Consulting leader embedded within the client, the person will be expected to provide best-in-class domain and analytics leadership to his/her team. Insights, impact, and innovation delivered to clients, measured by metrics like NPS, innovative solutions created, and positive impact on KPIs critical to the client. Drive profitable business through managing margins, cost of delivery and every associated metric while ensuring complete client satisfaction. Job Responsibilities As a delivery lead, grow revenues by identifying opportunities to scale existing projects, develop new solutions for the client, drive productivity improvements and efficiency to enhance gross margins, and bring about optimized staffing across all engagements. 
Drive client-specific business planning, forecasting, budgeting, and measurements / engage with clients to identify opportunities to institutionalize analytics and engineering across client organizations Lead a team of highly motivated individuals and encourage them to develop new capabilities through learning and development and knowledge-sharing initiatives Lead client consulting in D&A delivery by guiding the team in data synthesis, modelling techniques and culling out actionable insights, implications, and recommendations to address the client's business KPIs and/or identify opportunities Ability to attract and retain high-calibre talent for the organization and clearly communicate the vision, goals, and objectives of the organization and guide others in linking their activities to the success of the organization. Work effectively with peer groups in sales & marketing, capability teams and enabling functions to drive higher value-add to clients. Participate in the sales process to scope client needs and consult the client to frame up the solution and delivery approach. The Person: Qualification & Experience An entrepreneurial mindset with a heavy bias for action; executes effectively and efficiently with natural comfort to be hands-on and a willingness to jump in to do what is needed. 18+ years of experience in analytics delivery and business consulting with at least 10+ years of leadership experience along with 7+ years of experience in the CPG domain Experience in the design and review of new solution concepts and leading the delivery of high-impact analytics solutions and programs for global clients. Knowledge of advanced analytics and machine learning techniques such as segmentation/clustering, recommendation engines, propensity models, and forecasting to drive growth throughout the customer lifecycle. Should be able to evaluate and bring in new advanced techniques to enhance the value-add for clients. 
Good to have familiarity with engineering projects, such as developing end-to-end business applications. Should be able to apply CPG domain knowledge to functional areas like market size estimation, business growth strategy, impact of government policies on product, strategic revenue management, and marketing effectiveness. Must have excellent project/program management skills and experience managing multiple work streams and projects at one time. Have the business acumen to manage revenues profitably and meet financial goals consistently. Able to quantify business value for clients and create win-win commercial propositions. Good thought leadership and the ability to structure and solve business problems, innovating where required. Outstanding presentation and communication skills (oral and written), with the ability to inspire others to make informed decisions. Must have the ability to adapt to changing business priorities in a fast-paced business environment. Education: Post-graduate degree in a quantitative discipline from a reputed institute. If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts as new job postings become available that meet your interest!
Posted 1 week ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Project Role : Software Development Engineer Project Role Description : Analyze, design, code and test multiple components of application code across one or more clients. Perform maintenance, enhancements and/or development work. Must have skills : SAP HANA DB Administration, PostgreSQL Administration, Hadoop Administration, Ansible on Microsoft Azure Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 16 years full time education

Cloud Database Engineer HANA
Required Skills: SAP HANA Database Administration - Knowledge of clustering, replication, and load balancing techniques to ensure database availability and reliability. Proficiency in monitoring and maintaining the health and performance of high-availability systems. Experience with public cloud platforms such as GCP, AWS, or Azure. Strong troubleshooting skills and the ability to provide effective resolutions for technical issues.
Desired Skills: Understanding of Cassandra, Ansible, Terraform, Kafka, Redis, Hadoop or Postgres. Growth and product mindset and a strong focus on automation. Working knowledge of Kubernetes for container orchestration and scalability.
Activities: Collaborate closely with cross-functional teams to gather requirements and support SAP teams in executing database initiatives. Automate the provisioning and configuration of cloud infrastructure, ensuring efficient and reliable deployments. Provide operational support to monitor database performance, implement changes, and apply new patches and versions when required and previously agreed. Act as the point of contact for escalated technical issues with our Engineering colleagues, demonstrating deep troubleshooting skills to provide effective resolutions to unblock our partners.
Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. Proven experience in planning, deploying, supporting, and optimizing highly scalable and resilient SAP HANA database systems.
Ability to collaborate effectively with cross-functional teams to gather requirements and convert them into measurable scopes. Strong troubleshooting skills and the ability to provide effective resolutions for technical issues. Familiarity with public cloud platforms such as GCP, AWS, or Azure. Understands Agile principles and methodologies.
Posted 1 week ago
10.0 - 12.0 years
20 - 30 Lacs
Greater Noida
Hybrid
Experience Required: 10+ years in Database Administration Key Responsibilities: Oversee and manage enterprise SQL Server databases, ensuring high availability, security, and performance tuning. Implement database backup, recovery, and disaster recovery strategies. Monitor database health, conduct capacity planning, and optimize performance. Lead database upgrades, migrations, and maintenance activities. Ensure compliance with data security policies, access control, and regulatory standards. Collaborate with development, infrastructure, and security teams to support business objectives. Implement automation for database deployments, monitoring, and maintenance. Mentor and guide a team of DBAs, ensuring best practices are followed. Manage vendor relationships for database licensing, support, and troubleshooting. Technical Skills Required: Must: SQL Server Administration (performance tuning, Always On, High Availability, Clustering). Good to Know: PostgreSQL Administration (basic to intermediate knowledge). Understanding of PowerShell and shell scripting for database automation is an advantage. Soft Skills Required: Strong leadership and team management skills. Ability to troubleshoot and resolve complex database issues. Excellent communication skills to interact with cross-functional teams. Experience in working under tight deadlines and managing multiple projects.
Posted 1 week ago
9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
As Manager, Data Analytics, you'll be a lead in developing and driving our team and our business to get the most out of data. You'll push thinking and delivery across multiple accounts, leveraging your deep experience in data solutions to ensure success. You'll coach and develop colleagues to achieve continually high performance and outputs. This position works with multiple teams and requires communicating effectively with people of all job levels and personalities. The ideal applicant will be eager to help on a variety of projects and continuously look to grow their skills.

Your Impact
Delivering analytics and optimization solutions. Strong analytical skills and a data-driven approach to problem solving. Deep understanding of online marketing disciplines and how they contribute to customer acquisition, conversion, and retention. Demonstrated experience working across multi-disciplinary teams using analytics to drive connected solutions which impact customer outcomes. Representing Sapient as an expert in the Digital Analytics industry with deep technical expertise. Ability to drive and manage analytics and business teams through requirements gathering, solution design, planning and implementation, and reporting.

Qualifications
Your Skills & Experience: Experience in various marketing analytics models such as Market Mix Modeling, Attribution Modelling, visitor behavior analysis, and customer sentiment analysis. Experienced in Marketing Performance analysis, i.e. data aggregation (leveraging marketing & click-stream APIs, data cleaning & transformation), analysis & segmentation, targeting & integration, and advanced analytics (lead scoring, churn analysis, lifetime value analysis, clustering, regression, forecasting, etc.). Experience working in R/Python to run statistical analysis.
Experience working on cloud data platforms. Exposure to CDP and/or DMP platforms and hands-on experience in integrating digital data with these platforms is a must. Proven experience in testing, both A/B and MVT, using tools such as Maxymiser, Adobe Target, Google Web Optimizer, and Qubit. Bachelor's degree and 9-12 years of working experience in data analytics, with a marketing team on campaign planning, campaign analytics, or business analytics.

Set Yourself Apart With
Hands-on experience in multiple industries. Strong articulation skills. Self-starter who requires minimal oversight. Ability to prioritize and manage multiple tasks. Process orientation and the ability to define and set up processes. Ability to manage, lead and grow teams. Ability to work with distributed teams and stakeholders.

Additional Information
A Tip From The Hiring Manager: Join the team to sharpen your skills and expand your collaborative methods. Make an impact on our clients and their businesses directly through your work.

Additional Information
Gender-Neutral Policy. 18 paid holidays throughout the year. Generous parental leave and new parent transition program. Flexible work arrangements. Employee Assistance Programs to help you in wellness and well-being.

Company Description
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across disciplines to create the products and services their customers truly value.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Dear Aspirants, Greetings from ValueLabs! We are looking for an AIML Engineer. Role: AIML Engineer Skill set: AI/ML, NLP, Deep Learning, Python, LLM, Gen-AI, RAG implementation Experience: 5+ years NP: Immediate Location: Hyderabad - Hybrid Job Description - Proficiency in Python to manipulate data and draw insights from large data sets is a must. - Knowledge of a variety of machine learning techniques (clustering, decision trees, boosting, artificial neural networks, etc.) and their real-world advantages/drawbacks. - Knowledge of popular ML and non-ML libraries: TensorFlow, Torch, scikit-learn, etc. - Solid understanding of machine learning concepts and techniques, particularly in NLP. - Experience with popular NLP and computer vision libraries is a plus: spaCy, NLTK, OpenCV, etc. - Knowledge of popular cloud infrastructure: Google Cloud, AWS, Microsoft Azure, etc. - Experience in code management using Git Secondary Skills - Strong problem-solving skills and the ability to work collaboratively in a team environment. - Excellent written and verbal communication skills. - Experience in leveraging large language models (LLMs) and frameworks such as LangChain and LangSmith for creating and optimizing AI-driven conversational applications.
Posted 1 week ago
5.0 years
0 Lacs
Delhi, India
On-site
TCS Hiring for Network Data Experience Range: 5 to 8 years Job Location: New Delhi Job Description 1. Experience in designing, supporting and implementing IP-based networks for large enterprises. 2. Strong knowledge and experience with Cisco Nexus switches, ASR, ISR and 9000 series, 4500, Catalyst switches, etc. 3. Strong knowledge and experience with Cisco routing and switching protocols (i.e. BGP, EIGRP, MPLS, QoS, STP, VTP, etc.) 4. Configuring Cisco Wireless Access Points & controllers and Cisco ISE. 5. Hands-on experience with Cisco firewalls and clustering. 6. Experience with analyzing traffic and utilizing packet sniffer utilities (i.e. Wireshark, NetScout) 7. Familiarity with management tools such as SolarWinds, ITNM, etc. 8. Expertise in LAN and WAN technologies to provide advanced troubleshooting and escalation support 9. Strong documentation skills and the ability to create high-level and low-level designs that meet business requirements. 10. Switching and routing on Cisco products 11. Configuring and troubleshooting client-to-site and site-to-site VPNs. 12. SDN technologies like ACI, NSX, SD-WAN

Cisco Network Engineer Responsibilities: Analysing existing hardware, software, and networking systems. Creating and implementing scalable Cisco networks according to client specifications. Testing and troubleshooting installed Cisco systems. Resolving technical issues with networks, hardware, and software. Performing speed and security tests on installed networks. Applying network security upgrades. Upgrading/replacing hardware and software systems when required. Creating and presenting networking reports. Training end-users on installed Cisco networking products.

Cisco Network Engineer Requirements: Bachelor's degree in computer science, networking administration, information technology, or a similar field. CCNA or CCNP certification. At least 5 years' experience as a network engineer. Detailed knowledge of Cisco networking systems.
Experience with storage engineering, wide-area networking, and network virtualization. Advanced troubleshooting skills. Ability to identify, deploy, and manage complex networking systems. Good communication and interpersonal skills. Experience with end-user training.
Posted 1 week ago
6.0 years
0 Lacs
India
Remote
Job Title: Network Engineer – IT Location: Remote Employment Type: Contract Experience Level: 3–6 Years Certifications Preferred: CCNA / CCNP

Position Overview
We are looking for a skilled and driven Network Engineer to join our fast-paced and growing IT team. This role is ideal for a technically adept professional with strong networking knowledge, hands-on experience with Cisco technologies, and a passion for maintaining secure, efficient, and highly available network systems. The ideal candidate will demonstrate an ability to design, monitor, and support complex enterprise-level network infrastructures across hybrid environments.

Key Responsibilities
Design, deploy, and maintain secure and scalable network infrastructure, including Cisco MCUs, CMS, and VQCM systems. Configure and manage Cisco and pfSense firewalls, ensuring all security policies and access control protocols are enforced. Monitor network performance using tools such as Nagios and Graylog, proactively identifying and resolving bottlenecks or anomalies. Administer Proxmox virtualized environments and perform system upgrades and patch management. Set up and manage secure remote access solutions using tools like Cisco AnyConnect and Perimeter 81. Provide technical assistance and infrastructure support for Azure, Office 365, and other cloud-based services. Install, configure, and troubleshoot Windows machines; manage Active Directory, user access policies, and group management. Collaborate with IT support and infrastructure teams to resolve technical issues using internal ticketing systems. Support and manage VMware environments, focusing on performance tuning, availability, and resource optimization. Participate in IT projects including network upgrades, hardware refresh cycles, and cloud migrations.

Required Qualifications & Skills
Certifications: Active CCNA or CCNP certification is required.
Cisco Networking: Strong experience with Cisco hardware and software solutions including MCUs, CMS, VQCM, and Cisco firewalls. Firewall Management: Hands-on experience configuring and maintaining pfSense firewalls and managing rule sets. Monitoring Tools: Proficiency in using monitoring tools like Nagios and Graylog for performance and security analysis. Virtualization: Experience with Proxmox and VMware, including setup, clustering, and troubleshooting. Remote Access: Familiarity with secure remote access technologies including Perimeter 81, Cisco AnyConnect, or similar VPN solutions. Cloud Infrastructure: Working knowledge of Microsoft Azure and Office 365 infrastructure, including administration and support. System Administration: Strong background in managing Windows operating systems, Active Directory, and Group Policies. IT Support: Exposure to structured IT support operations and handling issue resolution through a formal ticketing system. Documentation & SOPs: Ability to document network changes, write SOPs, and ensure knowledge sharing within the team. Preferred Qualifications (Nice To Have) Familiarity with ITIL frameworks and change management processes. Exposure to scripting or automation tools for network management. Experience in network segmentation and compliance with industry security standards.
Posted 1 week ago
3.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Project Role : AI / ML Engineer Project Role Description : Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing. Must have skills : Large Language Models Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools, Cloud AI services, and GenAI models. Your role involves implementing deep learning, neural networks, chatbots, and image processing in production-ready quality solutions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions to work-related problems. - Develop applications and systems using AI tools and Cloud AI services. - Implement deep learning and neural networks in solutions. - Create chatbots and work on image processing tasks. - Collaborate with team members to provide innovative solutions. - Stay updated with the latest AI/ML trends and technologies. Professional & Technical Skills: - Must To Have Skills: Proficiency in Large Language Models. - Strong understanding of statistical analysis and machine learning algorithms. - Experience with data visualization tools such as Tableau or Power BI. - Hands-on experience implementing various machine learning algorithms like linear regression, logistic regression, decision trees, and clustering algorithms. - Solid grasp of data munging techniques including data cleaning, transformation, and normalization. Additional Information: - The candidate should have a minimum of 3 years of experience in Large Language Models. - This position is based at our Bengaluru office.
- A 15-year full-time education is required.
Posted 1 week ago
40.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JOB DESCRIPTION The Technology Deployment and Management Service (TDMS) organization is a critical arm of the Oracle FLEXCUBE consulting group. TDMS delivers Oracle Technology services for FSGBU product customers, while the applications team focuses on application customization and setup. We are looking for highly capable, self-motivated and independent Cloud Operations Engineers based in India. If you are passionate about Oracle technology as well as cloud computing, this is the ideal role you've been waiting for. Our team supports technologies that are available both in the Cloud and on-premises. Extensive experience with multiple public cloud providers (OCI, Azure, AWS, GCP) Extensive experience supporting Cloud or PaaS / SaaS production environments Experience with Cloud Services and Cloud Automation solutions Manage and administer cloud platforms on OCI / Azure / AWS hosting enterprise applications and Oracle / MySQL databases on Linux / Windows environments, and hosting infrastructure in accordance with company security guidelines. Experience in providing Level 2/3 support on Public Cloud (OCI, AWS, Azure, etc.) Strong analysis and troubleshooting skills and experience Experience in carrying out cost analysis Automation: experience in the likes of Ansible or CloudFormation Scripting experience in Python, PowerShell or Ansible Platform experience with the likes of RedHat, Linux or Windows advantageous Experience in Containers / VMware Knowledge of ITIL best practices Responsible for developing processes for enforcing cloud governance, architecture, operating procedures, monitoring, and system standards. Respond to incidents, own them and drive them to completion; participate in root cause analysis. Orchestrating and automating cloud-based platforms with a primary focus on OCI, AWS and Azure. Deploying and debugging cloud initiatives as needed in accordance with best practices throughout the development lifecycle.
Employing exceptional problem-solving skills, with the ability to see and solve issues before they snowball into problems. Educating teams on the implementation of new cloud-based initiatives and writing SOPs (Standard Operating Procedures) to accomplish repetitive tasks.

Requirements
Graduate in Computer Science or Engineering. Certification in OCI / AWS / Azure as Solutions Architect is given high priority. Any Cloud Security certification is a plus. Experience in infrastructure setup, services operation, monitoring and governance in public cloud environments (OCI, AWS, Azure). Strong experience working with enterprise application architectures and database (Oracle) clustering and High Availability. Extensive knowledge of Linux / Windows based systems including hardware, software, networking, Cloud storage and fault-tolerant designs. Very strong in writing Puppet modules for deployment automation, Terraform, and scripting languages like Perl, Python, and PowerShell. Experience in DevOps setup procedures and processes, workflow automation, and CI/CD pipeline development. Excellent communication and written skills and the ability to generate and evangelize architectural documentation / diagrams across many teams. Skilled at working in tandem with a team of engineers, or alone as required.

Career Level - IC2

ABOUT US
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer.
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. Job Description The world is how we shape it. Position: Snowflake - Senior Technical Lead Experience: 8-11 years Location: Noida/ Bangalore Education: B.E./ B.Tech./ MCA Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security Good to have Skills: Snowpark, Data Build Tool, Finance Domain Preferred Skills Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing. Experience in data warehousing, with at least 2 years focused on Snowflake. Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration. Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks. Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning. Familiarity with data security, compliance requirements, and governance best practices. Experience in Python, Scala, or Java for Snowpark development. Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM) Key Responsibilities Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost. 
Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe). Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion). Monitor query performance and resource utilization; tune warehouses, caching, and clustering. Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads. Define and enforce role-based access control (RBAC), masking policies, and object tagging. Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured. Establish best practices for dimensional modeling, data vault architecture, and data quality. Create and maintain data dictionaries, lineage documentation, and governance standards. Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets. Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies. Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives. Qualifications BTech/MCA Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.
Here are 5 major cities in India actively hiring for clustering roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi
The average salary range for clustering professionals in India varies with experience. Entry-level positions typically start at around INR 3-6 lakhs per annum, while experienced professionals can earn INR 12-20 lakhs per annum or more.
In the field of clustering, a typical career path may look like:
- Junior Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead
Apart from expertise in clustering, professionals in this field are often expected to have skills in:
- Machine Learning
- Data Analysis
- Python/R programming
- Statistics
Here are 25 interview questions for clustering roles:
- What is clustering and how does it differ from classification? (basic)
- Explain the K-means clustering algorithm. (medium)
- What are the different types of distance metrics used in clustering? (medium)
- How do you determine the optimal number of clusters in K-means clustering? (medium)
- What is the Elbow method in clustering? (basic)
- Define hierarchical clustering. (medium)
- What is the purpose of clustering in machine learning? (basic)
- Can you explain the difference between supervised and unsupervised learning? (basic)
- What are the advantages of hierarchical clustering over K-means clustering? (advanced)
- How does the DBSCAN clustering algorithm work? (medium)
- What is the curse of dimensionality in clustering? (advanced)
- Explain the concept of silhouette score in clustering. (medium)
- How do you handle missing values in clustering algorithms? (medium)
- What is the difference between agglomerative and divisive clustering? (advanced)
- How would you handle outliers in clustering analysis? (medium)
- Can you explain the concept of cluster centroids? (basic)
- What are the limitations of K-means clustering? (medium)
- How do you evaluate the performance of a clustering algorithm? (medium)
- What is the role of inertia in K-means clustering? (basic)
- Describe the process of feature scaling in clustering. (basic)
- How does the GMM algorithm differ from K-means clustering? (advanced)
- What is the importance of feature selection in clustering? (medium)
- How can you assess the quality of clustering results? (medium)
- Explain the concept of cluster density in DBSCAN. (advanced)
- How do you handle high-dimensional data in clustering? (medium)
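Several of these questions (K-means, centroids, inertia, the Elbow method) are easiest to answer after working through a concrete run of the algorithm. Below is a minimal pure-Python sketch of Lloyd's algorithm on invented toy data — a study aid, not production code (use scikit-learn's KMeans in practice); the data and all names are ours, not from any job posting.

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50):
    """Plain Lloyd's algorithm. Real implementations use random or k-means++
    initialization; the first k points are used here so the run is reproducible."""
    centroids = list(points[:k])
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: sq_dist(p, centroids[j]))
            clusters[nearest].append(p)
        # Update step: move each non-empty cluster's centroid to its mean.
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centroids

def inertia(points, centroids):
    """Sum of squared distances to the nearest centroid (what K-means minimizes)."""
    return sum(min(sq_dist(p, c) for c in centroids) for p in points)

# Two well-separated toy blobs: the "right" answer is k=2.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
for k in (1, 2, 3):
    print(k, round(inertia(points, kmeans(points, k)), 2))
# Inertia falls sharply from k=1 (~149.67) to k=2 (~2.67), then flattens:
# that bend is the "elbow" used to pick the number of clusters.
```

With random initialization K-means can converge to different local minima, which is why libraries restart it several times (scikit-learn's `n_init`) and why the centroid-initialization questions above come up in interviews.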
As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!