0 years
0 Lacs
India
On-site
Job Summary:
We are seeking a talented and driven Machine Learning Engineer to design, build, and deploy ML models that solve complex business problems and enhance decision-making capabilities. You will work closely with data scientists, engineers, and product teams to develop scalable machine learning pipelines, deploy models into production, and continuously improve their performance.

Key Responsibilities:
- Design, develop, and deploy machine learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
- Collaborate with data scientists to prepare and preprocess large-scale datasets for training and evaluation.
- Implement and optimize machine learning pipelines and workflows using tools like MLflow, Airflow, or Kubeflow.
- Integrate models into production environments and ensure model performance, monitoring, and retraining.
- Conduct A/B testing and performance evaluations to validate model accuracy and business impact.
- Stay up-to-date with the latest advancements in ML/AI research and tools.
- Write clean, efficient, and well-documented code for reproducibility and scalability.

Requirements:
- Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field.
- Strong knowledge of machine learning algorithms, data structures, and statistical methods.
- Proficient in Python and ML libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost).
- Experience with data manipulation libraries (e.g., pandas, NumPy) and visualization tools (e.g., Matplotlib, Seaborn).
- Familiarity with cloud platforms (AWS, GCP, or Azure) and model deployment tools.
- Experience with version control systems (Git) and software engineering best practices.

Preferred Qualifications:
- Experience in deep learning, natural language processing (NLP), or computer vision.
- Knowledge of big data technologies like Spark, Hadoop, or Hive.
- Exposure to containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines.
- Familiarity with MLOps practices and tools.
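As an illustration of the classification work this role describes, here is a minimal from-scratch k-nearest-neighbours sketch; in practice you would use the scikit-learn/TensorFlow stack the posting names, and the toy dataset below is invented purely for the example.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy dataset of (features, label) pairs -- invented for illustration.
train = [
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((4.0, 4.0), "B"), ((4.2, 3.9), "B"), ((3.8, 4.1), "B"),
]

print(knn_predict(train, (1.1, 1.0)))  # nearest neighbours are all "A"
print(knn_predict(train, (4.1, 4.0)))  # nearest neighbours are all "B"
```

The same train/predict/evaluate loop is what scikit-learn's `KNeighborsClassifier` packages up, with proper train/test splitting and metrics on top.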
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
With more than 45,000 employees and partners worldwide, the Customer Experience and Success (CE&S) organization is on a mission to empower customers to accelerate business value through differentiated customer experiences that leverage Microsoft’s products and services, ignited by our people and culture. We drive cross-company alignment and execution, ensuring that we consistently exceed customers’ expectations in every interaction, whether in-product, digital, or human-centered. CE&S is responsible for all up services across the company, including consulting, customer success, and support across Microsoft’s portfolio of solutions and products. Join CE&S and help us accelerate AI transformation for our customers and the world.

Within CE&S, the Customer Service & Support (CSS) organization builds trust and confidence for every person and organization through delivering a seamless support experience. In CSS, we are powered by Microsoft’s AI technology to help consumers, businesses, partners, and more, resolve their issues quickly and securely, helping prevent future problems from occurring and achieving more from their Microsoft investment.

In the Customer Service & Support (CSS) team we are looking for people with a passion for delivering customer success. As a Technical Support Engineer, you will own, troubleshoot and solve customer technical issues. This opportunity will allow you to accelerate your career growth, hone your problem-solving, collaboration and research skills, and develop your technical proficiency. This role is flexible in that you can work up to 50% from home.

Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

Response and Resolution: You own, investigate and solve customer technical issues, collaborating within and across teams and leveraging troubleshooting tools and practices.
Readiness: You lead or participate in building communities with peer delivery roles and, where appropriate, share your knowledge. You develop specific technical and professional proficiency to enable you to resolve customer issues, through training and readiness.
Product/Process Improvement: You identify potential product defects and escalate appropriately to resolve, contributing to Microsoft product improvements.

Qualifications

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology (IT), or related field AND 1+ years of technical support, technical consulting experience, or information technology experience, OR 3+ years of technical support, technical consulting experience, or information technology experience, OR equivalent experience.
- Language Qualification: English Language, fluent in reading, writing and speaking.
- 5+ years technical support, technical consulting experience, or information technology experience, OR Bachelor's degree in Computer Science, Information Technology (IT), or related field AND 3+ years technical support, technical consulting experience, or information technology experience.
- Windows System Administration and Configuration, including a good basic understanding of: Registry; File Storage; User Accounts and Access Control; Event Logs and Auditing; Performance and Resource Monitor; Networking (TCP/IP).
- Experience with virtualization (Hyper-V is an added advantage).
- Familiarity with additional backup tools and storage solutions.
- Knowledge of installation/setup and troubleshooting of Failover Clustering, backup tools, Hyper-V, Docker, Kubernetes, and storage solutions.
- Troubleshoot and resolve issues related to Windows Failover Clustering, backup tools, Hyper-V, Docker, Kubernetes, storage solutions, and S2D (Software-Defined Storage / hyperconverged environments).
- Knowledge of Azure VM is an added advantage.

Preferred Qualifications:
- Failover Clustering
- Resilient storage technology (clustering, storage spaces)
- Server management tools
- Hyper-V management and VM deployment
- Network tracing and analysis
- Network virtualisation (Hyper-V, SDN)
- Troubleshooting performance issues using PerfMon and other tools
- Azure fundamentals, with experience in resource deployment using ARM templates, Bicep, etc.

Ability to meet Microsoft, customer and/or government security screening requirements is required for this role. These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud Background Check upon hire/transfer and every two years thereafter.

Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
🚀 We’re Hiring: Sr. Consultant – Data Science (Remote)
📍 Location: Remote
📊 Analytics Experience: 8–12 Years
🧠 Machine Learning/Data Science: Min. 7 Years

Are you passionate about solving real-world business problems using advanced analytics and machine learning? We’re looking for a Senior Consultant – Data Science with deep hands-on expertise and leadership skills.

✅ Role Requirements:
🔹 6+ years of experience building statistical models and handling end-to-end analytics projects
🔹 Strong command over Python and SQL
🔹 Hands-on experience with visualization tools like Power BI or Tableau
🔹 Experience with both structured and unstructured data using classification, regression, clustering, NLP, etc.
🔹 Worked closely with clients on analytics opportunity identification, roadmap development, and solution delivery
🔹 Exposure to at least two sectors: Healthcare, Banking, Financial Services, Insurance, E-Commerce

💡 Technical Skills:
- Proficiency in ML techniques: Decision Trees, Random Forest, SVM, Naïve Bayes, KNN, PCA, Clustering
- Text Mining: Entity extraction, sentiment analysis, document summarization
- Exposure to ETL, Salesforce/CRM integration, and data visualization
- Bonus: Working knowledge of Generative AI

📩 Interested? Share your profile at kalpanadeshmukh@livecjobs.com
📞 Or connect with us directly: 7386971110

Let’s build something impactful together!
#Hiring #DataScience #MachineLearning #AnalyticsJobs #RemoteJobs #SeniorConsultant #AI #Leadership #GenerativeAI #LivecJobs
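One of the text-mining skills this posting lists, sentiment analysis, can be illustrated with a toy lexicon-based scorer. The word lists below are invented for the example; production work would use proper NLP tooling rather than this sketch.

```python
# Minimal lexicon-based sentiment scorer (illustrative word lists only).
POSITIVE = {"good", "great", "excellent", "love", "impactful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "slow"}

def sentiment(text: str) -> str:
    """Score text by counting positive vs. negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("great service I love it"))   # two positive hits
print(sentiment("terrible and slow support"))  # two negative hits
```

Real sentiment models handle negation, context, and sarcasm, which a bag-of-words lexicon cannot; this only shows the shape of the task.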
Posted 1 week ago
0.0 years
0 - 0 Lacs
Gurugram
Work from Office
About the Team:
Join a highly skilled and collaborative team dedicated to ensuring data reliability, performance, and security across our organization’s critical systems. We work closely with developers, architects, and DevOps professionals to deliver seamless and scalable database solutions in a cloud-first environment, leveraging the latest in AWS and open-source technologies. Our team values continuous learning, innovation, and the proactive resolution of database challenges.

About the Role:
As a Database Administrator specializing in MySQL and Postgres within AWS environments, you will play a key role in architecting, deploying, and supporting the backbone of our data infrastructure. You’ll leverage your expertise to optimize database instances, manage large-scale deployments, and ensure our databases are secure, highly available, and resilient. This is an opportunity to collaborate across teams, stay ahead with emerging technologies, and contribute directly to our business success.

Responsibilities:
- Design, implement, and maintain MySQL and Postgres database instances on AWS, including managing clustering and replication (MongoDB, Postgres solutions).
- Write, review, and optimize stored procedures, triggers, functions, and scripts for automated database management.
- Continuously tune, index, and scale database systems to maximize performance and handle rapid growth.
- Monitor database operations to ensure high availability, robust security, and optimal performance.
- Develop, execute, and test backup and disaster recovery strategies in line with company policies.
- Collaborate with development teams to design efficient and effective database schemas aligned with application needs.
- Troubleshoot and resolve database issues, implementing corrective actions to restore service and prevent recurrence.
- Enforce and evolve database security best practices, including access controls and compliance measures.
- Stay updated on new database technologies, AWS advancements, and industry best practices.
- Plan and perform database migrations across AWS regions or instances.
- Manage clustering, replication, installation, and sharding for MongoDB, Postgres, and related technologies.

Requirements:
- 4-7 years of experience in database management systems as a Database Engineer.
- Proven experience as a MySQL/Postgres Database Administrator in high-availability, production environments.
- Expertise in AWS cloud services, especially EC2, RDS, Aurora, DynamoDB, S3, and Redshift.
- In-depth knowledge of DR (disaster recovery) setups, including active-active and active-passive master configurations.
- Hands-on experience with MySQL partitioning and AWS Redshift.
- Strong understanding of database architectures, replication, clustering, and backup strategies (including Postgres replication & backup).
- Advanced proficiency in optimizing and troubleshooting SQL queries; adept with performance tuning and monitoring tools.
- Familiarity with scripting languages such as Bash or Python for automation/maintenance.
- Experience with MongoDB, Postgres clustering, Cassandra, and related NoSQL or distributed database solutions.
- Ability to provide 24/7 support and participate in on-call rotation schedules.
- Excellent problem-solving, communication, and collaboration skills.

What we offer:
- A positive, get-things-done workplace
- A dynamic, constantly evolving space (change is par for the course – important you are comfortable with this)
- An inclusive environment that ensures we listen to a diverse range of voices when making decisions.
- Ability to learn cutting-edge concepts and innovation in an agile start-up environment with a global scale
- Access to 5000+ training courses accessible anytime/anywhere to support your growth and development (corporate partnerships with top learning partners like Harvard, Coursera, Udacity)

About us:
At PayU, we are a global fintech investor and our vision is to build a world without financial borders where everyone can prosper. We give people in high growth markets the financial services and products they need to thrive. Our expertise in 18+ high-growth markets enables us to extend the reach of financial services. This drives everything we do, from investing in technology entrepreneurs to offering credit to underserved individuals, to helping merchants buy, sell, and operate online. Being part of Prosus, one of the largest technology investors in the world, gives us the presence and expertise to make a real impact. Find out more at www.payu.com

Our Commitment to Building a Diverse and Inclusive Workforce
As a global and multi-cultural organization with varied ethnicities thriving across locations, we realize that our responsibility towards fulfilling the D&I commitment is huge. Therefore, we continuously strive to create a diverse, inclusive, and safe environment for all our people, communities, and customers. Our leaders are committed to creating an inclusive work culture which enables transparency, flexibility, and unbiased attention to every PayUneer so they can succeed, irrespective of gender, color, or personal faith. An environment where every person feels they belong, that they are listened to, and where they are empowered to speak up. At PayU we have zero tolerance towards any form of prejudice, whether against a specific race, ethnicity, or of persons with disabilities, or the LGBTQ communities.
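The backup and disaster recovery duty this role describes usually comes down to a retention policy. A hypothetical grandfather-father-son scheme might be sketched like this; the 7-day/4-week/12-month windows are illustrative assumptions, not any company's actual policy.

```python
# Sketch of a grandfather-father-son backup retention policy (assumed windows):
# keep dailies for 7 days, Sunday weeklies for 4 weeks, 1st-of-month fulls for a year.
from datetime import date

def keep(backup_day: date, today: date) -> bool:
    """Decide whether a backup taken on `backup_day` should be retained."""
    age = (today - backup_day).days
    if age < 0:
        return False
    if age <= 7:                                  # recent dailies
        return True
    if backup_day.weekday() == 6 and age <= 28:   # Sunday weeklies
        return True
    if backup_day.day == 1 and age <= 365:        # monthly fulls
        return True
    return False

today = date(2024, 6, 15)
print(keep(date(2024, 6, 10), today))  # recent daily -> True
print(keep(date(2024, 5, 1), today))   # monthly full -> True
print(keep(date(2024, 5, 20), today))  # stale mid-week daily -> False
```

A real DBA would drive this from the backup catalog and pair it with restore testing, since retention without verified restores is not disaster recovery.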
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description
Job Overview: Responsible for leading data science initiatives, developing advanced analytics models, and ensuring successful execution of data-driven projects for clients in the retail sector. Will work closely with key client stakeholders to understand their business challenges and leverage data science to deliver actionable insights that drive business growth and efficiency.

- Lead the design, development, and implementation of advanced analytics models, including predictive and prescriptive models, for retail clients.
- Should be able to convert mathematical/statistics-based research into sustainable data science solutions.
- Candidate should be able to think from first principles to define and evangelize solutions for any client business problem.
- Leverage deep knowledge of the retail industry to develop data-driven solutions that address industry-specific challenges.
- Apply AI/ML statistical methods to solve complex business problems and determine new opportunities for clients.
- Ensure project delivery of high-quality, actionable insights that drive business decisions and outcomes.
- Ensure end-to-end lifecycle (scoping to delivery) of data science projects.
- Collaborate with cross-functional teams to ensure seamless project execution. Manage timelines, resources, and deliverables to meet client expectations and project goals.
- Drive innovation by exploring new data science techniques, tools, and technologies that can enhance the value delivered to clients.
- Strong hands-on experience with data science tools and technologies (e.g., Python, R, SQL, machine learning frameworks).
- Hands-on experience with a range of data science models including regression, classification, clustering, decision tree, random forest, support vector machine, naïve Bayes, GBM, XGBoost, multiple linear regression, logistic regression, and ARIMA/ARIMAX.
- Should be competent in Python (pandas, NumPy, scikit-learn, etc.), possess high levels of analytical skills, and have experience in the creation and/or evaluation of predictive models.
- Preferred experience in areas such as time series analysis, market mix modelling, attribution modelling, churn modelling, market basket analysis, etc.
- Good communication and project management skills. Should be able to communicate effectively to a wide range of audiences, both technical and business. Adept in creating presentations, reports, etc. to present analysis findings to key client stakeholders.
- Strong team management skills with a passion for mentoring and developing talent.

Qualifications
Educational Qualification: BTech/Masters in Statistics/Mathematics/Economics/Econometrics from Tier 1-2 institutions, or BE/B-Tech, MCA or MBA.
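One of the preferred areas named in this posting, market basket analysis, reduces at its core to computing support and confidence over transactions. A toy sketch with invented basket data:

```python
# Support/confidence for the rule "bread -> milk" over toy transactions.
from itertools import combinations
from collections import Counter

transactions = [
    {"bread", "milk"}, {"bread", "butter"}, {"milk", "butter"},
    {"bread", "milk", "butter"}, {"bread", "milk"},
]

pair_counts = Counter()
item_counts = Counter()
for t in transactions:
    item_counts.update(t)
    pair_counts.update(combinations(sorted(t), 2))  # canonical pair ordering

n = len(transactions)
support = pair_counts[("bread", "milk")] / n                        # P(bread and milk)
confidence = pair_counts[("bread", "milk")] / item_counts["bread"]  # P(milk | bread)
print(round(support, 2), round(confidence, 2))  # 0.6 0.75
```

Library implementations (e.g. Apriori/FP-growth in mlxtend or Spark MLlib) do exactly this counting at scale, plus pruning of infrequent itemsets.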
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role:
We are seeking a highly experienced Voice AI/ML Engineer to lead the design and deployment of real-time voice intelligence systems. This role focuses on ASR, TTS, speaker diarization, wake word detection, and building production-grade modular audio processing pipelines to power next-generation contact centre solutions, intelligent voice agents, and telecom-grade audio systems. You will work at the intersection of deep learning, streaming infrastructure, and speech/NLP technology, creating scalable, low-latency systems across diverse audio formats and real-world applications.

Key Responsibilities:

Voice & Audio Intelligence:
- Build, fine-tune, and deploy ASR models (e.g., Whisper, wav2vec 2.0, Conformer) for real-time transcription.
- Develop and fine-tune high-quality TTS systems using VITS, Tacotron, FastSpeech for lifelike voice generation and cloning.
- Implement speaker diarization for segmenting and identifying speakers in multi-party conversations using embeddings (x-vectors/d-vectors) and clustering (AHC, VBx, spectral clustering).
- Design robust wake word detection models with ultra-low latency and high accuracy in noisy conditions.

Real-Time Audio Streaming & Voice Agent Infrastructure:
- Architect bi-directional real-time audio streaming pipelines using WebSocket, gRPC, Twilio Media Streams, or WebRTC.
- Integrate voice AI models into live voice agent solutions, IVR automation, and AI contact centre platforms.
- Optimize for latency, concurrency, and continuous audio streaming with context buffering and voice activity detection (VAD).
- Build scalable microservices to process, decode, encode, and stream audio across common codecs (e.g., PCM, Opus, μ-law, AAC, MP3) and containers (e.g., WAV, MP4).

Deep Learning & NLP Architecture:
- Utilize transformers, encoder-decoder models, GANs, VAEs, and diffusion models for speech and language tasks.
- Implement end-to-end pipelines including text normalization, G2P mapping, NLP intent extraction, and emotion/prosody control.
- Fine-tune pre-trained language models for integration with voice-based user interfaces.

Modular System Development:
- Build reusable, plug-and-play modules for ASR, TTS, diarization, codecs, streaming inference, and data augmentation.
- Design APIs and interfaces for orchestrating voice tasks across multi-stage pipelines with format conversions and buffering.
- Develop performance benchmarks and optimize for CPU/GPU, memory footprint, and real-time constraints.

Engineering & Deployment:
- Write robust, modular, and efficient Python code.
- Experience with Docker, Kubernetes, and cloud deployment (AWS, Azure, GCP).
- Optimize models for real-time inference using ONNX, TorchScript, and CUDA, including quantization, context-aware inference, and model caching.
- On-device voice model deployment.

Why join us?
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated.

Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
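The voice activity detection (VAD) mentioned among the streaming responsibilities can be illustrated with a toy energy-threshold detector. Real systems use trained models (e.g. WebRTC VAD or neural VADs); the frame size and threshold below are illustrative assumptions.

```python
# Toy energy-threshold VAD over 20 ms frames (160 samples at 8 kHz).
import math

def vad(samples, frame_len=160, threshold=0.02):
    """Return one True/False per frame: True where RMS energy exceeds threshold."""
    flags = []
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        rms = math.sqrt(sum(s * s for s in frame) / frame_len)
        flags.append(rms > threshold)
    return flags

# Synthetic signal: 1 s silence, 1 s of a 440 Hz tone, 1 s silence (8 kHz rate).
sr = 8000
signal = [0.0] * sr
signal += [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
signal += [0.0] * sr
flags = vad(signal)
print(flags[:3], flags[60:63])  # silence frames are False, tone frames are True
```

In a streaming pipeline the same per-frame decision gates which audio is buffered and forwarded to the ASR model, which is why VAD sits so early in the chain.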
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Job Title: Data Science
Candidate Specification: 6+ years, Notice: Immediate to 15 days, Hybrid model.

Job Description:
- 5+ years of hands-on experience as an AI Engineer, Machine Learning Engineer, or a similar role focused on building and deploying AI/ML solutions.
- Strong proficiency in Python and its relevant ML/data science libraries (e.g., NumPy, pandas, scikit-learn, TensorFlow, PyTorch).
- Extensive experience with at least one major deep learning framework such as TensorFlow, PyTorch, or Keras.
- Solid understanding of machine learning principles, algorithms (e.g., regression, classification, clustering, ensemble methods), and statistical modeling.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services (e.g., SageMaker, Azure ML, Vertex AI).

Skills Required
Role: Data Science (AI/ML)
Industry Type: IT Services & Consulting
Functional Area: IT-Software
Required Education: Bachelor Degree
Employment Type: Full Time, Permanent
Key Skills: DATA SCIENCE, AI ENGINEER, MACHINE LEARNING, AI ML, PYTHON, AWS

Other Information
Job Code: GO/JC/686/2025
Recruiter Name: Sheena Rakesh
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Job Title: Senior Analyst - Data Analytics
Location: Pan India

Candidate Specifications:
- Candidate should have 3+ years of experience in Data Analytics and reporting, Databricks, Power BI, Snowflake.
- Strong technical expertise in Power BI, Microsoft Fabric, Snowflake, SQL, Python, and R.
- Experience with Azure Data Factory, Databricks, Synapse Analytics, and AWS Glue.
- Hands-on experience in building and deploying machine learning models.
- Ability to translate complex data into actionable insights.
- Excellent problem-solving and communication skills.

Job Description:
- Design and build interactive dashboards and reports using Power BI and Microsoft Fabric.
- Perform advanced data analysis and visualisation to support business decision-making.
- Develop and maintain data pipelines and queries using SQL and Python.
- Apply data science techniques such as predictive modelling, classification, clustering, and regression to solve business problems and uncover actionable insights.
- Perform feature engineering and data preprocessing to prepare datasets for machine learning workflows.
- Build, validate, and tune machine learning models using tools such as scikit-learn, TensorFlow, or similar frameworks.
- Deploy models into production environments and monitor their performance over time, ensuring they deliver consistent value.
- Collaborate with stakeholders to translate business questions into data science problems and communicate findings in a clear, actionable manner.
- Use statistical techniques and hypothesis testing to validate assumptions and support decision-making.
- Document data science workflows and maintain reproducibility of experiments and models.
- Support the Data Analytics Manager in delivering analytics projects and mentoring junior analysts.
Professional Certifications (preferred or in progress):
- Microsoft Certified: Power BI Data Analyst Associate (PL-300)
- SnowPro Core Certification (Snowflake)
- Microsoft Certified: Azure Data Engineer Associate
- AWS Certified: Data Analytics - Specialty

Skills Required
Role: Senior Analyst - Data Analytics - Pan India
Industry Type: Banking/Financial Services
Functional Area: ITES/BPO/Customer Service
Required Education: B Com
Employment Type: Full Time, Permanent
Key Skills: DATA ANALYTICS, DATABRICKS, POWER BI, REPORTING

Other Information
Job Code: GO/JC/687/2025
Recruiter Name: Hemalatha
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Designation: Senior System Engineer / Solution Architect
Location: Mumbai / Bangalore (Hybrid support)
Duration: Full-time, Monday to Friday, 9 AM to 5 PM IST
Experience: 6 to 8 years of relevant hands-on experience in IT infrastructure, with a focus on deployment and Day 2 operations.

Job Description:
As a Senior System Engineer/Solutions Architect, you will serve as the focused services solution and technical lead engineer supporting the ePlus regional sales team. Focused on key clients, drive revenue creation and capture within the account portfolio. Collaborate and work with our master architects, engineers, and consultants to understand our clients’ needs and craft sustainable solutions and IT strategic roadmaps to achieve client objectives.

Required Skills:
- Experience with Nutanix and Cisco UCS: hands-on expertise with Nutanix (Prism, AHV) and Cisco UCS infrastructure (UCS Manager, Intersight, UCS Central).
- Proficient in deploying and managing UCS C-Series, B-Series, and HCI Nutanix nodes, including HyperFlex systems.
- Experienced in installing ESXi, Windows, and Linux OS on UCS servers.
- Skilled in UCS firmware upgrades, driver installations, and hardware troubleshooting (blades, chassis, FIs, IOMs).
- Strong background in VMware installation, patching, and upgrades as per client requirements.
- Familiar with managing server I/O components, fabric interconnects/extenders, and GPU-equipped UCS systems.
- Basic knowledge of networking (VLANs, routing) and storage (RAID, SAN).
- Familiar with networking protocols and cloud/hybrid cloud concepts.
- Exposure to scripting (PowerShell, Python) for automation.
- Knowledge of Docker and Kubernetes an added advantage.
- Strong verbal and written communication skills.
Your Impact:
- Deploy and support UCS B/C series servers; configure Fabric Interconnects, service profiles, and boot volumes.
- Manage Nutanix HCI, including LCM checks, upgrades, and ESXi host setup.
- Apply security patches, hotfixes, and perform version upgrades.
- Gather customer requirements; create high- and low-level design documents.

Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Extensive experience in deploying and managing Cisco UCS, Nutanix HCI, and VMware environments, with strong expertise in clustering and virtualization technologies.
- Certifications in VMware, Cisco, or Nutanix are a strong advantage.

Skills: VMware vSphere, IOM, Cisco UCS, RAID, vSAN, UCS HCI Nutanix, Windows OS, UCS Manager, Cisco Intersight, Cisco SAN, HyperFlex, UCS Servers, UCS B-Series, Cisco UCS Layer 2, Cisco UCS Blades, VMware Installation, UCS Infrastructure, UCS C-Series, Cisco HyperFlex, VMware ESX, Fabric Interconnects, Linux OS, Chassis, ESXi, ESXi Hypervisor, Unified Computing System.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Min Experience: 5 years
Location: Bengaluru
JobType: full-time

Requirements
We are looking for an experienced Data Scientist with a strong background in the CPG (Consumer Packaged Goods) or Retail domain, focusing on category and product analytics, forecasting, and machine learning workflows. The ideal candidate will possess advanced analytical skills, business acumen, and hands-on expertise in modern data science tools and platforms such as Python, SQL, Databricks, PySpark, and CI/CD ML pipelines.

As a Data Scientist, you will be responsible for generating actionable insights across product assortment, category performance, sales trends, and customer behaviors. Your work will directly influence decision-making for new product launches, inventory optimization, campaign effectiveness, and category planning, enabling our teams to enhance operational efficiency and drive business growth.

Key Responsibilities:

Category & Product Analytics:
- Conduct deep-dive analysis into product assortment, SKU performance, pricing effectiveness, and category trends.
- Evaluate new product launches and provide recommendations for optimization based on early performance indicators.

Sales Data Analysis & Forecasting:
- Analyze historical and real-time sales data to identify key growth drivers, seasonality, and demand patterns.
- Build statistical and ML-based models to forecast demand and category-level performance at multiple aggregation levels.

Customer Analytics (Nice to Have):
- Analyze loyalty program data and campaign performance metrics to assess customer retention and ROI of promotions.

ML Model Development & Deployment:
- Design, build, and deploy machine learning models using Python and PySpark to address business problems in forecasting, product clustering, and sales optimization.
- Maintain and scale CI/CD pipelines for ML workflows using tools like MLflow, Azure ML, or similar.
Data Engineering and Tooling:
- Develop and optimize data pipelines on Databricks and ensure reliable data ingestion and transformation for analytics use cases.
- Use SQL and PySpark to manipulate and analyze large datasets with performance and scalability in mind.

Visualization & Stakeholder Communication:
- Build impactful dashboards using Power BI (preferred) to enable self-service analytics for cross-functional teams.
- Translate data insights into clear, compelling business narratives for leadership and non-technical stakeholders.

Collaboration & Strategic Insights:
- Work closely with category managers, marketing, and supply chain teams to align data science initiatives with key business objectives.
- Proactively identify opportunities for innovation and efficiency across product and sales functions.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related quantitative field.
- 5+ years of experience in applied data science, preferably in CPG/Retail/FMCG domains.
- Proficient in Python, SQL, Databricks, and MLflow. Experience with PySpark and Azure ML is a strong plus.
- Deep experience with time-series forecasting, product affinity modeling, and campaign analytics.
- Familiarity with Power BI for dashboarding and visualization.
- Strong storytelling skills, with the ability to explain complex data-driven insights to senior stakeholders.
- Solid understanding of challenges and opportunities within the retail and FMCG space.
- Ability to work independently as well as in cross-functional teams in a fast-paced environment.
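For the demand-forecasting work this role describes, a seasonal-naive baseline is a common first yardstick before reaching for ARIMA or ML models. A minimal sketch with invented quarterly sales figures:

```python
# Seasonal-naive baseline: forecast each future period with the value
# observed one full season earlier.
def seasonal_naive(history, season_len, horizon):
    """Repeat the last observed season forward for `horizon` periods."""
    return [history[-season_len + (h % season_len)] for h in range(horizon)]

# Two "years" of quarterly sales with a repeating seasonal shape (toy data).
sales = [100, 120, 90, 150, 110, 130, 95, 160]
forecast = seasonal_naive(sales, season_len=4, horizon=4)
print(forecast)  # repeats the last observed season: [110, 130, 95, 160]
```

Any fancier model (ARIMA, gradient boosting, PySpark pipelines) should beat this baseline on a held-out window before it earns a place in production.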
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
SQL DBA responsibilities will be to manage cloud engineering activities for SQL Server databases 24/7/365, drive automation activities, and provide SME support to the SQL engineering team. SQL Server Engineer with strong expertise in: SQL Server configuration and administration; AWS-hosted SQL environments; T-SQL scripting; PowerShell automation; Ansible for infrastructure automation; Bitbucket for version control and deployment automation. Key Responsibilities: Perform Database Administrator functions with SQL Server. Manage backup and recovery procedures and database performance issues. Monitor space usage; plan to close space gaps by taking necessary action. Following standards, write, maintain, and document monitoring scripts needed in support of applications. Migrate database objects from one environment to another. Help tune SQL queries for the application team. Create new SQL Server databases per the Ameriprise standards. Work with application teams to set up the security model for their applications running on SQL Server databases. Configure databases in high-availability environments and troubleshoot related issues. Monitor the SQL Server databases and make sure they are up and running. Resolve the day-to-day incidents assigned to the team. Expert in performance tuning. Knowledge of database modeling for creation of database objects. Client-handling experience. Required Qualifications: Experience in troubleshooting and resolving database issues, including performance tuning and capacity planning. Proven expertise in database design, including a solid understanding of related programming languages, clustering, backup/restore technologies, replication, and security. 7 to 10 years of Application DBA experience in SQL Server on preexisting and new projects, including design and implementation of physical databases based on logical data models. Experience in database backup/restore strategies.
Experience in clustering / mirroring / replication / HA / DR strategies. Experience in setting up the security model for applications using SQL Server as a back-end database. Experience in setting up Resource Governor, maintenance plans, and SQL Agent jobs. Proven understanding of SQL coding required to understand performance implications and translate requirements to application developers. Experience in writing SQL stored procedures, functions, views, etc. Experience in keeping SQL databases up and running and performing health checks for the databases on a periodic basis. Experience in helping application teams tune SQL queries/batches. Experience in overall monitoring of SQL Server databases. Expertise in writing scripts such as shell, batch, or PowerShell scripts, and writing SQL queries to automate DB-related jobs. Familiarity with automation tools like Ansible. Expertise in SQL Server installations and high-availability configuration. Expertise in SQL Server patching and troubleshooting. Expertise in SQL Server upgrades. Preferred Qualifications: Strong working knowledge of industry-standard database management tools. Demonstrated, successful experience working in a matrix, multi-vendor technology environment. Demonstrated ability to work effectively in urgent situations with high pressure and visibility. Strong written and verbal communication skills. Design, test, implement, and maintain complex databases with the required organization, access methods, access time, validation checks, and security to meet or exceed requirements. Develop, edit, and maintain required documentation. Proactively evaluate, recommend, and perform database upgrades and changes. Perform system optimization and improvement planning including, but not limited to, database performance analysis, capacity planning, and system sizing.
Stay abreast of and recommend improvements in technology and methodology to meet changing business needs and market demands, as well as provide for overall optimization of the database administration function. Proactively monitor the performance of development and production databases to detect existing or potential incidents and/or performance issues. Perform tuning and maintenance to correct and prevent unplanned downtime or performance degradation. Ensure operation of database environments meets or exceeds agreed-upon service levels (e.g. availability and performance). Plan, schedule and manage the implementation of new databases and modifications to existing databases in a manner that avoids disruption to production and development systems. Set up and manage database security, manage data purging/archiving activity and other day-to-day database administration activities. Provide ongoing support to operations and support teams as needed. Escalate and manage escalated issues as appropriate. Thorough knowledge of SQL Server configuration, high availability configuration, and setting up the environment. About Our Company: Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S. based financial planning company headquartered in Minneapolis with a global presence. The firm’s focus areas include Asset Management and Advice, Retirement Planning and Insurance Protection. Be part of an inclusive, collaborative culture that rewards you for your contributions and work with other talented individuals who share your passion for doing great work. You’ll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP. Ameriprise India LLP is an equal opportunity employer.
We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status or any other basis prohibited by law. Full-Time/Part-Time: Full time. Timings: 2:00p-10:30p (India). Business Unit: AWMPO AWMP&S President's Office. Job Family Group: Technology
Posted 1 week ago
3.0 - 5.0 years
3 Lacs
Mananthavady
On-site
We are seeking a skilled and motivated Data Scientist with 3–5 years of hands-on experience in data analytics, machine learning, and business intelligence. The ideal candidate will be responsible for deriving actionable insights from data, building predictive models, and supporting data-driven decision-making across various business units. Key Responsibilities: Analyze structured and unstructured datasets to extract insights and identify trends. Design and implement machine learning models for classification, regression, clustering, and recommendation. Collaborate with business stakeholders to understand objectives and translate them into data solutions. Perform data wrangling, preprocessing, feature engineering, and model validation. Build dashboards and reports using visualization tools like Power BI or Tableau. Present findings and recommendations to technical and non-technical audiences. Contribute to model deployment and monitoring processes. Required Skills & Qualifications: Bachelor’s or Master’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field. 3–5 years of industry experience in a Data Scientist or similar role. Proficient in programming languages such as Python or R. Strong experience with data manipulation tools like pandas and NumPy, and machine learning libraries like scikit-learn, XGBoost, or TensorFlow/PyTorch. Solid knowledge of SQL and database querying. Experience with data visualization tools like Power BI, Tableau, or Matplotlib/Seaborn. Familiarity with version control (e.g., Git) and basic software development practices. Preferred Qualifications: Experience working with cloud platforms (AWS, Azure, or GCP). Exposure to big data tools (Spark, Hadoop) is a plus. Knowledge of NLP, time-series forecasting, or deep learning techniques is desirable. Strong problem-solving and communication skills. What We Offer: A collaborative, innovative work environment.
Opportunities to work on real-world data challenges across industries. Access to modern tools, cloud platforms, and machine learning infrastructure. Competitive salary and performance-based incentives. Job Type: Full-time Pay: From ₹30,000.00 per month Benefits: Food provided Schedule: Day shift Work Location: In person
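To illustrate the kind of classification task listed in this posting, here is a minimal nearest-neighbour rule in pure Python. The toy data and labels are hypothetical; production work would use scikit-learn or a comparable library.

```python
def nearest_neighbor_predict(train, labels, point):
    """Classify `point` by the label of its closest training example (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(range(len(train)), key=lambda i: dist2(train[i], point))
    return labels[best]

# Toy data: two well-separated groups of 2-D points
X = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
y = ["low", "low", "high", "high"]
print(nearest_neighbor_predict(X, y, (4.8, 5.1)))  # → high
```

The same decision rule underlies k-NN classification; libraries add vectorization, k > 1 voting, and distance metrics.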
Posted 1 week ago
0 years
6 - 8 Lacs
Hyderābād
Remote
Assistant Manager – Infrastructure SQL Services – Deloitte Support Services India Pvt. Ltd. The ITS Operations function is accountable for delivering all internal technology infrastructure – including Email, Skype, File Services and platforms underpinning SQL, SAP, Enterprise (IT) Security Services. It also provides technology that supports the service lines in delivering client-facing services or client engagements as part of its client IT services. Team Summary: The Infrastructure SQL Services team is responsible for managing SQL infrastructure including databases, servers and clusters throughout their lifecycle. This role plays an important part in the SQL-related aspects of designing, testing, operating and improving IT services. This is a large, enterprise SQL environment, underpinning many mission-critical applications. Interfacing with business application owners to provide SQL support and guidance is a key element. As the IT function is spread over multiple geographic locations, you will be expected to communicate and collaborate effectively with remote colleagues. Responsibilities: Administer, maintain, and implement SQL Server databases (on-premises and cloud-based). Oversee database performance tuning, query optimization, and troubleshooting for mission-critical systems. Implement and manage high availability and disaster recovery solutions (e.g., Always On Availability Groups, clustering, replication). Develop, enforce, and monitor database security policies, including user access, encryption, and compliance with regulatory requirements. Automate database maintenance tasks and develop scripts for monitoring and reporting. Conduct root cause analysis for critical incidents and implement preventive solutions. Collaborate with architects, developers, and infrastructure teams to align database solutions with business needs. Maintain comprehensive documentation for database configurations, procedures, and standards.
Respond to service outages which affect Deloitte’s business operation and reputation, including out-of-hours escalations as part of a 24x7 on-call rota. Maintain the performance, availability and security of SQL services, with a focus on continuous service improvement. Install and manage SSIS packages; write and deploy SSRS reports. Proactive system/platform availability checks. Server performance management and capacity planning. Troubleshooting and break-fix (incidents and service requests). Documentation and cross-training of other team members. Perform systematic and periodic application/infrastructure availability checks and tasks. Share knowledge of new solutions with UK and Swiss Security Operations teams. Assist with client audits / MF Standards / ISO compliance and evidence gathering. Essential: In-depth knowledge and understanding of SQL working in a large-scale enterprise estate, including both on-premises and cloud-hosted infrastructure. In-depth knowledge of SQL high availability techniques, specifically AlwaysOn Availability Groups and Failover Cluster Instances. Experience with cloud database platforms (Azure SQL, AWS RDS, etc.).
Experience with installing and managing SSIS (integration services) packages and writing and deploying SSRS (reporting services) reports. Strong SQL performance tuning and troubleshooting skills. Strong experience in SQL backup and recovery processes. Fluent in T-SQL scripting. Experience in server performance management and capacity planning. Good knowledge of client/server architectures - this should primarily be centred upon, but not exclusively, the Microsoft suite of back-office products. Basic PowerShell scripting. SolarWinds and SCOM monitoring. A solid understanding of the ITIL framework. Exceptional communication skills, both written and verbal. Diplomatic and persuasive with an ability to handle difficult conversations and confidently manage stakeholders. A strong track record of delivering continual service improvement. Able to communicate technical issues effectively to both technical and non-technical audiences. Able to work as part of a geographically separated team. Desirable: Database and server migration from on-premises architecture to cloud (Azure and AWS). ITIL Service Operations knowledge preferred (Event Management, Incident Management, Change Management, and Problem Management). Advanced PowerShell scripting. Tools & Technology: SQL Server 2017, 2019 and 2022. Azure/AWS (IaaS and PaaS). SSRS, SSIS. T-SQL and PowerShell scripting. SolarWinds, SCOM monitoring. RedGate SQL Monitor. ServiceNow. CyberArk (password management tool). Technical Certifications (must have): ITIL v3 or v4 Foundation; certification in SQL Server and Azure cloud technology. Technical Certifications (good to have): DP-300 and AI-900 certification; Azure fundamentals certification (AZ-900). Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development: At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits to help you thrive: At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 307462
Posted 1 week ago
6.0 - 10.0 years
18 - 20 Lacs
Delhi Cantonment
On-site
Job Title: Hyper-V Engineer / Specialist – PowerFlex & Data Center Build Location: Delhi NCR, Pune, Mumbai and Bangalore (Hybrid) Experience: 6 to 10 Years Employment Type: Contract (Fixed Term: 3–6 months) Notice Period: Immediate / Up to 30 Days Role Overview: We are looking for a technically proficient Hyper-V Engineer/Specialist to support our data center initiatives, including infrastructure builds and virtualization using Microsoft Hyper-V. The ideal candidate will also have hands-on experience with Dell PowerFlex storage systems and be well-versed in physical data center build activities such as racking, cabling, and server configuration. Key Responsibilities: Design, deploy, and manage Hyper-V virtualization environments, including clusters, virtual networking, and storage integration. Collaborate in data center build projects, installing and configuring physical servers, network switches, and cabling. Implement and manage Dell PowerFlex infrastructure for hyper-converged and software-defined storage solutions. Perform system patching, backup configuration, and failover testing for virtualized environments. Monitor performance, capacity, and availability of virtual infrastructure and storage. Create and maintain detailed documentation of system configurations, procedures, and change control records. Work with cross-functional teams to ensure smooth deployments and minimal downtime. Troubleshoot and resolve complex infrastructure and virtualization-related issues. Must-Have Skills: Strong hands-on experience with Microsoft Hyper-V, Failover Clustering, and Virtual Machine Manager (VMM). Knowledge and experience in Dell PowerFlex infrastructure (deployment, monitoring, storage pool setup). Experience with data center build and operations, including physical server setup and structured cabling. Familiarity with Windows Server (2016/2019/2022) in enterprise environments.
Understanding of networking fundamentals related to virtualization (vSwitches, VLANs, NIC teaming). Experience with backup and disaster recovery for virtual environments. Preferred/Good to Have: Exposure to other hypervisors (e.g., VMware ESXi, KVM). Experience with monitoring tools (e.g., SCOM, Nagios, or SolarWinds). Scripting knowledge (PowerShell, CLI). Certifications like Microsoft Certified: Azure Administrator, Hyper-V Specialist, or Dell EMC PowerFlex. Education & Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. Industry certifications are a plus. Work Conditions: Onsite work required during physical builds and project deployment phases. May include off-hours or weekend support for migrations or scheduled downtimes. Job Types: Permanent, Contractual / Temporary Contract length: 3-6 months Pay: ₹150,000.00 - ₹170,000.00 per month
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
Key Responsibilities Design, develop, and maintain scalable data pipelines using AWS services and Snowflake. Build and manage data transformation workflows using dbt. Collaborate with data analysts, data scientists, and business stakeholders to deliver clean, reliable, and well-documented datasets. Optimize Snowflake performance through clustering, partitioning, and query tuning. Implement data quality checks, testing, and documentation within dbt. Automate data workflows and integrate with CI/CD pipelines. Ensure data governance, security, and compliance across cloud platforms. Required Skills & Qualifications: Strong experience with Snowflake (data modeling, performance tuning, security). Proficiency in dbt (models, macros, testing, documentation). Solid understanding of AWS services such as S3, Lambda, Glue, and IAM. Experience with SQL and scripting languages (e.g., Python). Familiarity with version control systems (e.g., Git) and CI/CD tools. Strong problem-solving skills and attention to detail. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
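The "data quality checks" responsibility above might, in spirit, look like this minimal pure-Python sketch. The column names and rules are hypothetical; in practice these checks would be implemented as dbt tests or Snowflake constraints.

```python
def run_quality_checks(rows, required=("order_id", "amount")):
    """Return a list of (row_index, issue) pairs for rows failing basic checks."""
    issues = []
    for i, row in enumerate(rows):
        # Not-null check on required columns
        for col in required:
            if row.get(col) is None:
                issues.append((i, f"missing {col}"))
        # Domain check: amounts must be non-negative
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            issues.append((i, "negative amount"))
    return issues

rows = [
    {"order_id": 1, "amount": 25.0},
    {"order_id": None, "amount": -3.0},
]
print(run_quality_checks(rows))  # → [(1, 'missing order_id'), (1, 'negative amount')]
```

In a dbt project the same intent is expressed declaratively (`not_null` and `accepted_values`/custom tests in the model's YAML) and run as part of the CI/CD pipeline.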
Posted 1 week ago
12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About RocketFrog.ai: RocketFrog.ai is an AI Studio for Business, engineering competitive advantage through cutting-edge AI solutions in Healthcare, Pharma, BFSI, Hi-Tech, and Consumer Services. From Agentic AI and deep learning models to full-stack AI-first product development, we help enterprises translate innovation into measurable business impact. 🚀 Ready to take a Rocket Leap with Science? Role Overview: We are looking for a visionary and execution-oriented AI Product Manager with 7–12 years of experience in the software/technology industry. This role will lead the conceptualization, development, and rollout of AI-first products that transform business workflows through Agentic AI, intelligent automation, and enterprise-scale orchestration. As an AI Product Manager, you will define product strategy, shape solution blueprints, and align cross-functional teams around product execution. You will translate ambiguous problem statements into structured AI solutions, bridging business objectives with technical implementation. Key Responsibilities: Own the AI Product Lifecycle: Lead end-to-end product development from ideation to launch, incorporating AI/ML capabilities, business logic, and user feedback. Define Product Vision & Roadmap: Translate customer and market needs into clear product goals, success metrics, and agile execution plans. Design AI-First Workflows: Collaborate with AI engineers and architects to design solutions using intelligent agents, LLM orchestration, and adaptive automation. Drive Stakeholder Alignment: Work closely with CxOs, domain SMEs, and engineering teams to define product requirements and align on priorities. Deliver High-Impact Use Cases: Identify and deliver AI use cases that optimize enterprise operations in BFSI, Healthcare, Pharma, or Consumer Services. Lead Backlog & Feature Planning: Break down large AI initiatives into actionable user stories and manage sprints with engineering and design teams. 
Champion AI/ML Integration: Identify opportunities to embed LLMs, agentic workflows, and data-driven intelligence into products. Structure Unstructured Information: Bring clarity to complexity using mind maps, schema models, and ontology-driven structures. Manage UX-UXP Alignment: Oversee wireframes, user journeys, and workflow designs via tools like Figma and Miro in collaboration with the design team. Measure Outcomes: Define product KPIs and drive post-launch iteration cycles based on usage, performance, and business feedback. Required Skills & Expertise: Domain & Experience: Software Product Leadership: 7–12 years in software/tech industry with at least 3 years in AI/ML-based product management roles. Strategic Thinking & Execution: Ability to drive both big-picture thinking and detailed execution with cross-functional teams. AI & Data Product Fluency: Agentic AI Concepts: Strong conceptual understanding of Agentic AI and how intelligent agents collaborate to automate business workflows. Agent Orchestration Awareness: Familiarity with how platforms like LangGraph or Crew.ai orchestrate roles such as worker, reviewer, or approver to enable modular and auditable AI behavior. ML Fundamentals: Working understanding of core AI/ML concepts including classification, clustering, and other supervised/unsupervised learning approaches. Cloud AI Services (Desirable): Basic understanding of cloud computing and cloud-based AI platforms such as AWS, Azure, or Google Cloud for deploying intelligent workflows. Meta-Modelling & Knowledge Structures Proficiency in designing schemas, taxonomies, or ontologies; familiarity with RDF, OWL, SHACL, or Description Logic is a strong plus. Information Design & Storytelling Information Structuring: Proven ability to turn raw inputs into actionable workflows and structured schemas using mind maps or process maps. 
Narrative & Visualization: Strong storytelling through PowerPoint, Notion, dashboards; ability to articulate strategy and progress to executives. Product & Process Tools Tools: Miro, Figma, JIRA, Asana, Confluence, Excel/Google Sheets, PowerPoint, Notion, and basic BI tools (Tableau, Power BI). Process Modeling: Swimlane diagrams, BPMN, ERDs, system design documentation. Key Stakeholders CxOs & Innovation Leaders – Strategic alignment and value realization Engineering & AI Teams – Technical execution and AI enablement Business & Operations Teams – Domain knowledge and feedback integration Customers – Use case validation and value co-creation Qualifications: Bachelor’s or Master’s degree from Tier-1 institutions (IIT, IIM, ISB, IIIT, BITS preferred). 7 to 12 years of professional experience, including product leadership in AI-first or data-driven product lines. Proven success in delivering AI-based transformation, automation, or enterprise software products. Why Join RocketFrog.ai? Shape the future of Agentic AI and intelligent enterprise systems. Own and scale high-impact AI product lines across industry verticals. Collaborate with world-class researchers, AI engineers, and product strategists. Thrive in a flat, fast-paced, innovation-first environment. Drive real business impact with measurable transformation at the intersection of AI and industry.
Posted 1 week ago
1.0 - 3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About The Company: TSC redefines connectivity with innovation and intelligence. Driving the next level of intelligence powered by Cloud, Mobility, Internet of Things, Collaboration, Security, Media services and Network services, we at Tata Communications are envisaging a New World of Communications. Job Title: Linux Support Engineer – Level 1 (with L2 Task Awareness) Location: Pune Experience: 1 to 3 years Shift: Rotational (24x7 support) Job Type: Full-time Job Summary: We are seeking a dedicated L1 Linux Support Engineer to provide frontline operational support for enterprise Linux servers. The engineer will focus primarily on L1 responsibilities, but must also have a basic to intermediate understanding of L2 tasks for occasional escalated activity handling and team backup. Key Responsibilities L1 Responsibilities (Primary): Monitor system performance, server health, and basic services using tools like Nagios, Zabbix, or similar. Handle tickets for standard issues like disk space, service restarts, log checks, user creation, and permission troubleshooting. Basic troubleshooting of server access issues (SSH, sudo access, etc.). Perform routine activities such as patching coordination, backup monitoring, antivirus checks, and compliance tasks. Execute pre-defined SOPs and escalation procedures in case of critical alerts or failures. Regularly update incident/ticket tracking systems (e.g., ServiceNow, Remedy). Provide hands-and-feet support at the data center if required. L2 Awareness (Secondary / Occasional Tasks): Understand LVM management, disk extension, and logical volume creation. Awareness of service and daemon-level troubleshooting (Apache, NGINX, SSH, Cron, etc.). Ability to assist in OS patching, kernel updates, and troubleshooting post-patch issues. Exposure to basic scripting (Bash, Shell) to automate repetitive tasks. Familiarity with tools like Red Hat Satellite, Ansible, and centralized logging (e.g., syslog, journalctl).
Understand basic clustering, HA concepts, and DR readiness tasks. Assist the L2 team during major incidents or planned changes. Required Skills: Hands-on with RHEL, CentOS, Ubuntu, or other Enterprise Linux distributions. Basic knowledge of Linux command-line tools, file systems, and system logs. Good understanding of the Linux boot process, run levels, and systemd services. Basic networking knowledge (ping, traceroute, netstat, etc.). Familiar with ITSM tools and the ticketing process. Nice to Have: RHCSA Certification (preferred). Exposure to virtualization (VMware, KVM) and cloud environments (AWS, Azure). Experience with shell scripting or Python for automation. Understanding of the ITIL framework. Soft Skills: Strong communication and coordination skills. Ability to follow instructions and SOPs. Willingness to learn and take ownership of tasks. Team player with a proactive mindset.
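A routine disk-space check of the kind this role handles daily could be sketched in Python as follows. The threshold and alert format are hypothetical; real monitoring would normally run through Nagios or Zabbix rather than an ad-hoc script.

```python
import shutil

def percent_used(total, used):
    """Return used space as a percentage of total (0.0 when total is unknown)."""
    return round(100.0 * used / total, 1) if total else 0.0

def disk_alert(path="/", threshold=85.0):
    """Return an alert string when usage on `path` crosses `threshold`, else None."""
    usage = shutil.disk_usage(path)  # stdlib: returns (total, used, free) in bytes
    pct = percent_used(usage.total, usage.used)
    if pct >= threshold:
        return f"ALERT: {path} at {pct}% used"
    return None
```

Such a script would typically be scheduled via cron, with the alert string routed to the ticketing system per the SOP.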
Posted 1 week ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Elchemy: Elchemy is a tech-enabled cross-border specialty chemicals marketplace. Our vision is to become the largest global speciality chemicals distributor focusing on discovery and fulfillment using a tech-first approach. Chemicals is an extremely important, large, and fragmented market with multiple inefficiencies in cross-border trade. The global speciality chemicals market is $800bn growing at a CAGR of 5.7%. The industry faces glaring challenges including lack of trust, excessive lead times, quality uncertainty, lack of transparency and tons of operational challenges. In the past 20 months of the company's operation, we have scaled up our operations, serving more than 32 countries with active partnerships with hundreds of customers and suppliers. The company has raised upwards of $7.5mn from marquee investors like InfoEdge Ventures, Prime Venture Partners and from promoters of companies like Vinati Organics, Laxmi Organics, and Coromandel International. Our highly ambitious team comprises alumni from IITs, IIMs, NITs and has extensive experience of working in startups as well as multinational companies. We want to create a team with A-players and rockstars in all roles. When such a team comes together, no vision seems unachievable, and everyone pushes to deliver outstanding results. Roles and Responsibilities: Coordinate with all stakeholders, internal and external, to understand and align on quality specifications. Monitor the quality of incoming and outgoing products or materials for Elchemy and ensure delivery by meeting the expected quality standards. Ensure the packaged products meet the specified quality standard set by the organisation before being shipped by doing a QC at every required stage. Work in alignment with the procurement, sales and operations teams to achieve the same.
Sampling and testing of raw materials, packaging materials and ready goods by working with testing facilities and providing the quality certifications as required. Identifying new testing facilities for new products. Check and maintain compliance with health and safety procedures for hazardous goods. Inspect packaging materials to assess and approve conformity to standard specifications. Standardise the quality assurance process for all existing products, new products and new suppliers. Ensure documentation and regulatory compliance for: designing of SOPs for clustering of products based on Haz/Non Haz, Liquid/Solid; MSDS approval; ISO procedures. Identifying labelling requirements (product labels) - destination specific, road transport, rail, shipping line; IIP requirements. Building strong relationships and handling evaluation and development of service providers like inspection services at various locations. Resolving customer complaints on time with effective RCA and CAPA. Perform any duties and responsibilities as may be assigned from time to time. Skills and Qualifications: At least 2 years of experience in the chemical industry in a Quality Control and Assurance department. Knowledge of compliance and regulations for chemical products in terms of packaging and transportation. Ability to coordinate and work systematically with clarity in communication. Good time management and organizational skills. Should be able to commit passionately and take ownership. Should love hard-core execution - rolling up sleeves and getting hands dirty. Other desired qualities - attention to detail, thinking on one's feet, passionate about startups.
Posted 1 week ago
2.0 - 5.0 years
12 - 16 Lacs
Chennai
Work from Office
The role is responsible for the software development, testing, deployment, and debugging process. This is an operational role that may seek an appropriate level of guidance and advice to ensure delivery of quality outcomes.

Responsibilities
- Writing effective and scalable code and test cases
- Debugging and deploying applications
- Providing support for the production environment
- Preparing the software development calendar
- Preparing reports and dashboards on project time deviations, rework time, etc.
- Conducting development testing and reporting testing issues to the supervisor
- Identifying and tracking bugs, assessing the nature of bugs, and executing corrective actions

Desired Skill Sets
- Good programming skills
- Familiarity with software applications and tools
- Good knowledge of coding/testing environments
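The "development testing" responsibility above can be as lightweight as a unit-test suite run programmatically. This is a minimal sketch using Python's built-in unittest module; the function and metric names are hypothetical, chosen only to echo the rework-time reports mentioned in the posting.

```python
import unittest

def rework_ratio(rework_hours, total_hours):
    """Share of project time spent on rework (hypothetical report metric)."""
    if total_hours <= 0:
        raise ValueError("total_hours must be positive")
    return rework_hours / total_hours

class ReworkRatioTest(unittest.TestCase):
    def test_typical_project(self):
        self.assertAlmostEqual(rework_ratio(8, 40), 0.2)

    def test_rejects_zero_total(self):
        with self.assertRaises(ValueError):
            rework_ratio(8, 0)

# Load and run the suite programmatically, without the test-runner CLI.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ReworkRatioTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Reporting a failing test to a supervisor then amounts to inspecting `result.failures` and `result.errors` after the run.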
Posted 1 week ago
0 years
0 Lacs
India
Remote
🤖 Machine Learning Intern – Remote | Learn AI by Building It
📍 Location: Remote / Virtual
💼 Type: Internship (Unpaid)
🎁 Perks: Certificate After Completion || Letter of Recommendation (6 Months)
🕒 Schedule: 5–7 hrs/week | Flexible Timing

Join Skillfied Mentor as a Machine Learning Intern and move beyond online courses. You'll work on real datasets, build models, and see your algorithms in action, all while gaining experience that hiring managers actually look for. Whether you're aiming for a career in AI, data science, or automation, this internship will build your foundation with hands-on learning.

🔧 What You'll Do:
- Work with real datasets to clean, preprocess, and transform data
- Build machine learning models using Python, NumPy, Pandas, Scikit-learn
- Perform classification, regression, and clustering tasks
- Use Jupyter Notebooks for experimentation and documentation
- Collaborate on mini-projects and model evaluation tasks
- Present insights in simple, digestible formats

🎓 What You'll Gain:
✅ Full Python course included during the internship
✅ Hands-on projects to showcase on your resume or portfolio
✅ Certificate of Completion + LOR (6-month internship)
✅ Experience with industry-relevant tools & techniques
✅ Remote flexibility: manage your time with just 5–7 hours/week

🗓️ Application Deadline: 30th July 2025
👉 Apply now to start your ML journey with Skillfied Mentor
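The classification task this internship describes can be illustrated with no libraries at all. Below is a stdlib-only sketch of a nearest-centroid classifier on made-up toy data; in practice you would reach for scikit-learn's NearestCentroid or similar, and the data here is purely illustrative.

```python
import math

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def nearest_centroid_fit(labeled):
    """labeled: dict of label -> list of points; returns label -> centroid."""
    return {label: centroid(pts) for label, pts in labeled.items()}

def nearest_centroid_predict(model, point):
    """Assign the label whose class centroid is closest in Euclidean distance."""
    return min(model, key=lambda lbl: math.dist(model[lbl], point))

# Toy training data: two well-separated classes.
train = {"a": [(0, 0), (0, 1), (1, 0)], "b": [(5, 5), (5, 6), (6, 5)]}
model = nearest_centroid_fit(train)
print(nearest_centroid_predict(model, (0.5, 0.5)))  # falls near class "a"
```

The same fit/predict split is the shape scikit-learn's estimators follow, which is why this pattern transfers directly once the real libraries are introduced.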
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources, and opportunities to unleash their full potential. The power we create together, when we combine your strengths with ours, is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You.

Job Description
Job Title: Senior Data Scientist
Location: Bangalore
Reporting to: Senior Manager

PURPOSE OF ROLE
- Understand and solve complex business problems with sound analytical prowess and help the business with impactful insights for decision making
- Ensure any roadblocks in implementation are brought to the notice of the Analytics Manager so that project timelines are not affected
- Document every aspect of the project in a standard way, for future reference
- Articulate technical complexities to senior leadership in a simple and easy manner

KEY TASKS AND ACCOUNTABILITIES
- Understand the business problem and work with business stakeholders to translate it into a data-driven analytical/statistical problem; participate in the solution-building process
- Create appropriate datasets and develop statistical data models
- Translate complex statistical analysis over large datasets into insights and actions
- Analyze results and present them to stakeholders
- Communicate the insights using business-friendly presentations
- Help and mentor other Data Scientists/Associate Data Scientists
- Build production-ready project pipelines in Databricks
- Build dashboards (preferably in Power BI) for easy consumption of the solutions

QUALIFICATIONS, EXPERIENCE, SKILLS
Level of Educational Attainment Required
- Bachelor's/Master's Degree in Statistics, Applied Statistics, Economics, Econometrics, Operations Research, or any other quantitative discipline

Previous Work Experience
- Minimum 4-6 years' experience in a data science role, building, implementing, and operationalizing end-to-end solutions
- Expertise strongly desired in building statistical and machine learning models for classification, regression, forecasting, anomaly detection, dimensionality reduction, clustering, etc.
- Exposure to optimization and simulation techniques (good to have)
- Expertise in building NLP-based language models, sentiment analysis, text summarization, and Named Entity Recognition
- Proven skills in translating statistics into insights; sound knowledge of statistical inference and hypothesis testing
- Microsoft Office (mandatory)
- Expert in Python (mandatory)
- Advanced Excel (mandatory)

And above all of this, an undying love for beer! We dream big to create a future with more cheers.
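The hypothesis-testing requirement in this posting reduces to a small amount of arithmetic. Below is a stdlib-only sketch of Welch's t-statistic for comparing two sample means, with toy data chosen for a clean result; in practice scipy.stats.ttest_ind computes this along with the p-value.

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t-statistic: difference of means over the unpooled standard error."""
    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    return (mean(sample_a) - mean(sample_b)) / se

# Toy example: two small samples whose means differ by exactly 1.
t = welch_t([1, 2, 3, 4, 5], [2, 3, 4, 5, 6])
print(t)  # -1.0; compare |t| against a t-distribution critical value
```

A |t| this small would not be significant at conventional levels, which is exactly the kind of "statistics into insights" translation the role calls for.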
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
If you are interested in scaling Applied AI/ML solutions to deliver business impact with customer satisfaction, this role will be the right fit for you.

Responsibilities:
You will be working on ensuring the scale of applied AI products across risk and marketing, assets and liabilities within the bank. It will entail stress-testing AI products at scale and identifying ways to ensure a simplified and intuitive customer journey. We are looking for a sound understanding of AI/ML techniques, strong UI/UX, and the ability to simplify complexity and navigate large/big data.

Qualifications:
- Masters or Undergrad in Computer Science/Electrical Engineering/Mechanical Engineering with 4-8 years of experience, including a minimum of 2-3 years in production implementation of AI/ML solutions
- Experience in Java, Python, Linux, cloud computing environments (AWS/GCP), and big data tools such as warehouses, Spark, and pipeline management
- Knowledge of different AI and machine learning techniques like transformers, regression, classification, clustering, CNNs, tree-based algorithms, NLP, etc., along with hyperparameter tuning
- End-to-end system design: feature engineering, implementation, debugging, and maintenance in production
- Experience implementing machine learning algorithms or research papers from scratch
- Strong communication and project management skills
- Experience in building bots, implementing GenAI and Computer Vision use cases, or creating low-code solutions would be a plus
- Experience in the financial sector would be a plus

About the team:
- You will be part of the AI Centre of Excellence within the Digital Banking Unit of the bank.
- You will work in a fast-paced environment where new ideas are encouraged. This will involve liaising with teams within Digital and across the bank, e.g. data engineers, data scientists, ML engineers, product managers, and senior business stakeholders.
The mandate of the Digital Business Unit at IndusInd Bank is as follows:
- Building customer-centric products with human-centered design principles for retail individual and micro, small, and medium enterprise (MSME) customer segments
- Building innovative products and propositions, backed by a problem-solving mindset, to discover and solve latent needs of customers
- Building Embedded Finance (Banking as a Service) applications
- Ensuring designs are highly available, highly modular, highly scalable, and highly secure
- Driving digital business

The unit's objectives are threefold: a) drive better customer experience and engagement, b) transform existing lines of business, and c) build new digital-only or banking-as-a-service-led digital business models.
Posted 1 week ago
6.0 - 10.0 years
1 - 1 Lacs
Delhi Cantonment, Delhi, Delhi
On-site
Job Title: Hyper-V Engineer / Specialist – PowerFlex & Data Center Build
Location: Delhi NCR, Pune, Mumbai, and Bangalore (Hybrid)
Experience: 6 to 10 Years
Employment Type: Contract (Fixed Term: 3-6 months)
Notice Period: Immediate / Up to 30 Days

Role Overview:
We are looking for a technically proficient Hyper-V Engineer/Specialist to support our data center initiatives, including infrastructure builds and virtualization using Microsoft Hyper-V. The ideal candidate will also have hands-on experience with Dell PowerFlex storage systems and be well-versed in physical data center build activities such as racking, cabling, and server configuration.

Key Responsibilities:
- Design, deploy, and manage Hyper-V virtualization environments, including clusters, virtual networking, and storage integration.
- Collaborate in data center build projects: installing and configuring physical servers, network switches, and cabling.
- Implement and manage Dell PowerFlex infrastructure for hyper-converged and software-defined storage solutions.
- Perform system patching, backup configuration, and failover testing for virtualized environments.
- Monitor performance, capacity, and availability of virtual infrastructure and storage.
- Create and maintain detailed documentation of system configurations, procedures, and change control records.
- Work with cross-functional teams to ensure smooth deployments and minimal downtime.
- Troubleshoot and resolve complex infrastructure and virtualization-related issues.

Must-Have Skills:
- Strong hands-on experience with Microsoft Hyper-V, Failover Clustering, and Virtual Machine Manager (VMM).
- Knowledge and experience in Dell PowerFlex infrastructure (deployment, monitoring, storage pool setup).
- Experience with data center build and operations, including physical server setup and structured cabling.
- Familiarity with Windows Server (2016/2019/2022) in enterprise environments.
- Understanding of networking fundamentals related to virtualization (vSwitches, VLANs, NIC teaming).
- Experience with backup and disaster recovery for virtual environments.

Preferred/Good to Have:
- Exposure to other hypervisors (e.g., VMware ESXi, KVM).
- Experience with monitoring tools (e.g., SCOM, Nagios, or SolarWinds).
- Scripting knowledge (PowerShell, CLI).
- Certifications like Microsoft Certified: Azure Administrator, Hyper-V Specialist, or Dell EMC PowerFlex.

Education & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Industry certifications are a plus.

Work Conditions:
- Onsite work required during physical builds and project deployment phases.
- May include off-hours or weekend support for migrations or scheduled downtimes.

Job Types: Permanent, Contractual / Temporary
Contract length: 3-6 months
Pay: ₹150,000.00 - ₹170,000.00 per month
Posted 1 week ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Asset and Wealth Management, you will be an experienced member of an agile team, tasked with designing and delivering reliable, market-leading technology products that are secure, stable, and scalable. Your role involves implementing essential technology solutions across diverse technical domains, supporting various business functions to achieve the firm's strategic goals.

Job Responsibilities
- Design, develop, and optimize complex PL/SQL procedures and functions.
- Perform SQL tuning and optimization to enhance performance.
- Implement resilient setups, including partitioning, indexing, clustering, and debugging.
- Develop and manage materialized views to improve query performance and data retrieval.
- Lead migration efforts of complex procedures to SQL, ensuring seamless integration and functionality.
- Design and implement Snowflake solutions, including external tables and dynamic queries.
- Manage sharing objects and visibility settings to ensure secure and efficient data access.
- Develop and optimize materialized views for enhanced data processing.
- Facilitate data movement in and out of Snowflake, ensuring data integrity and security.
- Optimize compute resources and monitor costs to ensure efficient and cost-effective operations.

Required Qualifications, Capabilities, and Skills
- Formal training or certification in software engineering concepts and 3+ years of applied experience.
- Extensive hands-on experience in Oracle PL/SQL development and Snowflake implementation.
- Proven track record in SQL tuning, resilient setups, and migration of complex procedures.
- Strong understanding of materialized views, partitioning, indexing, and clustering in Oracle.
- Experience with external tables, dynamic queries, and compute optimization in Snowflake.
- Proficient in debugging and optimizing database systems for performance and reliability.
- Solid understanding of data sharing, visibility, and security best practices.
- Knowledge of cost monitoring and optimization strategies in Snowflake.
- Excellent communication skills to work effectively with cross-functional teams.
- Ability to provide technical leadership and mentorship to junior developers.

Preferred Qualifications, Capabilities, and Skills
- Familiarity with cloud-based data solutions and integration strategies.
- Exposure to modern data visualization and reporting tools.
- Proficiency in Java and Python for enhanced software development capabilities.
- Experience with AI/ML technologies to drive innovation and data-driven insights.
- Passion for exploring new technologies and driving innovation in database systems.
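A materialized view, which this role leans on heavily, is essentially a stored query result that serves fast reads until it is explicitly refreshed. The Python sketch below mimics that trade-off over an in-memory list; the class and names are purely illustrative, not any Oracle or Snowflake API.

```python
class MaterializedAggregate:
    """Caches an aggregate over a base table until refresh() is called,
    mimicking how a materialized view trades freshness for read speed."""

    def __init__(self, base_rows, aggregate):
        self._base = base_rows          # live reference to the "base table"
        self._aggregate = aggregate     # the "view definition"
        self._cached = aggregate(base_rows)

    def read(self):
        # Fast read: no recomputation, but the result may be stale.
        return self._cached

    def refresh(self):
        # Recompute from the base table, analogous to a manual view refresh.
        self._cached = self._aggregate(self._base)

orders = [100, 250, 75]
total = MaterializedAggregate(orders, sum)
orders.append(500)            # the base table changes...
stale = total.read()          # ...but the view still serves the old total, 425
total.refresh()
fresh = total.read()          # after refresh: 925
```

Real engines add incremental refresh and automatic staleness tracking on top of this idea, but the read/refresh split is the core of the concept.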
Posted 1 week ago
170.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us:
Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company's consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, with its 12,000+ professionals, is committed to continuing the Group's 170-year heritage of building sustainable communities.

Job Description - Snowflake Tech Lead
Experience: 10+ years
Location: Mumbai, Pune, Hyderabad
Employment Type: Full-time

Job Summary
We are looking for a Snowflake Tech Lead with 10+ years of experience in data engineering, cloud platforms, and Snowflake implementations. This role involves leading technical teams, designing scalable Snowflake solutions, and optimizing data pipelines for performance and efficiency. The ideal candidate will have deep expertise in Snowflake, ETL/ELT processes, and cloud data architecture.

Key Responsibilities
1. Snowflake Development & Optimization
- Lead Snowflake implementation, including data modeling, warehouse design, and performance tuning.
- Optimize SQL queries, stored procedures, and UDFs for high efficiency.
- Implement Snowflake best practices (clustering, partitioning, zero-copy cloning).
- Manage virtual warehouses, resource monitors, and cost optimization.
2. Data Pipeline & Integration
- Design and deploy ETL/ELT pipelines using Snowflake, Snowpark, and Coalesce.
- Integrate Snowflake with BI tools (Power BI, Tableau), APIs, and external data sources.
- Implement real-time and batch data ingestion (CDC, streaming, Snowpipe).
3. Team Leadership & Mentorship
- Lead a team of data engineers, analysts, and developers on Snowflake projects.
- Conduct code reviews, performance tuning sessions, and technical training.
- Collaborate with stakeholders, architects, and business teams to align solutions with requirements.
4. Security & Governance
- Configure RBAC, data masking, encryption, and row-level security in Snowflake.
- Ensure compliance with GDPR, HIPAA, or SOC 2 standards.
- Implement data quality checks, monitoring, and alerting.
5. Cloud & DevOps Integration
- Deploy Snowflake in AWS or Azure.
- Automate CI/CD pipelines for Snowflake using GitHub Actions, Jenkins, or Azure DevOps.
- Monitor and troubleshoot Snowflake environments using logging tools (Datadog, Splunk).

Required Skills & Qualifications
- 10+ years in data engineering, cloud platforms, or database technologies.
- 5+ years of hands-on Snowflake development and administration.
- Strong expertise in SQL and Python for data processing.
- Experience with Snowflake features (Snowpark, Streams & Tasks, Time Travel).
- Knowledge of cloud data storage (S3, Blob) and data orchestration (Airflow, dbt).
- Certifications: Snowflake SnowPro Core/Advanced.
- Knowledge of DataOps, MLOps, and CI/CD pipelines.
- Familiarity with dbt, Airflow, SSIS, and IICS.
Posted 1 week ago