40.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
Remote
Job Description
Preferred Technical and Professional Expertise:
- Working knowledge of OS clustering, partitioning, virtualization, and storage administration, including integration with operating systems
- Demonstrated expertise in UNIX/Linux
- Working experience on OVM migration projects
- Working knowledge of engineered systems such as Exadata and PCA
- Patching experience on Exadata, PCA, and similar systems
- Individual contributor able to manage their work independently
- Ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work
- Proven interpersonal skills, contributing to the team effort by accomplishing related results as needed
- Keeps technical knowledge up to date by attending educational workshops and reviewing publications
- Working knowledge of Kubernetes or containerization setups

Career Level - IC2

Responsibilities
As a Systems Engineer, you will interface with the customer's IT staff on a regular basis. Either at the client's site or from a remote location, you will be responsible for resolving moderately complex technical problems related to the installation, recommended maintenance, use, and repair/workarounds for Oracle products. You should have knowledge of some Oracle products and one supported platform. You will be expected to work with only general guidance from senior engineers and management and, in some areas, may work independently. Technical working knowledge of Kubernetes or containerization environments is expected.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
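The posting above repeatedly asks for working knowledge of Kubernetes and containerization. As a loose illustration of that skill, and not anything tied to Oracle's environment, here is a minimal Python sketch using the official kubernetes client to flag pods that are not in a healthy phase; it assumes an existing kubeconfig on the machine running it.

```python
# Minimal pod-health report using the official Kubernetes Python client.
# Assumes a valid kubeconfig (e.g. ~/.kube/config); cluster names and
# namespaces are whatever the config points at.
from kubernetes import client, config

def report_unhealthy_pods() -> None:
    config.load_kube_config()                      # read local kubeconfig
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for pod in pods.items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):  # anything else is suspect
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```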
Posted 3 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About TwoSD (2SD Technologies Limited)
TwoSD is the innovation engine of 2SD Technologies Limited, a global leader in product engineering, platform development, and advanced IT solutions. Backed by two decades of leadership in technology, our team brings together strategy, design, and data to craft transformative solutions for global clients. Our culture is built around cultivating talent, curiosity, and collaboration. Whether you're a career technologist, a self-taught coder, or a domain expert with a passion for real-world impact, TwoSD is where your journey accelerates. Join us and thrive. At 2SD Technologies, we push past the expected—with insight, integrity, and a passion for making things better.

Role Overview
As an Operations Manager (Contractor), you will play a key role in configuring and optimizing monitoring solutions, analyzing application workflows, and supporting the transition of infrastructure into a shared services model. This role combines deep technical skills, a strong process mindset, and the ability to collaborate across diverse technical and business teams.

Key Responsibilities
- Configure and optimize monitoring tools (e.g., New Relic, SolarWinds) to ensure telemetry coverage and proactive issue detection
- Analyze application workflows, identify component dependencies, and guide the transition to shared services support
- Collaborate with support and engineering teams to ensure seamless infrastructure handover
- Create technical documentation including support runbooks, diagrams, and operating workflows
- Coordinate temporary hypercare support during transition phases to ensure system stability and fast resolution
- Evaluate existing and desired support models, identifying and mitigating risks and differences
- Contribute to infrastructure transformation projects involving upgrades, migrations, and automation

Primary Duties
- Apply a deep understanding of ITIL, ITSM, and CMDB frameworks and best practices
- Write and maintain detailed technical documentation and support diagrams
- Deliver medium to large-scale infrastructure projects with minimal disruption
- Define system specs, compatibility requirements, and operational parameters
- Coordinate infrastructure deployments and ensure compliance with enterprise standards
- Lead or assist in initiatives such as automation, virtualization, and performance tuning
- Collaborate with enterprise architects on future-state infrastructure design and roadmaps
- Work across functions to align priorities, risks, and execution plans for infrastructure evolution
- Contribute to post-implementation monitoring, testing, and operational integration

Required Qualifications
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- 10+ years of experience in IT infrastructure design, support, or operations
- Proven expertise with UNIX/Linux, Windows Server, and clustering solutions
- Strong analytical and technical problem-solving skills
- Exceptional communication and documentation abilities
- Proficient in tools like New Relic, SolarWinds, VMware, and ServiceNow
- Familiar with cloud platforms (Azure, AWS, GCP), CRM/ERP systems, virtualization tools, and middleware

Preferred Certifications
- Microsoft Certified IT Professional or equivalent
- TOGAF, CISSP, or cloud architecture certifications (e.g., Azure Solutions Architect Expert)
- Network certifications such as CCNA
- Enterprise IT governance certifications like CGEIT

Key Technical Skills
- IT Infrastructure & Network Architecture
- Cloud Operations & Compliance (SOX, ISO, etc.)
- Infrastructure Monitoring & Automation
- Middleware and Server Configuration
- Virtualization (VMware, Hyper-V, vSphere)
- ITSM Tools (e.g., ServiceNow, CMDB)
- Technical Documentation and Runbooks

Behavioral Competencies
- Decision Making and Problem Solving
- Cross-functional Collaboration
- Planning and Execution
- Creative Thinking and Innovation
- Conflict Resolution and Negotiation
- Strong Presentation and Communication Skills

Tools Experience
- Monitoring: New Relic, SolarWinds
- Virtualization: VMware, Hyper-V
- Cloud Platforms: Azure, AWS, GCP
- ITSM & CMDB: ServiceNow
- Middleware: JBoss, WebSphere
- CRM & ERP: Salesforce, SAP, Sage
- Databases: Oracle, MS SQL, DB2
- OS: Windows, Linux, Citrix
- Network Protocols: DNS, HTTP, LDAP, SMTP

Why Join TwoSD?
At TwoSD, innovation isn’t a department—it’s a mindset. Here, your voice matters, your expertise is valued, and your growth is supported by a collaborative culture that blends mentorship with autonomy. With access to cutting-edge tools, meaningful projects, and a global knowledge network, you’ll do work that counts—and evolve with every challenge.

IT Operations Manager
Location: Gurugram, India / Virtual
Company: TwoSD (2SD Technologies Limited)
Industry: Information Technology
Employment Type: Full-time
Date Posted: 24 May 2025

How to Apply
Interested candidates can apply through this LinkedIn job post or by email at hr@2sdtechnologies.com
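The responsibilities above centre on configuring monitoring tools for telemetry coverage and proactive issue detection. The sketch below is a generic Python availability probe, not the New Relic or SolarWinds API; the endpoint URLs and latency budget are hypothetical placeholders for whatever a real monitoring configuration would target.

```python
# Generic HTTP availability probe illustrating the proactive-detection idea.
# Endpoints and thresholds are illustrative only.
import time
import requests

ENDPOINTS = {                                   # hypothetical services to probe
    "portal": "https://example.com/health",
    "api": "https://example.com/api/ping",
}
LATENCY_BUDGET_SECONDS = 2.0

def probe(name: str, url: str) -> None:
    start = time.monotonic()
    try:
        resp = requests.get(url, timeout=5)
        elapsed = time.monotonic() - start
        if resp.status_code != 200 or elapsed > LATENCY_BUDGET_SECONDS:
            print(f"ALERT {name}: status={resp.status_code} latency={elapsed:.2f}s")
        else:
            print(f"OK    {name}: latency={elapsed:.2f}s")
    except requests.RequestException as exc:    # unreachable or timed out
        print(f"ALERT {name}: unreachable ({exc})")

if __name__ == "__main__":
    for service, url in ENDPOINTS.items():
        probe(service, url)
```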
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job title: Data Scientist/AI Engineer

About Quantaco
At Quantaco, we deliver state-of-the-art predictive financial data services for the Australian hospitality industry. We are the eighth-fastest growing company in Australia as judged by the country’s flagship financial newspaper, The Australian Financial Review, and we are continuing to accelerate through hyper-automation. Our engineers are thought leaders in the business and provide significant input into the design and direction of our technology. Our engineering roles are not singular in their focus: you will develop new data models and predictive models and ensure pipelines are fully automated and run with bullet-proof reliability. We are a friendly and collaborative team. We work using a mature, design-first development process focused on delivering new features to enhance our customers' experience and improve their bottom line. You'll always be learning at Quantaco.

About The Role
We are looking for a Data Scientist with strong software engineering capabilities to join our growing team. This is a key role in helping us unlock the power of data across our platform and deliver valuable insights to hospitality businesses. You will work on projects ranging from statistical modelling and anomaly detection to productionizing ML pipelines (from time series forecasting to neural networks and custom LLMs), integrating with Django and Flask-based web applications, and building data products on Google Cloud using PostgreSQL and BigQuery, as well as ML routines in Databricks/Vertex AI. This role is ideal for someone who thrives in a cross-functional environment, enjoys solving real-world problems with data, and can contribute to production-grade systems.

Our culture and values
Quantaco is a happy and diverse group of professionals who value a strong work ethic, authenticity, creativity, and flexibility. We work hard for each other and for our customers while having fun along the way. You can see what our team says about life at Quantaco here. If you've got a passion for creating new and impactful data-driven technology and want to realise your potential in a team that values your ideas, then we want to hear from you.

Responsibilities Of The Role
- Build and deploy data-driven solutions and machine learning models into production.
- Collaborate with engineers to integrate models into Django/Flask applications and APIs.
- Develop and maintain data pipelines using Python and SQL.
- Proactively seek to link analytical outputs to commercial outcomes.
- Provide technical expertise for proof-of-concept (PoC) and minimum viable product (MVP) phases.
- Clean, transform, and analyse large datasets to extract meaningful insights.
- Write clean, maintainable Python code and contribute to the platform’s architecture.
- Work with cloud-native tools (Google Cloud, BigQuery, Cloud Functions, etc.).
- Participate in sprint planning, stand-ups, and team ceremonies as part of an Agile team.
- Document MLOps processes, workflows, and best practices to facilitate knowledge sharing and ensure reproducibility.

You’ll fit right in if you…
- Have 3+ years of experience in a data-centric or backend software engineering role.
- Are proficient in production Python, including Django or Flask, and SQL (PostgreSQL preferred).
- Are curious, analytical, and love solving data problems end-to-end.
- Demonstrate a scientific and design-led approach to delivering effective data solutions.
- Have experience with data modelling, feature engineering, and applying ML algorithms in real-world applications.
- Can develop scalable data pipelines and integrate them with cloud platforms (preferably Google Cloud).
- Communicate clearly and can collaborate across technical and non-technical teams.
- Are self-motivated and can work both individually and in a team.
- Love innovation and are always looking for ways to improve.
- Have MLOps experience (mainly with time-series forecasting, LLM and text analysis, and classification and clustering problems).

It would be fantastic (but not essential) if you…
- Hold a degree in data science, mathematics, statistics, or computer science.
- Have experience with BigQuery, Vertex AI, dbt, Databricks, or Terraform.
- Are familiar with containerisation and serverless architecture (Docker/Kubernetes/GCP).
- Have worked with BI tools or data visualization frameworks (e.g. Looker, Power BI).
- Have exposure to financial data systems or the hospitality industry.

Preferred Technical Skill Set
- Google Cloud Platform (BigQuery, Vertex AI, Cloud Run)
- Python (Django/Flask)
- Azure (MS SQL Server, Databricks)
- Postman (API development)
- dbt, stored procedures
- ML (time-series forecasting, LLM, text analysis, classification)
- Tableau/Looker Studio/Power BI
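The role above involves integrating models into Django/Flask applications. As a rough sketch of that pattern, and not Quantaco's actual stack, the snippet below trains a toy scikit-learn model on synthetic data at startup and exposes it behind a Flask route; the feature name, route, and port are illustrative assumptions.

```python
# Serve a toy regression model behind a Flask endpoint.
# Data, feature names, and the route are illustrative only.
import numpy as np
from flask import Flask, jsonify, request
from sklearn.linear_model import Ridge

app = Flask(__name__)

# Train a small model at startup on synthetic "sales vs. lagged sales" data.
rng = np.random.default_rng(0)
lagged = rng.uniform(100, 500, size=(200, 1))
sales = 1.1 * lagged[:, 0] + rng.normal(0, 20, size=200)
model = Ridge(alpha=1.0).fit(lagged, sales)

@app.route("/forecast", methods=["POST"])
def forecast():
    payload = request.get_json(force=True)          # e.g. {"lagged_sales": 250.0}
    x = np.array([[float(payload["lagged_sales"])]])
    return jsonify({"predicted_sales": float(model.predict(x)[0])})

if __name__ == "__main__":
    app.run(port=8000)
```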
Posted 3 weeks ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary:
The HLRP Business Intelligence team is tasked with aligning with stakeholders across Sales, Marketing, CX, Finance, and Product to effectively monetize our data assets. We are seeking an experienced Business Intelligence Manager to lead complex analytical projects and drive innovation in data analytics. The ideal candidate will have a demonstrated history of solving complex analytical problems and leveraging machine learning algorithms to deliver actionable insights.

About the Role:
As a Senior Business Intelligence Analyst, you will engage in advanced data tasks that contribute substantially to achieving team objectives. Your analytical proficiency and adept use of visualization tools will drive insights that reveal important business trends. By collaborating with management to define opportunities and interpreting complex datasets, you help shape data-driven decision-making processes.

Key Responsibilities:
- Solve complex analytical problems using advanced data science techniques.
- Lead projects involving recommender systems, customer segmentation, product bundling, and price sensitivity measurement.
- Utilize machine learning algorithms such as collaborative filtering, Apriori, market basket analysis, and various clustering techniques.
- Apply Python, PostgreSQL, Advanced Excel, Power BI, and Tableau, while upskilling in PySpark and financial problem domains.

Qualifications:
- Experience: 6+ years.
- Proven experience as an Analytics Consultant with a focus on data science and complex problem-solving.
- Strong technical skills in Python, PostgreSQL, Advanced Excel, Power BI, and Tableau.
- Experience with machine learning algorithms and data science techniques.
- Bachelor of Technology in Mechanical Engineering from the National Institute of Technology Surat or equivalent.
- Excellent communication, documentation, and problem-solving skills.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
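Among the techniques the posting lists is customer segmentation via clustering. Below is a minimal sketch using scikit-learn's K-Means on synthetic spend/frequency features; the data, feature names, and cluster count are purely illustrative, not a description of the team's actual models.

```python
# Toy customer-segmentation example with K-Means on synthetic features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical customer features: annual spend and order frequency.
spend = np.concatenate([rng.normal(200, 30, 100), rng.normal(900, 80, 100)])
frequency = np.concatenate([rng.normal(4, 1, 100), rng.normal(15, 3, 100)])
X = np.column_stack([spend, frequency])

X_scaled = StandardScaler().fit_transform(X)        # scale before clustering
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)

for label in np.unique(segments):
    members = X[segments == label]
    print(f"segment {label}: n={len(members)}, "
          f"avg spend={members[:, 0].mean():.0f}, "
          f"avg orders={members[:, 1].mean():.1f}")
```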
Posted 3 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Job Title: AI/ML Developer (5 Years Experience)
Location: Remote
Job Type: Full-time
Experience: 5 years

Job Summary:
We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, and model evaluation, and experience with production-level ML pipelines.

Key Responsibilities
- Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
- Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets.
- ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker.
- Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure).
- Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift detection techniques.
- Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications.
- Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable.

Required Skills & Qualifications
- Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Minimum 5 years of experience in AI/ML development.
- Proficiency in Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
- Strong understanding of statistics, data structures, and ML/DL algorithms.
- Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production.
- Experience with CI/CD tools and containerization (Docker, Kubernetes).
- Familiarity with SQL and NoSQL databases.
- Excellent problem-solving and communication skills.

Preferred Qualifications
- Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK).
- Knowledge of MLOps best practices and tools.
- Experience with version control systems like Git.
- Familiarity with big data technologies (Spark, Hadoop).
- Contributions to open-source AI/ML projects or publications in relevant fields.
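The responsibilities above mention building ML pipelines with tools such as MLflow. Below is a hedged sketch of MLflow-style experiment tracking around a toy classifier; a local MLflow installation is presumed, and the experiment name, hyperparameters, and dataset are assumptions rather than any particular production pipeline.

```python
# Log parameters, a metric, and a fitted model with MLflow tracking.
# Experiment name and hyperparameters are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

mlflow.set_experiment("demo-classifier")            # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                       # record hyperparameters
    mlflow.log_metric("accuracy", acc)              # record evaluation result
    mlflow.sklearn.log_model(model, "model")        # persist the fitted model
    print(f"logged run with accuracy={acc:.3f}")
```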
Posted 3 weeks ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Relocation Assistance Offered Within Country
Job Number #166100 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our dependable household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by their values, then give your career a reason to smile...every single day.

The Experience
In today’s dynamic analytical/technological environment, it is an exciting time to be a part of the GLOBAL ANALYTICS team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this ever-changing landscape.

What role will you play as a member of Colgate's Analytics team?
The GLOBAL DATA SCIENCE & ADVANCED ANALYTICS vertical in Colgate-Palmolive is focused on working on business cases which have big $ impact and scope for scalability, with a clear focus on addressing the business questions with recommended actions. The Data Scientist position would lead GLOBAL DATA SCIENCE & ADVANCED ANALYTICS projects within the Analytics Continuum, conceptualizing and building predictive modeling, simulations, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Effectiveness, Forecasting, etc., and needs to manage relationships with the business independently to drive projects such as Price Promotion, Marketing Mix, and Forecasting.

Who Are You…
You are a function expert -
- Leads GLOBAL DATA SCIENCE & ADVANCED ANALYTICS projects within the Analytics Continuum
- Conceptualizes and builds predictive modeling, simulations, and optimization solutions to address business questions or use cases
- Applies ML and AI to analytics algorithms to build inferential and predictive models allowing for scalable solutions to be deployed across the business
- Conducts model validations and continuous improvement of the algorithms, capabilities, or solutions built
- Deploys models using Airflow and Docker on Google Cloud Platform

You connect the dots -
- Merges multiple data sources and builds statistical/machine learning models for Price and Promo Elasticity Modeling and Marketing Mix Modeling to derive actionable business insights and recommendations
- Assembles large, sophisticated data sets that meet functional/non-functional business requirements
- Builds data and visualization tools for business analytics to assist in decision making

You are a collaborator -
- Works closely with Division Analytics team leads
- Works with data and analytics specialists across functions to drive data solutions

You are an innovator -
- Identifies, designs, and implements new algorithms and process improvements, while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability

Qualifications
What you’ll need
- BE/BTech (Computer Science or Information Technology preferred), MBA or PGDM in Business Analytics/Data Science, additional DS certifications or courses, or MSc/MStat in Economics or Statistics
- 3+ years of experience in building data models and driving insights
- Hands-on experience developing statistical models such as linear regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, Bayesian regression, etc.
- Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR
- Strong understanding of cloud frameworks (Google Cloud, Snowflake) and services like Kubernetes, Cloud Build, Cloud Run
- Knowledge of GitHub and Airflow for coding, model executions, and model deployment on cloud platforms
- Working knowledge of tools like Looker, Domo, and Power BI, and web app frameworks using plotly, pydash, and SQL
- Experience working directly with business teams (client-facing role), supporting and working with multi-functional teams in a dynamic environment

What You’ll Need…(Preferred)
- Managing, transforming, and developing statistical models for RGM/Pricing and/or Marketing Effectiveness
- Experience with third-party data, i.e., syndicated market data, point of sale, etc.
- Working knowledge of the consumer packaged goods industry
- Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Experience visualizing/presenting data for partners using Looker, Domo, pydash, plotly, d3.js, ggplot2, streamlit, etc.
- Willingness and ability to experiment with new tools and techniques
- Ability to maintain personal composure and thoughtfully handle difficult situations
- Knowledge of Google products (BigQuery, Data Studio, Colab, Google Slides, Google Sheets, etc.)

Our Commitment to Diversity, Equity & Inclusion
Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve — and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect and have the support of leadership to impact the business in a meaningful way.

Equal Opportunity Employer
Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
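The posting above centres on price and promo elasticity modelling. As an illustrative sketch, and not Colgate's methodology, the snippet below estimates an own-price elasticity from a log-log regression on synthetic demand data, where the true elasticity of -1.8 is baked in for demonstration.

```python
# Estimate own-price elasticity via a log-log regression on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
price = rng.uniform(1.0, 5.0, size=500)
# Synthetic demand with a true price elasticity of -1.8 plus noise.
units = np.exp(6.0 - 1.8 * np.log(price) + rng.normal(0, 0.1, size=500))

X = np.log(price).reshape(-1, 1)
y = np.log(units)
model = LinearRegression().fit(X, y)

# In a log-log specification the slope on log(price) is the elasticity.
print(f"estimated price elasticity: {model.coef_[0]:.2f}")
```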
Posted 3 weeks ago
1.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Relocation Assistance Offered Within Country
Job Number #165136 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries specializing in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values—Caring, Inclusive, and Courageous—we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

About Colgate-Palmolive
Do you want to come to work with a smile and leave with one as well? In between those smiles, your day consists of working in a global organization, continually learning and collaborating, having stimulating discussions, and making impactful contributions! If this is how you see your career, Colgate is the place to be! Our dependable household brands, dedicated employees, and sustainability commitments make us a company passionate about building a future to smile about for our employees, consumers, and surrounding communities. The pride in our brand fuels a workplace that encourages creative thinking, champions experimentation, and promotes authenticity, which has contributed to our enduring success. If you want to work for a company that lives by their values, then give your career a reason to smile...every single day.

The Experience
In today’s dynamic analytical/technological environment, it is an exciting time to be a part of the CBS Analytics team at Colgate. Our highly insight-driven and innovative team is dedicated to driving growth for Colgate-Palmolive in this constantly evolving landscape.

What role will you play as a member of Colgate's Analytics team?
The CBS Analytics vertical in Colgate-Palmolive is passionate about working on business cases which have big $ impact and scope for scalability, with a clear focus on addressing the business questions with recommended actions. The Data Scientist position would lead CBS Analytics projects within the Analytics Continuum, conceptualizing and building predictive modeling, simulations, and optimization solutions for clear $ objectives and measured value. The Data Scientist would work on a range of projects across Revenue Growth Management, Market Efficiency, Forecasting, etc., and needs to handle relationships with the business independently to drive projects such as Price Promotion, Marketing Mix, and Forecasting.

Who Are You…
You are a function expert -
- Leads Analytics projects within the Analytics Continuum
- Conceptualizes and builds predictive modeling, simulations, and optimization solutions to address business questions or use cases
- Applies ML and AI to analytics algorithms to build inferential and predictive models allowing for scalable solutions to be deployed across the business
- Conducts model validations and continuous improvement of the algorithms, capabilities, or solutions built

You connect the dots -
- Drives insights from internal and external data for the business
- Assembles large, sophisticated data sets that meet functional/non-functional business requirements
- Builds data and visualization tools for business analytics to assist in decision making

You are a collaborator -
- Works closely with Division Analytics team leads
- Works with data and analytics specialists across functions to drive data solutions

You are an innovator -
- Identifies, designs, and implements new algorithms and process improvements, while continuously automating processes, optimizing data delivery, and re-designing infrastructure for greater scalability

Qualifications
What you’ll need
- Graduation/Masters in Statistics, Applied Mathematics, or Computer Science
- 1+ years of experience in building data models and driving insights
- Hands-on experience developing statistical models such as regression, ridge regression, lasso, random forest, SVM, gradient boosting, logistic regression, K-Means clustering, hierarchical clustering, etc.
- Hands-on experience with coding languages: Python (mandatory), R, SQL, PySpark, SparkR
- Knowledge of GitHub and Airflow for coding and model executions
- Experience handling, refining, and developing statistical models for RGM/Pricing and/or Marketing Efficiency and communicating insight decks to the business
- Solid understanding of tools like Tableau, Domo, and Power BI, and web app frameworks using plotly, pydash, and SQL
- Experience working directly with business teams (client-facing role), supporting and working with multi-functional teams in a dynamic environment

What You’ll Need…(Preferred)
- Experience with third-party data, i.e., syndicated market data, point of sale, etc.
- Proven understanding of the consumer packaged goods industry
- Knowledge of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks
- Experience visualizing/communicating data for partners using Tableau, Domo, pydash, plotly, d3.js, ggplot2, R Shiny, etc.
- Willingness and ability to experiment with new tools and techniques
- Good facilitation and project management skills
- Ability to maintain personal composure and thoughtfully handle difficult situations
- Knowledge of Google products (BigQuery, Data Studio, Colab, Google Slides, Google Sheets, etc.)

Our Commitment to Diversity, Equity & Inclusion
Achieving our purpose starts with our people — ensuring our workforce represents the people and communities we serve — and creating an environment where our people feel they belong; where we can be our authentic selves, feel treated with respect and have the support of leadership to impact the business in a meaningful way.

Equal Opportunity Employer
Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Reasonable accommodation during the application process is available for persons with disabilities. Please complete this request form should you require accommodation.
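The qualifications above mention using Airflow for model executions. The following is a minimal Airflow 2.x DAG sketch of that idea; the DAG id, schedule, and task bodies are placeholders rather than an actual Colgate workflow.

```python
# Minimal Airflow 2.x DAG: extract features, then retrain a model.
# DAG id, schedule, and task logic are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pulling model inputs from the warehouse")   # placeholder step

def train_model():
    print("refitting the pricing model")               # placeholder step

with DAG(
    dag_id="pricing_model_refresh",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features",
                             python_callable=extract_features)
    train = PythonOperator(task_id="train_model",
                           python_callable=train_model)

    extract >> train                       # run extraction before training
```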
Posted 3 weeks ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role

Team Overview
This position requires hands-on knowledge of Linux operating systems, server technologies, automation, and cloud platforms, plus a basic knowledge of industry hardware types and interoperability as well as application software. The individual in this role must collaborate effectively with system engineers both locally and globally to ensure seamless operations and support. The person may also be called upon to work with other team colleagues on builds, updates, testing, and sharing results, and to provide operational support for legacy, current, and new technologies.
The global Aladdin Engineering SysOps UNIX team is a sizable team of over 50 individuals located in six different regions that supports a highly available UNIX environment hosted on prem and on cloud. We are seeking a senior leader to provide technical leadership and lead the Unix platform across Cloud, Kubernetes, and on-prem environments, ensuring that products and services are delivered as agreed to all lines of business. This role primarily involves weekday work, but flexibility to work weekends is required as needed.

Role Responsibility
Successful performance requires an individual to be able to perform these duties above a satisfactory level. The following requirements are representative of the knowledge, skills, and proficiencies required.
- Lead system operations across cloud and on-premises IT UNIX infrastructure
- Engineer, administer, and support services and products deployed to Azure or AWS platforms
- Assist with the development of automated tools/scripts for use across the enterprise
- Analytics and reporting using cloud-native tools
- Responsible for OS support for 12,000+ hosts located globally, running Red Hat Enterprise Linux
- Review dashboards and reports for new vulnerabilities and the rollout status of existing ones

Role may also include…
- Server builds, monitoring, configuration, and ongoing maintenance of servers
- Performing system administration duties across the environment to provide support services for customers
- Maintaining server availability for all sites and periodically participating in DR exercises
- Good problem solving, teamwork, communication, and customer service skills are a must
- Resolving help desk critical issues and processing items in a ticket queue; there are no after-hours on-call requirements, but the individual should remain flexible for critical situations
- Troubleshooting and resolving system issues and providing operations support for bridge calls
- Participating in the development and implementation of UNIX-related projects
- Monitoring servers and remediating alerts that get raised
- Following documented processes and procedures is critical; good, clear communication is an absolute must

Experience
- Extensive knowledge of RHEL
- Demonstrable experience in cloud environments, with a preference for Azure, though AWS or Google Cloud knowledge is also valuable
- Experience with Kubernetes/containers is necessary
- Strong proficiency in coding/scripting is essential, particularly with Python, Go, Perl, or Java
- Familiarity with modern DevOps, CI/CD, and SRE principles and techniques is required
- Proven track record in build/Kickstart/deploy processes and configuration management tools such as Puppet
- Expertise in volume management and clustering: LVM, VxFS, VCS, and VVR
- Understanding of server hardware from vendors such as HPE and Dell
- Knowledge of network services such as TCP/IP, NFS, DNS, LDAP, FTP, NTP, Samba, and Autofs
- Skills in security and performance monitoring and analysis
- Conceptual understanding of storage and networking components

Manager Comments
- A minimum of 8+ years of comparable industry experience
- Bachelor's degree, preferably in Computer Science or Information Technology
- This role primarily involves weekday work, but flexibility to work weekends is required as needed

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock’s hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.

About BlackRock
At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive.
For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock
BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
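The role above emphasises automated tools/scripts and alert remediation across a large Linux estate. The short Python sketch below flags filesystems above a utilisation threshold; the mount points and threshold are illustrative, and real tooling would feed results into the team's monitoring stack rather than print them.

```python
# Flag filesystems over a utilisation threshold using only the standard library.
# Mount points and the threshold are illustrative.
import shutil

MOUNT_POINTS = ["/", "/var", "/home"]       # hypothetical filesystems to check
THRESHOLD_PERCENT = 85

def check_mount(path: str) -> None:
    usage = shutil.disk_usage(path)
    used_pct = 100 * (usage.total - usage.free) / usage.total
    status = "ALERT" if used_pct >= THRESHOLD_PERCENT else "ok"
    print(f"{status:5s} {path}: {used_pct:.1f}% used")

if __name__ == "__main__":
    for mount in MOUNT_POINTS:
        try:
            check_mount(mount)
        except FileNotFoundError:
            print(f"skip  {mount}: mount point not present")
```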
Posted 3 weeks ago
0.0 years
0 Lacs
Bhopal, Madhya Pradesh
On-site
Position: Python Intern specializing in AI
Location: Bhopal, Madhya Pradesh (Work from Office)
Duration: 3 to 6 Months

✅ Must-Have Skills
Core Python Programming:
- Functions, loops, list comprehensions, classes
- Error handling (try-except), logging
- File I/O operations, working with JSON/CSV
Python Libraries for AI/ML:
- numpy, pandas – Data manipulation & analysis
- matplotlib, seaborn – Data visualization
- scikit-learn – Classical machine learning models
- Basic familiarity with tensorflow or pytorch
- Working knowledge of OpenAI / Transformers (bonus)
AI/ML Fundamentals:
- Supervised and unsupervised learning (e.g., regression, classification, clustering)
- Concepts of overfitting, underfitting, and the bias-variance tradeoff
- Train-test split, cross-validation
- Evaluation metrics: accuracy, precision, recall, F1-score, confusion matrix
Data Preprocessing:
- Handling missing data, outliers
- Data normalization, encoding techniques
- Feature selection & dimensionality reduction (e.g., PCA)
Jupyter Notebook Proficiency:
- Writing clean, well-documented notebooks
- Using markdown for explanations and visualizing outputs
Version Control:
- Git basics (clone, commit, push, pull)
- Using GitHub/GitLab for code collaboration

✅ Good-to-Have Skills
Deep Learning:
- Basic understanding of CNNs, RNNs, transformers
- Familiarity with keras or torch.nn for model building
Generative AI:
- Prompt engineering, working with LLM APIs like OpenAI or Hugging Face
- Experience with vector databases (Qdrant, FAISS)
NLP:
- Tokenization, stemming, lemmatization
- TF-IDF, Word2Vec, BERT basics
- Projects in sentiment analysis or text classification
Tools & Platforms:
- VS Code, JupyterLab
- Google Colab / Kaggle
- Docker (basic understanding)
Math for AI:
- Linear algebra, probability & statistics
- Basic understanding of gradients and calculus

✅ Soft Skills & Project Experience
- Participation in mini-projects (e.g., spam detector, digit recognizer)
- Kaggle competition experience
- Ability to clearly explain model outputs and results
- Documenting findings and creating simple dashboards or reports

Job Types: Full-time, Internship
Pay: From ₹5,000.00 per month
Schedule: Day shift
Work Location: In person
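Several must-have items above (train-test split, a scikit-learn model, and the listed evaluation metrics) come together in the short sketch below, run on a built-in dataset; it is a beginner-level illustration, not part of the internship programme itself.

```python
# Train-test split, a simple classifier, and the listed evaluation metrics.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", round(accuracy_score(y_test, pred), 3))
print("precision:", round(precision_score(y_test, pred), 3))
print("recall   :", round(recall_score(y_test, pred), 3))
print("F1-score :", round(f1_score(y_test, pred), 3))
print("confusion matrix:\n", confusion_matrix(y_test, pred))
```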
Posted 3 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role: OSTTRA India
The Role: Database Administrator
The Team: The OSTTRA Technology team is composed of Capital Markets Technology professionals who build, support and protect the applications that operate our network. The technology landscape includes high-performance, high-volume applications as well as compute-intensive applications, leveraging contemporary microservices and cloud-based architectures.
The Impact: Together, we build, support, protect and manage high-performance, resilient platforms that process more than 100 million messages a day. Our services are vital to automated trade processing around the globe, managing peak volumes and working with our customers and regulators to ensure the efficient settlement of trades and effective operation of global capital markets.
What’s in it for you: We are looking for highly motivated technology professionals who will strengthen our specialisms and champion our uniqueness to create a company that is collaborative, respectful, and inclusive to all. You will have 10+ years’ experience as a DBA to meet the needs of our expanding portfolio of Financial Services clients. This is an excellent opportunity to be part of a team based out of Gurgaon and to work with colleagues across multiple regions globally.

Role Summary
We are looking for an experienced Oracle DBA to be a part of the company’s Infrastructure team. This team is responsible for managing databases both in the cloud (AWS & GCP) and on-premises. We support and maintain several database technologies, including Oracle, PostgreSQL, SQL Server and MySQL, with Oracle being the dominant estate. Multi-database experience is therefore required, along with experience in at least one cloud, preferably AWS. We are looking for an experienced, hands-on Oracle DBA who can work across the operations and engineering side. We are embarking on several migrations into the cloud, so experience with Oracle database migrations is a must.

Key Accountabilities
- Database operational support: work in a shift model (24x7) and provide on-call support during weekends and holidays.
- Support application release activities.
- Handle day-to-day issues such as user access, performance tuning, backup and recovery, standby management, and migrations.
- Handle incidents.
- Design and build optimized, cost-effective, fault-tolerant database solutions.
- Handle homogeneous and heterogeneous database migrations using GoldenGate, Data Pump, DMS, pg_dump/pg_restore, logical replication, and Data Guard (Oracle and PostgreSQL).
- Develop automated solutions for manual procedures.
- Manage databases in cloud environments such as AWS RDS (code the infrastructure with Terraform).

Requirements
- 8+ years of experience with Oracle database as a DBA
- 2+ years of experience in PostgreSQL
- 2+ years of experience managing Oracle on RDS and EC2
- Experience in Oracle database migration and management in AWS (RDS, EC2)
- Experience with migration technologies such as GoldenGate
- Extensive experience with Oracle 19c, Oracle Data Guard, RMAN, cloning software (NetApp, Delphix), RAC, and performance tuning
- Understanding of storage technologies such as EBS and NetApp from a database perspective
- Experience in cloning and refreshing Oracle and PostgreSQL databases
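The accountabilities above include automating manual procedures and managing AWS RDS. As one hedged example of that kind of Python automation, and not an OSTTRA tool, the sketch below uses boto3 to list RDS instances and flag weak backup retention; the region and the seven-day expectation are assumptions.

```python
# List RDS instances and flag those with short automated-backup retention.
# Assumes working AWS credentials; region and the 7-day policy are illustrative.
import boto3

def audit_rds_backups(region: str = "ap-south-1") -> None:
    rds = boto3.client("rds", region_name=region)
    paginator = rds.get_paginator("describe_db_instances")
    for page in paginator.paginate():
        for db in page["DBInstances"]:
            name = db["DBInstanceIdentifier"]
            engine = db["Engine"]
            retention = db["BackupRetentionPeriod"]   # 0 means backups disabled
            flag = "ALERT" if retention < 7 else "ok"
            print(f"{flag:5s} {name} ({engine}): retention={retention} days")

if __name__ == "__main__":
    audit_rds_backups()
```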
Added Advantages
- Experience in automation with Python
- Experience with infrastructure coding tools (preferably Terraform)
- Experience with Git
- Experience in any OS clustering
- Experience with AWS Aurora clusters

The Location: Gurgaon, India

About Company Statement
OSTTRA is a market leader in derivatives post-trade processing, bringing innovation, expertise, processes and networks together to solve the post-trade challenges of global financial markets. OSTTRA operates cross-asset post-trade processing networks, providing a proven suite of Credit Risk, Trade Workflow and Optimisation services. Together these solutions streamline post-trade workflows, enabling firms to connect to counterparties and utilities, manage credit risk, reduce operational risk and optimise processing to drive post-trade efficiencies.
OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. These businesses have an exemplary track record of developing and supporting critical market infrastructure and bring together an established community of market participants comprising all trading relationships and paradigms, connected using powerful integration and transformation capabilities.

About OSTTRA
Candidates should note that OSTTRA is an independent firm, jointly owned by S&P Global and CME Group. As part of the joint venture, S&P Global provides recruitment services to OSTTRA; however, successful candidates will be interviewed and directly employed by OSTTRA, joining our global team of more than 1,200 post-trade experts.
OSTTRA was formed in 2021 through the combination of four businesses that have been at the heart of post-trade evolution and innovation for the last 20+ years: MarkitServ, Traiana, TriOptima and Reset. OSTTRA is a joint venture, owned 50/50 by S&P Global and CME Group. With an outstanding track record of developing and supporting critical market infrastructure, our combined network connects thousands of market participants to streamline end-to-end workflows, from trade capture at the point of execution, through portfolio optimization, to clearing and settlement.
Joining the OSTTRA team is a unique opportunity to help build a bold new business with an outstanding heritage in financial technology, playing a central role in supporting global financial markets. Learn more at www.osttra.com.

What’s In It For You?
Benefits
We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global.
Our Benefits Include
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
20 - Professional (EEO-2 Job Categories-United States of America), BSMGMT203 - Entry Professional (EEO Job Group)
Job ID: 305229
Posted On: 2025-04-12
Location: Gurgaon, Haryana, India
Posted 3 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Summary Description
Summary of This Role
Responsible for designing, implementing, and maintaining Windows systems in company environments. This includes all approved vendor hardware and software that enable the development, operational, and production support groups to perform their assigned tasks. Serves as a technical resource for the company and is responsible for resolving issues with the hardware and software used at company installations.

What Part Will You Play?
- Installs, documents, and configures Windows operating systems. Reviews complex designs and configures hardware, peripherals, services, settings, directories, storage, etc. in accordance with design requirements.
- Performs Operational Acceptance Testing and evaluations as part of Service Integration, ensuring acceptability and usability at a scale, capacity, resiliency, and reliability level. May provide estimates of work effort and impact of projects and tasks.
- May design and implement multi-site Windows Active Directory infrastructure.
- Monitors systems to ensure platforms are available in accordance with Service Level Agreements (SLAs). Provides support to ensure that the monitoring systems are available at all times and that the appropriate thresholds and alerts have been set to ensure system availability. Reacts and responds to events in accordance with escalation procedures.
- Provides complex statistical information to datacentre management for weekly and monthly status updates, plus additional information as required.
- Acts as 3rd-line technical support. Deploys changes in accordance with the Global Payments change control process. Raises, updates, and closes change control tickets in accordance with Service Management guidelines.
- Monitors and reviews system logs, detects and troubleshoots problems, and escalates to the appropriate level. Supports issue resolution as and when required, using the available ticketing application to record activities. Provides feedback and updates to the incident resolution support teams.
- Provides on-call support within the 24/7 on-call structure. Investigates, troubleshoots, and provides mentoring for escalated issues.
- Conducts complex system maintenance by planning and developing strategy for patch management, firmware management, and operating system upgrades in line with best practices across Global Payments and the industry at large. Provides out-of-hours support for pre-arranged changes and maintenance events.
- Ensures systems are backed up in accordance with required practices and procedures. Mentors less experienced team members in becoming active participants in maintenance functions.
- Reports and assists in the investigation of security breaches in accordance with Information Security guidelines. Reviews and investigates any issues identified via security monitoring applications. Reviews system access in line with required practices and procedures following Information Security guidelines. Maintains the systems for which they are responsible to the level required to meet the Payment Card Industry (PCI) Security Standards and other applicable industry best practices.
- Creates and reviews the availability of disaster recovery systems, maintaining code, configuration, and documentation in line with production systems. Performs regular disaster recovery testing for internal and client-facing systems.
- Interacts closely with the respective client service representatives and works with all levels of team members across business units within the company.
- Provides team members with detailed platform overview training and supporting documentation for operational, configuration, or other procedural purposes. Attends status calls when requested and provides detailed technical support.
- Evaluates all systems supported or maintained for potential service improvements utilizing automation and orchestration technologies.
- Ensures skills are kept up to date by attending appropriate courses and utilizing reference materials, Internet resources, and vendor-sponsored seminars.

What Are We Looking For in This Role?
Minimum Qualifications
- Bachelor's degree; relevant experience or degree in: major in Computer Science preferred, other majors considered. Willing to accept additional experience in lieu of a degree.
- Typically a minimum of 4 years of relevant experience in system administration or related experience.

Preferred Qualifications
- Typically a minimum of 6 years of relevant experience implementing and maintaining Active Directory; Windows Server; MS Windows Clustering; enterprise SAN and NAS configurations; MS SCCM and VMware VCM; antivirus software, application whitelisting and device control; TCP/IP and other networking principles including DNS and DHCP; and a scripting language.
- Experience managing VMware virtualization technologies: Virtual Center management and administration; vSphere Server, vSphere Client, and vCenter Server; installation and support of VMware View including pool management, entitlements, upgrades, and break/fix; deploying virtual machines and using technologies such as snapshots, clones, and templates.
- MCSE certified or equivalent.

What Are Our Desired Skills and Capabilities?
- Skills / Knowledge - A seasoned, experienced professional with a full understanding of the area of specialization; resolves a wide range of issues in creative ways. This job is the fully qualified, career-oriented, journey-level position.
- Job Complexity - Works on problems of diverse scope where analysis of data requires evaluation of identifiable factors. Demonstrates good judgment in selecting methods and techniques for obtaining solutions. Networks with senior internal and external personnel in own area of expertise.
- Supervision - Normally receives little instruction on day-to-day work, general instructions on new assignments.
- Active Directory - Windows Server, MS Windows Clustering
Posted 3 weeks ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts. Job Category Software Engineering Job Details About Salesforce We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. Role Description Our Salesforce Database (SDB) team provided a highly available database for Salesforce applications. Within the large SDB team, we provide a highly durable and available distributed storage for public clouds. We are defining the next generation of trusted enterprise computing in the cloud. We're a fast-paced, metrics driven team. We're highly collaborative and work across all areas of our technology stack. We live and breathe transactional systems, distributed systems and enterprise reliability, availability and scale. As part of building our team in India, we are looking for engineers and leaders who are passionate about working on the RDBMS technology of massive scale and one that thrives with continuous innovation. Required Skills And Experience The team is seeking a highly qualified and energetic Principal Software Engineer who will be responsible for working on development scalable, resilient and fault tolerant transactional and distributed systems with primary focus on the data storage. The Principal Software Engineer will be responsible for requirements, architecture/design and hands-on implementation. Experience designing, developing scalable, resilient and fault tolerant transactional and distributed systems in enterprise production environments. Highly skilled in Java or C in a Unix/ Linux Environment, with an understanding of modern object-oriented programming techniques and design patterns Experience using telemetry and metrics to drive operational excellence Ability to learn quickly and deliver high quality code in a fast-paced, dynamic team environment A meticulous and detailed oriented engineer, responsible for writing one’s own functional and unit tests and help review and test teammates' code Familiar with Agile development methodology and committed to continual improvement of team performance Effective communication, strong leadership skills, team player who is capable of mentoring and being mentored by others Inventive and creative; on task and able to deliver incrementally and on time You should have 15+ years of professional experience, or a M.Tech. in a relevant academic field and 12+ years of professional experience. Experience with relational databases is a big plus. Areas where you may be working on include highly scalable, highly performant distributed systems with highly available and durable data storage capabilities that ensure high availability of the stack above that includes databases. A thorough understanding of distributed systems, system programming, working with system resources is required. 
Practical knowledge of the challenges around clustering solutions, hands-on experience deploying your code in public cloud environments, working knowledge of Kubernetes, and experience working with the data-handling APIs provided by the major public cloud vendors are highly desired skills. BENEFITS & PERKS Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more! World-class enablement and on-demand training with Trailhead.com Exposure to executive thought leaders and regular 1:1 coaching with leadership Volunteer opportunities and participation in our 1:1:1 model for giving back to the community For more details, visit https://www.salesforcebenefits.com/ Accommodations If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form. Posting Statement Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education. Show more Show less
Posted 3 weeks ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future. We are looking to hire Microsoft Azure Professionals in the following areas: Experience 8-10 Years Job Description Experience in Azure Fabric, Azure Data Factory, Azure Databricks, Azure Synapse, Azure Storage Services, Azure SQL, ETL, Azure Cosmos DB, Event Hub, Azure Data Catalog, Azure Functions, Azure Purview Create pipelines, datasets, dataflows, and integration runtimes, and monitor pipelines and trigger runs Extract, transform, and load data from source systems and process the data in Azure Databricks Prepare DB design documents for the user story based on the client requirements Work with the development team to create database structures, queries, and triggers Extensive experience in Microsoft Cloud solutions, i.e., designing, developing, and testing technologies Create SQL scripts to perform complex queries Create Synapse pipelines to migrate data from Gen2 to Azure SQL Build data migration pipelines to the Azure cloud (Azure SQL). Database migration from on-prem SQL Server to the Azure Dev environment using Azure DMS and Data Migration Assistant Lift and shift of the development server to the production server Data governance in Azure Data migration pipelines that can migrate on-prem SQL Server data to the Azure cloud (Azure SQL and Cosmos DB) Experience in using Azure Data Catalog Experience in Big Data Batch Processing Solutions; Interactive Processing Solutions; Real Time Processing Solutions Required Technical/ Functional Competencies Domain/ Industry Knowledge: Basic knowledge of the customer's business processes and the relevant technology platform or product. Able to prepare process maps, workflows, business cases and simple business models in line with customer requirements with assistance from SMEs, and apply industry standards/practices in implementation with guidance from experienced team members. Requirement Gathering And Analysis Working knowledge of requirement management processes and requirement analysis processes, tools & methodologies. Able to analyse the impact of a requested change/enhancement/defect fix and identify dependencies or interrelationships among requirements, and transition requirements for the engagement. Product/ Technology Knowledge Working knowledge of technology product/platform standards and specifications. Able to implement code or configure/customize products and provide inputs in design and architecture adhering to industry standards/practices in implementation. Analyze various frameworks/tools, review the code and provide feedback on improvement opportunities. Architecture Tools And Frameworks Working knowledge of architecture industry tools & frameworks. Able to identify pros/cons of available tools & frameworks in the market, use those as per customer requirements, and explore new tools/frameworks for implementation. Architecture Concepts And Principles Working knowledge of architectural elements, SDLC, methodologies.
Able to provide architectural design/documentation at an application or function capability level, implement architectural patterns in solutions & engagements, and communicate architecture direction to the business. Analytics Solution Design Knowledge of statistical & machine learning techniques like classification, linear regression modelling, clustering & decision trees. Able to identify the cause of errors and their potential solutions. Tools & Platform Knowledge Familiar with a wide range of mainstream commercial & open-source data science/analytics software tools, their constraints, advantages, disadvantages, and areas of application. Required Behavioral Competencies Accountability Takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team. Collaboration Shares information within the team, participates in team activities, asks questions to understand other points of view. Agility Demonstrates readiness for change, asking questions and determining how changes could impact own work. Customer Focus Identifies trends and patterns emerging from customer preferences and works towards customizing/refining existing services to exceed customer needs and expectations. Communication Targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision. Drives Results Sets realistic stretch goals for self & others to achieve and exceed defined goals/targets. Resolves Conflict Displays sensitivity in interactions and strives to understand others’ views and concerns. Certifications Mandatory At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: Flexible work arrangements, Free spirit, and emotional positivity Agile self-determination, trust, transparency, and open collaboration All support needed for the realization of business goals, Stable employment with a great atmosphere and ethical corporate culture Show more Show less
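For illustration, a minimal PySpark sketch of the extract-transform-load flow this listing describes, assuming it runs in an Azure Databricks notebook where `spark` is predefined; the storage paths and column names are hypothetical:

```python
from pyspark.sql import functions as F

# Hypothetical source and target paths on Azure Data Lake Storage Gen2
source_path = "abfss://raw@example.dfs.core.windows.net/sales/"
target_path = "abfss://curated@example.dfs.core.windows.net/sales_clean/"

# Extract: read raw CSV files landed by an upstream Azure Data Factory pipeline
raw = spark.read.option("header", True).csv(source_path)

# Transform: basic deduplication, typing, and filtering
clean = (raw
         .dropDuplicates(["order_id"])                        # hypothetical key column
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount").isNotNull()))

# Load: write as Delta for downstream Synapse / Azure SQL consumption
clean.write.format("delta").mode("overwrite").save(target_path)
```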
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
AI and Machine Learning Intern Company: INLIGHN TECH Location: Remote (100% Virtual) Duration: 3 Months Stipend for Top Interns: ₹15,000 Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The AI and Machine Learning Internship is crafted to provide practical exposure to building intelligent systems, enabling interns to bridge theoretical knowledge with real-world applications. Role Overview: As an AI and Machine Learning Intern, you will work on projects involving data preprocessing, model development, and performance evaluation. This internship will strengthen your skills in algorithm design, model optimization, and deploying AI solutions to solve real-world problems. Key Responsibilities: Collect, clean, and preprocess datasets for training machine learning models Implement machine learning algorithms for classification, regression, and clustering Develop deep learning models using frameworks like TensorFlow or PyTorch Evaluate model performance using metrics such as accuracy, precision, and recall Collaborate on AI-driven projects, such as chatbots, recommendation engines, or prediction systems Document code, methodologies, and results for reproducibility and knowledge sharing Qualifications: Pursuing or recently completed a degree in Computer Science, Data Science, Artificial Intelligence, or a related field Strong foundation in Python and understanding of libraries such as Scikit-learn, NumPy, Pandas, and Matplotlib Familiarity with machine learning concepts like supervised and unsupervised learning Experience or interest in deep learning frameworks (TensorFlow, Keras, PyTorch) Good problem-solving skills and a passion for AI innovation Eagerness to learn and contribute to real-world ML applications Internship Benefits: Hands-on experience with real-world AI and ML projects Certificate of Internship upon successful completion Letter of Recommendation for top performers Build a strong portfolio of AI models and machine learning solutions Show more Show less
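As a small illustration of the model-evaluation work described above (accuracy, precision, recall), a minimal scikit-learn sketch using a bundled example dataset as a stand-in for a real project dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Example labelled dataset standing in for an intern project dataset
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train a simple classifier and score it on held-out data
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)

print("accuracy :", round(accuracy_score(y_test, pred), 3))
print("precision:", round(precision_score(y_test, pred), 3))
print("recall   :", round(recall_score(y_test, pred), 3))
```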
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
Data Science Intern Company: INLIGHN TECH Location: Remote (100% Virtual) Duration: 3 Months Stipend for Top Interns: ₹15,000 Certificate Provided | Letter of Recommendation | Full-Time Offer Based on Performance About the Company: INLIGHN TECH empowers students and fresh graduates with real-world experience through hands-on, project-driven internships. The Data Science Internship is designed to equip you with the skills required to extract insights, build predictive models, and solve complex problems using data. Role Overview: As a Data Science Intern, you will work on real-world datasets to develop machine learning models, perform data wrangling, and generate actionable insights. This internship will help you strengthen your technical foundation in data science while working on projects that have a tangible business impact. Key Responsibilities: Collect, clean, and preprocess data from various sources Apply statistical methods and machine learning techniques to extract insights Build and evaluate predictive models for classification, regression, or clustering tasks Visualize data using libraries like Matplotlib, Seaborn, or tools like Power BI Document findings and present results to stakeholders in a clear and concise manner Collaborate with team members on data-driven projects and innovations Qualifications: Pursuing or recently completed a degree in Data Science, Computer Science, Mathematics, or a related field Proficiency in Python and data science libraries (NumPy, Pandas, Scikit-learn, etc.) Understanding of statistical analysis and machine learning algorithms Familiarity with SQL and data visualization tools or libraries Strong analytical, problem-solving, and critical thinking skills Eagerness to learn and apply data science techniques to solve real-world problems Internship Benefits: Hands-on experience with real datasets and end-to-end data science projects Certificate of Internship upon successful completion Letter of Recommendation for top performers Build a strong portfolio of data science projects and models Show more Show less
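A minimal pandas sketch of the collect-clean-preprocess step listed above; the DataFrame and column names are hypothetical stand-ins for a raw export:

```python
import pandas as pd

# Hypothetical raw export with duplicates, missing values, and inconsistent types
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 3, 4],
    "signup_date": ["2024-01-05", "2024-02-10", "2024-02-10", None, "2024-03-01"],
    "monthly_spend": ["120.5", "80", "80", "nan", "200"],
})

df = df.drop_duplicates(subset="customer_id")                      # remove duplicate rows
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["monthly_spend"] = pd.to_numeric(df["monthly_spend"], errors="coerce")
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())

print(df.dtypes)
print(df.head())
```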
Posted 3 weeks ago
0.0 - 1.0 years
0 Lacs
Madhapur, Hyderabad, Telangana
On-site
About us: We are a software development company providing XMS solutions to retail businesses in the USA. We're seeking a Junior SEO Executive with good hands-on experience in SEO, On-Page Optimization, Off-Page Optimization, and Technical SEO. Job Title: SEO Executive Location: Hyderabad, Telangana Experience Required: 1 to 3 Years About the Role: We are looking for a results-driven and detail-oriented SEO Executive to join our dynamic marketing team. The ideal candidate should possess hands-on experience with both on-page and off-page SEO, have a keen understanding of technical SEO, and a data-driven mindset to improve organic visibility and ROI across multiple digital properties. You will work closely with content creators, developers, and marketing strategists to execute SEO campaigns for both B2B and B2C brands, contributing directly to business growth through strategic search engine optimization. Key Responsibilities: SEO Strategy & Planning Develop and implement robust SEO strategies aligned with overall business goals. Conduct detailed keyword research, competitor analysis, and market trend analysis. Create SEO roadmaps and monthly action plans. On-Page Optimization Optimize website content, meta tags, internal linking, and page structure. Ensure SEO best practices are followed across all web pages. Collaborate with developers to improve page speed, mobile responsiveness, and Core Web Vitals. Off-Page Optimization Build and maintain a clean, high-quality backlink profile using ethical white-hat techniques. Develop link-building campaigns including guest blogging, outreach, citations, and partnerships. Monitor backlinks and remove toxic links regularly. Technical SEO Conduct comprehensive SEO audits (crawlability, indexing, site architecture, etc.). Optimize XML sitemaps, robots.txt, and schema markup (structured data). Resolve technical issues such as broken links, redirects, duplicate content, etc. Content Collaboration & Optimization Work with content creators to produce SEO-friendly blogs, landing pages, and website copy. Perform content gap analysis and optimize for target keywords and search intent. Implement topic clustering and interlinking strategies. Reporting & Analytics Monitor and analyze performance using tools like Google Analytics, Google Search Console, Ahrefs, SEMrush, Screaming Frog, etc. Generate detailed SEO reports with insights, KPIs, and action items. A/B test SEO changes and measure impact. Key Skills Required: Bachelor's degree in Marketing, Business, Communications, IT, or a related field. Strong understanding of SEO algorithms, SERP features, and ranking factors. Proficiency in tools like Google Analytics, Google Search Console, Ahrefs, SEMrush, Moz, Screaming Frog. Familiarity with HTML, CSS, CMS (WordPress), and basic JavaScript. Hands-on experience with local SEO and Google My Business optimization. Excellent written and verbal communication skills. Strong analytical mindset and problem-solving ability. Ability to manage multiple projects and prioritize effectively. If you're interested, please send your resume to srikanth.banothu@growith.io. Job Type: Full-time Pay: ₹200,000.00 - ₹400,000.00 per month Benefits: Health insurance Schedule: UK shift Supplemental Pay: Performance bonus Application Question(s): If we give you an offer today, how soon can we expect you to join? How much experience do you have in B2B Marketing? Do you have working experience in the IT, e-commerce, or food & beverage domain? What is your expected CTC? 
Can you attend a face-to-face interview at our Madhapur, Hyderabad office? Your current location? Experience: SEO: 1 year (Required) On-Page Optimization: 1 year (Required) Off-Page Optimization: 1 year (Required) total work: 1 year (Required) Language: English (Required) Location: Madhapur, Hyderabad, Telangana (Required) Work Location: In person
Posted 3 weeks ago
10.0 years
0 Lacs
Mohali district, India
On-site
About Americana Restaurants International PLC: Americana Restaurants International PLC is a pioneering force in the out-of-home dining industry of the MENA region and Kazakhstan and ranks among the world's leading operators of Quick Service Restaurants (QSR) and casual dining establishments. With an extensive portfolio of iconic global brands and a dominant regional presence, we have consistently driven growth and innovation for over half a century. Our expansive network of 2,500 restaurants spans 12 markets throughout the Middle East, North Africa, and Kazakhstan, from Kazakhstan in the east to Morocco in the west. Leveraging unparalleled local knowledge and capabilities, we are key players in our core markets with significant potential for further growth. We are the franchisee for some of the world’s most prominent brands in fast food and casual dining, including KFC, Pizza Hut, Hardee’s, Krispy Kreme, Wimpy, Costa Coffee, and TGI Friday’s, to name a few. Our dedicated team of over 40,000 talented employees strives to create memorable moments through exceptional food, superior service, and outstanding experiences. For more details, visit https://www.americanarestaurants.com Experience: 10+ years Strong proficiency with PostgreSQL, Database Performance Tuning, Optimization, and Troubleshooting, Database Security Principles and best practices, Database backup and recovery procedures, Database Clustering and Replication technologies, Database Automation and Scripting, Full stack web development using Java technologies, Relevant certifications (e.g., Microsoft Certified: Azure Database Administrator Associate, Oracle Certified Professional), Cloud-based database services (e.g., Azure PostgreSQL Databases), NoSQL databases (e.g., MongoDB, Cassandra), Familiarity with DevOps practices and tools. Show more Show less
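As one hedged illustration of the performance-troubleshooting work this role calls for, a short Python sketch that lists the longest-running active statements via PostgreSQL's built-in pg_stat_activity view; the connection details are hypothetical:

```python
import psycopg2

# Hypothetical connection details; pg_stat_activity is a built-in system view
conn = psycopg2.connect(host="db.example.internal", dbname="appdb",
                        user="monitor", password="changeme")

sql = """
SELECT pid, state, now() - query_start AS runtime, left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST
LIMIT 10;
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)              # surface the longest-running active statements
    for row in cur.fetchall():
        print(row)
```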
Posted 3 weeks ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Exciting Opportunity at Eloelo: Join the Future of Live Streaming and Social Gaming! Are you ready to be a part of the dynamic world of live streaming and social gaming? Look no further! Eloelo, an innovative Indian platform founded in February 2020 by ex-Flipkart executives Akshay Dubey and Saurabh Pandey, is on the lookout for passionate individuals to join our growing team in Bangalore. About Us: Eloelo stands at the forefront of multi-host video and audio rooms, offering a unique blend of interactive experiences, including chat rooms, PK challenges, audio rooms, and captivating live games like Lucky 7, Tambola, Tol Mol Ke Bol, and Chidiya Udd. Our platform has successfully attracted audiences from all corners of India, providing a space for social connections and immersive gaming. Recent Milestone: In pursuit of excellence, Eloelo reached a significant milestone by raising $22Mn in October 2023 from a diverse group of investors, including Lumikai, Waterbridge Capital, Courtside Ventures, Griffin Gaming Partners, and other esteemed new and existing contributors. Why Eloelo? Be a part of a team that thrives on creativity and innovation in the live streaming and social gaming space. Rub shoulders with the stars! Eloelo regularly hosts celebrities such as Akash Chopra, Kartik Aryan, Rahul Dua, Urfi Javed, and Kiku Sharda from the Kapil Sharma Show; that's our level of celebrity collaboration. Work with a world-class, high-performance team that constantly pushes boundaries and redefines what is possible. Fun and work in the same place, with an amazing work culture, flexible timings, and a vibrant atmosphere. We are looking to hire a business analyst to join our growth analytics team. This role sits at the intersection of business strategy, marketing performance, creative experimentation, and customer lifecycle management, with a growing focus on AI-led insights. You’ll drive actionable insights to guide our performance marketing, creative strategy, and lifecycle interventions, while also building scalable analytics foundations for a fast-moving growth team. We’re looking for 2 to 4 years of experience in business/marketing analytics or growth-focused analytics roles Strong grasp of marketing funnel metrics, CAC, ROAS, LTV, retention, and other growth KPIs SQL Mastery: 3+ years of experience writing and optimizing complex SQL queries over large datasets (BigQuery/Redshift/Snowflake) Experience in campaign performance analytics across Meta, Google, Affiliates etc. Comfort working with creative performance data (e.g., A/B testing, video/image-led analysis) Experience with CLM campaign analysis via tools like MoEngage, Firebase. 
Ability to work with large datasets, break down complex problems, and derive actionable insights Hands-on experience or strong interest in applying AI/ML for automation, personalization, or insight generation is a plus Good business judgment and a strong communication style that bridges data and decision-making Comfort juggling short-term tactical asks and long-term strategic workstreams Experience in a fast-paced consumer tech or startup environment preferred You will Own reporting, insights, and experimentation across performance marketing, creative testing, and CLM Partner with growth, product, and content teams to inform campaign decisions, budget allocation, and targeting strategy Build scalable dashboards and measurement frameworks for marketing and business KPIs Drive insights into user behavior and campaign effectiveness by leveraging cohorting, segmentation, and funnel analytics Evaluate and experiment with AI tools or models to automate insights, build scoring systems, or improve targeting/personalization Be the go-to person for identifying growth levers, inefficiencies, or new opportunities across user acquisition and retention Bonus Points Experience working with marketing attribution tools (Appsflyer, Adjust etc.) Hands-on experience with Python/R for advanced analysis or automation Exposure to AI tools for marketing analytics (e.g., creative scoring, automated clustering, LLMs for insights) Past experience working in analytics for a D2C, gaming, or consumer internet company You’ve built marketing mix models or predictive LTV models Show more Show less
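For illustration, a minimal pandas sketch of the CAC/ROAS reporting this role centres on, assuming a hypothetical campaign-level extract (in practice this would come from BigQuery or an MMP export):

```python
import pandas as pd

# Hypothetical campaign-level extract with spend and outcome columns
campaigns = pd.DataFrame({
    "channel":   ["Meta", "Google", "Affiliate"],
    "spend":     [250_000, 180_000, 60_000],
    "installs":  [12_000, 9_500, 4_200],
    "new_users": [6_000, 5_200, 1_900],
    "revenue":   [410_000, 260_000, 75_000],
})

campaigns["CAC"]  = campaigns["spend"] / campaigns["new_users"]     # cost per acquired user
campaigns["ROAS"] = campaigns["revenue"] / campaigns["spend"]       # return on ad spend

print(campaigns.sort_values("ROAS", ascending=False))
```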
Posted 3 weeks ago
2.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Job Title: Marketing Data Science - Senior Analyst Exp: 2+ Yrs Location: Delhi, India Mode of Work: Hybrid About the Company: We are seeking a highly driven Marketing Data Scientist to join our Client Solutions Team. You will be responsible for extracting valuable insights from data, analyzing marketing performance, and providing data-driven recommendations to optimize marketing campaigns. You will collaborate closely with the Client and organization’s product teams to develop and implement advanced analytical solutions. Key Responsibilities: Apply data science methodologies, including customer segmentation, clustering, and predictive modeling, to identify distinct customer groups and behaviors and forecast key marketing metrics, guiding personalized marketing efforts. Extract, clean, and integrate data from various sources, including Customer Data Platforms (CDP), Adobe Analytics, Google Analytics, CRM systems, and Salesforce data, to ensure data accuracy and reliability for analysis. Design, execute, and analyze A/B tests to evaluate the effectiveness of marketing strategies, advertisements, and website changes, making data-driven recommendations for improvements. Assist in developing marketing attribution models to accurately measure the impact of various marketing channels and touchpoints on customer conversions and sales. Collaborate cross-functionally with marketing, product development, and IT teams to infuse data science insights into marketing strategies, enabling data-driven decision-making and enhancing customer experiences. Qualifications: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, or a related field. 2-4 years of professional experience as a Data Scientist or Analyst in a marketing environment. Experience in financial services and B2B marketing analytics will be preferred. Proficiency in programming languages such as Python or R. Proficiency in SQL for data extraction and transformation. Strong knowledge of statistical analysis, machine learning techniques, and data visualization tools. Experience with data manipulation and analysis libraries (e.g., Pandas, NumPy, scikit-learn). Excellent communication skills with the ability to convey complex findings to non-technical stakeholders. Strong problem-solving skills and attention to detail. Ability to work collaboratively in a cross-functional team environment. Show more Show less
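A minimal customer-segmentation sketch of the kind this role describes, assuming hypothetical per-customer recency/frequency/monetary features exported from a CDP or CRM:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-customer features (recency in days, purchase frequency, spend)
customers = pd.DataFrame({
    "recency_days": [5, 40, 3, 90, 12, 60, 7, 150],
    "frequency":    [12, 3, 20, 1, 8, 2, 15, 1],
    "monetary":     [900, 150, 2100, 40, 600, 90, 1300, 25],
})

# Scale features first, since K-means is distance-based
X = StandardScaler().fit_transform(customers)
customers["segment"] = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

print(customers.groupby("segment").mean())   # profile each segment for targeting
```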
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Summary: We are looking for a skilled and proactive WebSphere Portal Administrator to join our team. The ideal candidate will be responsible for the installation, configuration, administration, and maintenance of IBM WebSphere Portal environments across various platforms including RHEL and Windows. You will play a key role in ensuring the stability, security, and performance of our WebSphere infrastructure while supporting application deployments and user access requirements. Key Responsibilities: Install, configure, and maintain IBM WebSphere Portal and WebSphere Application Servers (WAS) across Linux and Windows environments. Perform setup and administration of IBM Web Content Management (WCM) and associated components. Configure and manage user registries such as IBM Tivoli Directory Server (TDS) and Microsoft Active Directory (AD). Implement disaster recovery procedures and perform regular backup/restore activities. Configure SSL certificates for secure communication on WebSphere Portal and WAS. Create and manage clusters using Deployment Manager, implementing both horizontal and vertical scaling. Handle WebSphere Portal migrations (e.g., V7 to V8.0, V8.0 to V8.5.x) and apply cumulative fixes (CFs) and interim fixes (IFIXes). Configure and troubleshoot WebSphere resources such as JDBC providers, data sources, and connection pools. Deploy portal artifacts using release builder, XMLAccess, and ConfigEngine tools. Set up and manage high-availability environments such as DB2 HADR. Optimize system performance and apply security best practices across the WebSphere environment. Work with IBM HTTP Server for reverse proxy setup and integration with WebSphere. Ensure LDAP-based security integration and manage user/group access policies. Actively monitor system health, respond to alerts, and resolve incidents within SLA. Collaborate with application teams and stakeholders to support deployments and troubleshoot issues. Required Skills & Experience: Strong knowledge of IBM WebSphere Portal and Application Server V7/V8.x (Base/ND). Experience in working with IBM HTTP Server, DB2, and LDAP integration. Hands-on experience in setting up Deployment Manager, clustering, and federating nodes. Solid understanding of SSL, security configurations, backup & restore, and system hardening. Proven experience in portal and application deployments using administrative tools and scripting. Experience with performance tuning and issue resolution in large-scale enterprise environments. Familiarity with monitoring tools, logs, and proactive response to system alerts. Excellent troubleshooting and documentation skills. Effective communication and ability to work collaboratively in a cross-functional team. Good to Have: Experience with GitHub versioning and DevOps practices. Exposure to other IBM products like Connections or Sametime. Knowledge of scripting for automation (e.g., shell, Jython). Show more Show less
Posted 3 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us: Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm’s mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology. About the travel team: Paytm Travel has revolutionized the travel industry - with a goal to empower millions of travelers who choose us as their preferred travel partner. We reached no. 3 in the travel segment in India within a span of a few years, which proves our capability and potential to become no. 1 in the near future. Being one of the largest travel platforms in the country, our aim is not only to ensure seamless, instant booking, but also a delightful journey. We strive to enrich customer experience by making every transaction transparent, honest and hassle free. To stay ahead of the curve, we are working aggressively towards our ambition to make travel affordable for all. With this customer centricity at our core, we strive to make Paytm Travel synonymous with a trustworthy travel partner. About the Role: We're seeking a data scientist with sharp business acumen to transform Paytm Travel's raw data into strategic gold. You'll dive deep into flight, train and bus data to uncover hidden patterns, predict trends, and deliver actionable insights that shape business strategy. This is a pure insights generation role - your dashboards and recommendations will directly influence leadership decisions across pricing, inventory, customer experience and growth. Key Responsibilities: 1. Advanced Travel Data Analysis Mine complex datasets across flight bookings, train (IRCTC), and bus (aggregator) ecosystems. Develop predictive models for demand forecasting, cancellation risks, and price elasticity. Create customer segmentation frameworks using clustering techniques. 2. Insight Generation & Storytelling Produce weekly insight briefs with actionable recommendations for leadership. Build self-service dashboards (Looker) tracking core metrics and anomalies. Conduct deep-dive analyses on critical business questions (e.g., "Why are bus bookings dropping in South India?"). 3. Cross-Functional Advisory Partner with Revenue Management on dynamic pricing strategies. Guide Marketing on high-value customer acquisition opportunities. Advise Product on feature prioritization based on behavioral data. What We're Looking For: Must-Have: 3-5 years in data science/advanced analytics (travel/OTA/e-commerce preferred). Expert in Python (Pandas, Scikit-learn) + SQL (complex queries, optimization). Strong statistical modeling skills (regression, time series, clustering). Experience building Tableau/Looker/Power BI dashboards. Ability to translate data into boardroom-ready insights. Nice-to-Have: Familiarity with big data tools (Spark, Hadoop). Knowledge of experimentation frameworks (A/B testing). Understanding of travel industry metrics (RPC, load factor, cancellation curves). Why join us: Because you get an opportunity to make a difference, and have a great time doing that. You are challenged and encouraged here to do stuff that is meaningful for you and for those we serve. We are successful, and our successes are rooted in our people's collective energy and unwavering focus on the customer, and that's how it will always be. Compensation: If you are the right fit, we believe in creating wealth for you. 
With an enviable 500 mn+ registered users, 21 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it. India’s largest digital lending story is brewing here. It’s your opportunity to be a part of the story! Show more Show less
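As a small, hedged illustration of the demand-forecasting work mentioned above, a seasonal-naive baseline on synthetic daily booking counts (real work would use richer models and features):

```python
import numpy as np
import pandas as pd

# Synthetic daily bus-booking counts for one hypothetical route, with a weekly pattern
rng = np.random.default_rng(7)
idx = pd.date_range("2024-01-01", periods=120, freq="D")
bookings = pd.Series(
    200 + 30 * np.sin(2 * np.pi * idx.dayofweek / 7) + rng.normal(0, 10, 120),
    index=idx, name="bookings",
)

# Seasonal-naive baseline: forecast each day with the value from 7 days earlier
forecast = bookings.shift(7)
mape = (np.abs(bookings - forecast) / bookings).dropna().mean() * 100
print(f"7-day seasonal-naive baseline MAPE: {mape:.1f}%")
```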
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Role: Oracle DBA (5+ yrs) Location: Mumbai (WFO) Skills: Oracle DBA, RAC, RMAN, Data Guard, Clustering, HA Show more Show less
Posted 3 weeks ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Strong knowledge of Splunk architecture, components, and deployment models (standalone, distributed, or clustered) Hands-on experience with Splunk forwarders, search processing, and index clustering Proficiency in writing SPL (Search Processing Language) queries and creating dashboards Familiarity with Linux/Unix systems and basic scripting (e.g., Bash, Python) Understanding of networking concepts and protocols (TCP/IP, syslog) We are looking for a Splunk Architect to join our dynamic team. In this hybrid role, you will leverage your expertise in Python programming to develop innovative solutions while harnessing the power of Splunk for data analysis, monitoring, and automation. This position is ideal for a problem-solver passionate about integrating programming with operational intelligence tools to drive efficiency and insights across the organization. Key Responsibilities Deploy Splunk Enterprise or Splunk Cloud on servers or virtual environments. Configure indexing and search head clusters for data collection and search functionalities. Deploy universal or heavy forwarders to collect data from various sources and send it to the Splunk environment Configure data inputs (e.g., syslog, SNMP, file monitoring) and outputs (e.g., storage, dashboards) Identify and onboard data sources such as logs, metrics, and events. Use regular expressions or predefined methods to extract fields from raw data Configure props.conf and transforms.conf for data parsing and enrichment. Create and manage indexes to organize and control data storage. Configure roles and users with appropriate permissions using role-based access control (RBAC). Integrate Splunk with external authentication systems like LDAP, SAML, or Active Directory Monitor user activities and changes to the Splunk environment Optimize Splunk for better search performance and resource utilization Regularly monitor the status of indexers, search heads, and forwarders Configure backups for configurations and indexed data Diagnose and resolve issues like data ingestion failures, search slowness, or system errors. Install and manage apps and add-ons from Splunkbase or custom-built solutions. Create Python scripts for automation and advanced data processing. Use KV stores for dynamic data storage and retrieval within Splunk Plan and execute Splunk version upgrades Regularly update apps and add-ons to maintain compatibility and security Ensure the underlying operating system and dependencies are up-to-date. Integrate Splunk with ITSM tools (e.g., ServiceNow), monitoring tools, or CI/CD pipelines. Use Splunk's REST API for automation and custom integrations Good to have Splunk Core Certified Admin certification Splunk Development and Administration Build and optimize complex SPL (Search Processing Language) queries for dashboards, reports, and alerts. Develop and manage Splunk apps and add-ons, including custom Python scripts for data ingestion and enrichment. Onboard and validate data sources in Splunk, ensuring proper parsing, indexing, and field extractions. Integration and Automation Leverage Python to automate Splunk administrative tasks such as monitoring, data onboarding, and alerting. Integrate Splunk with third-party tools, systems, and APIs (e.g., ServiceNow, cloud platforms, or in-house solutions). Develop custom connectors to stream data between Splunk and other platforms or databases. Data Analysis and Insights Collaborate with stakeholders to extract actionable insights from log data and metrics using Splunk. 
Create advanced visualizations and dashboards to highlight key trends and anomalies. Assist in root cause analysis for performance bottlenecks or operational incidents. System Optimization and Security Enhance Splunk search performance through Python-driven optimizations and configurations. Implement security best practices in both Python code and Splunk setups, ensuring compliance with regulatory standards. Perform regular Splunk system health checks and troubleshoot issues related to data ingestion or indexing. Collaboration and Mentoring Work closely with DevOps, Security, and Data teams to align Splunk solutions with business needs. Mentor junior developers or administrators in Python and Splunk best practices. Document processes, solutions, and configurations for future reference. Python Development: Proficient in Python 3.x, with experience in libraries such as Pandas, NumPy, Flask/Django, and Requests. Strong understanding of RESTful APIs and data serialization formats (JSON, XML). Experience with version control systems like Git. Design, develop, and maintain robust Python scripts, applications, and APIs to support automation, data processing, and integration workflows. Create reusable modules and libraries to simplify recurring tasks and enhance scalability. Debug, optimize, and document Python code to ensure high performance and maintainability. Splunk Expertise: Hands-on experience in Splunk development, administration, and data onboarding. Proficiency in SPL (Search Processing Language) for creating advanced searches, dashboards, and alerts. Familiarity with props.conf and transforms.conf configurations. Other Skills: Knowledge of Linux/Unix environments, including scripting (Bash/PowerShell). Understanding of networking protocols (TCP/IP, syslog) and log management concepts. Experience with cloud platforms (AWS, Azure, or GCP) and integrating Splunk in hybrid environments. Show more Show less
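A minimal sketch of the REST-API automation this role mentions, using Splunk's documented search-jobs endpoint on the management port; the host, credentials, index, and query are hypothetical assumptions:

```python
import requests

# Hypothetical Splunk management endpoint and service account
BASE = "https://splunk.example.internal:8089"
AUTH = ("svc_automation", "changeme")

# One-shot SPL search over the last 24 hours of a hypothetical index
payload = {
    "search": "search index=web_logs status>=500 earliest=-24h | stats count by host",
    "exec_mode": "oneshot",
    "output_mode": "json",
}

# verify=False only because management ports often use self-signed certs;
# prefer a proper CA bundle in production
resp = requests.post(f"{BASE}/services/search/jobs", data=payload,
                     auth=AUTH, verify=False, timeout=60)
resp.raise_for_status()

for result in resp.json().get("results", []):
    print(result["host"], result["count"])
```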
Posted 3 weeks ago
0 years
0 Lacs
India
Remote
CryptoChakra is an industry-leading cryptocurrency analytics and education platform committed to simplifying digital asset markets for traders, investors, and institutions. By integrating advanced machine learning frameworks, real-time blockchain intelligence, and immersive learning ecosystems, we empower users to decode market volatility with precision. Our platform delivers AI-driven price forecasts, sentiment analysis tools, and smart contract audits, complemented by curated tutorials and risk management frameworks. Focused on predictive modeling, DeFi analytics, and educational excellence, we champion transparency, integrity, and cutting-edge technology to democratize crypto literacy for a global audience. As a remote-first innovator, we bridge the gap between complex blockchain data and actionable financial strategies. Position: Data Analyst Intern (Digital Assets) Remote | Full-Time Internship | Compensation: Paid/Unpaid based on suitability Role Summary Join CryptoChakra’s analytics team to transform raw blockchain data into strategic insights that power predictive models and educational resources. This role offers hands-on experience in statistical analysis, machine learning, and crypto market research, with mentorship from industry experts. Key Responsibilities Data Analysis & Modeling: Process and analyze datasets from exchanges (CoinGecko, Binance) and blockchain explorers (Etherscan) using Python/R and SQL. Conduct statistical evaluations (regression, clustering) to identify trends in trading volumes, wallet activity, and NFT markets. Predictive Analytics Support: Assist in refining AI-driven models for price forecasting and DeFi risk assessment. Validate model accuracy against real-time market movements and on-chain metrics. Insight Communication: Create dashboards (Tableau, Power BI) and reports to translate findings into actionable strategies for traders and educators. Blockchain Metrics Decoding: Investigate smart contract interactions, gas fees, and liquidity pool dynamics to support educational content development. Qualifications Technical Skills Proficiency in Python/R for data manipulation (Pandas, NumPy) and basic machine learning (Scikit-learn). Strong understanding of statistics (hypothesis testing, probability distributions) and SQL/NoSQL databases. Familiarity with data visualization tools (Tableau, Plotly) and blockchain datasets (Etherscan, Dune Analytics) is a plus. Professional Competencies Analytical rigor to derive insights from unstructured data. Ability to articulate technical results to cross-functional teams in a remote setting. Self-driven with adaptability to Agile workflows and collaboration tools (Slack, Jira). Preferred (Not Required) Academic projects involving crypto market analysis, time-series forecasting, or NLP. Exposure to DeFi protocols (Uniswap, Aave) or cloud platforms (AWS, GCP). Pursuing or holding a degree in Data Science, Statistics, Computer Science, or related fields. What We Offer Skill Development: Master tools like TensorFlow, SQL, and blockchain analytics platforms. Portfolio Impact: Contribute to models and tutorials used by global CryptoChakra users. Flexibility: Remote work with mentorship tailored to your learning goals. Certification & Recognition: LinkedIn endorsement and completion certificate for standout performers. Show more Show less
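For illustration, a minimal pandas sketch of the kind of returns-and-volatility analysis described above; the price series is hypothetical stand-in data rather than a live API feed:

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices standing in for an exchange or market-data export
prices = pd.Series(
    [61200, 61850, 60900, 62300, 63100, 62800, 64000, 63500, 65200, 64800],
    index=pd.date_range("2025-01-01", periods=10, freq="D"),
    name="btc_usd_close",
)

returns = prices.pct_change().dropna()                         # simple daily returns
rolling_vol = returns.rolling(window=5).std() * np.sqrt(365)   # annualised rolling volatility

print("mean daily return           :", round(returns.mean(), 4))
print("latest 5-day ann. volatility:", round(rolling_vol.iloc[-1], 3))
```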
Posted 3 weeks ago
The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.
Here are 5 major cities in India actively hiring for clustering roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi
The average salary range for clustering professionals in India varies based on experience levels. Entry-level positions may start at around INR 3-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-20 lakhs per annum.
In the field of clustering, a typical career path may look like:
- Junior Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead
Apart from expertise in clustering, professionals in this field are often expected to have skills in:
- Machine Learning
- Data Analysis
- Python/R programming
- Statistics
Here are 25 interview questions for clustering roles (a short scikit-learn sketch illustrating several of them follows the list):
- What is clustering and how does it differ from classification? (basic)
- Explain the K-means clustering algorithm. (medium)
- What are the different types of distance metrics used in clustering? (medium)
- How do you determine the optimal number of clusters in K-means clustering? (medium)
- What is the Elbow method in clustering? (basic)
- Define hierarchical clustering. (medium)
- What is the purpose of clustering in machine learning? (basic)
- Can you explain the difference between supervised and unsupervised learning? (basic)
- What are the advantages of hierarchical clustering over K-means clustering? (advanced)
- How does the DBSCAN clustering algorithm work? (medium)
- What is the curse of dimensionality in clustering? (advanced)
- Explain the concept of silhouette score in clustering. (medium)
- How do you handle missing values in clustering algorithms? (medium)
- What is the difference between agglomerative and divisive clustering? (advanced)
- How would you handle outliers in clustering analysis? (medium)
- Can you explain the concept of cluster centroids? (basic)
- What are the limitations of K-means clustering? (medium)
- How do you evaluate the performance of a clustering algorithm? (medium)
- What is the role of inertia in K-means clustering? (basic)
- Describe the process of feature scaling in clustering. (basic)
- How does the GMM algorithm differ from K-means clustering? (advanced)
- What is the importance of feature selection in clustering? (medium)
- How can you assess the quality of clustering results? (medium)
- Explain the concept of cluster density in DBSCAN. (advanced)
- How do you handle high-dimensional data in clustering? (medium)
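Several of these questions (K-means, inertia, the Elbow method, silhouette scores, feature scaling) can be demonstrated in a few lines of scikit-learn; a minimal sketch on synthetic data:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic 2-D data with four natural clusters (illustrative only)
X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.2, random_state=42)
X = StandardScaler().fit_transform(X)   # feature scaling matters for distance-based methods

# Elbow method: watch inertia (within-cluster sum of squares) flatten as k grows,
# and compare against the silhouette score for each candidate k
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=42).fit(X)
    sil = silhouette_score(X, km.labels_)
    print(f"k={k}  inertia={km.inertia_:.1f}  silhouette={sil:.3f}")
```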
As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!