
3003 Clustering Jobs

JobPe aggregates results for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Before applying for a job, select your preferred language from the options available at the top right of this page. Discover your next opportunity within an organization ranked among the world's 500 largest companies. Explore innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.

Job Summary

This position participates in the design, build, test, and delivery of Machine Learning (ML) models and software components that solve challenging business problems for the organization, working in collaboration with the Business, Product, Architecture, Engineering, and Data Science teams. It engages in assessment and analysis of structured and unstructured data sources (internal and external) to uncover opportunities for ML and Artificial Intelligence (AI) automation, predictive methods, and quantitative modeling across the organization. It establishes and configures scalable, cost-effective end-to-end solution design pattern components to support prediction model transactions, designs trials and tests to measure the success of software and systems, and works with teams, or individually, to implement ML/AI models at production scale.

Responsibilities

The MLOps developer maintains existing models that support applications such as the digital insurance application and the claims recommendation engine.
They will be responsible for setting up cloud monitoring jobs and performing quality assurance and testing of edge cases to ensure the ML product works within the application. They will also need to be on call on weekends to bring the application back online in case of failure.

- Studies and transforms data science prototypes into ML systems using appropriate datasets and data representation models.
- Researches and implements appropriate ML algorithms and tools that create new systems and processes powered by ML and AI techniques, according to business requirements.
- Collaborates with others to deliver ML products and systems for the organization.
- Designs workflows and analysis tools to streamline the development of new ML models at scale.
- Creates and evolves ML models and software that enable state-of-the-art intelligent systems, using best practices across the engineering and modelling lifecycles.
- Extends existing ML libraries and frameworks with developments from the Data Science and Machine Learning field.
- Establishes, configures, and supports scalable Cloud components that serve prediction model transactions.
- Integrates data from authoritative internal and external sources to form the foundation of a new Data Product that delivers the insights and business outcomes necessary for ML systems.

Qualifications

- Ability to code in Python/Spark, with enough knowledge of Apache Beam to build Beam jobs in Dataproc for data transfer.
- Experience designing and building data-intensive solutions using distributed computing within a multi-line business environment.
- Familiarity with Machine Learning and Artificial Intelligence frameworks (e.g., Keras, PyTorch), libraries (e.g., scikit-learn), and Cloud-AI technologies that streamline the development of Machine Learning and AI systems.
- Experience establishing and configuring scalable, cost-effective end-to-end solution design pattern components to support serving batch and live streaming prediction model transactions.
- Creative and critical thinking skills.
- Experience developing Machine Learning models such as classification/regression models, NLP models, and deep learning models, with a focus on productionizing those models into product features.
- Experience with scalable data processing, feature development, and model optimization.
- Solid understanding of statistics (forecasting, time series, hypothesis testing, classification, clustering, regression analysis) and how to apply that knowledge to understanding and evaluating Machine Learning models.
- Knowledge of the software development lifecycle (SDLC), Agile development practices, and cloud technology infrastructures and patterns related to product development.
- Advanced math skills in linear algebra, Bayesian statistics, and group theory.
- Works collaboratively, in both technical and cross-functional contexts.
- Strong written and verbal communication.
- Bachelor's (BS/BA) degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Contract type: permanent (CDI). At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
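The clustering skill this posting asks for can be illustrated with a minimal, dependency-free k-means sketch in plain Python; real pipelines would use scikit-learn or Spark MLlib, and the data points here are invented for illustration.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: repeatedly assign points to the nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if no points were assigned to it
                centroids[i] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return centroids, clusters

# Two well-separated blobs; the centroids should end up near (0.1, 0.1) and (10.0, 10.0).
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (10.0, 10.1), (10.2, 9.9), (9.9, 10.0)]
centroids, clusters = kmeans(data, k=2)
```

The fixed seed keeps initialization deterministic; production code would also handle convergence checks and multiple restarts.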

Posted 18 hours ago


3.0 years

6 - 7 Lacs

Hyderābād

Remote

Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us .

Roles & Responsibilities
- Develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions for HR stakeholders
- Partner with senior analysts to build visualizations that communicate insights and recommendations to stakeholders at various levels of the organization
- Partner with HR senior analysts to implement statistical models, decision support models, and optimization techniques to solve complex business problems
- Collaborate with cross-functional teams to gather and analyse data, define problem statements, and identify KPIs for decision-making
- Perform and document data analysis, data validation, and data mapping/design
- Collaborate with HR stakeholders to understand business objectives and translate them into projects and actionable recommendations
- Stay up to date with industry trends, emerging methodologies, and best practices in reporting, analytics, visualization, and decision support

The HR Data Analyst will play a critical role in ensuring the availability and integrity of HR data to drive informed decision-making.
Skills and competencies
- Strong analytical thinking and problem-solving skills, with working knowledge of statistical analysis, optimization techniques, and decision support models.
- Ability to present complex information to non-technical stakeholders in a clear and concise manner; skilled in creating relevant and engaging PowerPoint presentations.
- Proficiency in data analysis techniques, including the use of Tableau, ETL tools (Python, R, Domino), and statistical software packages.
- Advanced skills in Power BI, Power Query, DAX, and data visualization best practices.
- Experience with data modelling, ETL processes, and connecting to various data sources.
- Solid understanding of SQL and relational databases.
- Exceptional attention to detail, with the ability to proactively detect data anomalies and ensure data accuracy.
- Ability to work collaboratively in cross-functional teams and manage multiple projects simultaneously.
- Strong capability to work with large datasets, ensuring the accuracy and reliability of analyses.
- Strong business acumen, with the ability to translate analytical findings into actionable insights and recommendations.
- Working knowledge of data modelling to support analytics needs.
- Experience conducting thorough Exploratory Data Analysis (EDA) to summarize, visualize, and validate data quality and trends.
- Ability to apply foundational data science or basic machine learning techniques (such as regression, clustering, or forecasting) when appropriate.

Experience
- Bachelor's or master's degree in a relevant field such as Statistics, Mathematics, Economics, Operations Research or a related discipline.
- 3+ years of total relevant experience.
- Business experience with visualization tools (e.g., Power BI).
- Experience with data querying languages (e.g., SQL) and scripting languages (Python).
- Problem-solving skills with understanding and practical experience across most statistical modelling and machine learning techniques.
- Academic knowledge alone is also acceptable.
- Ability to handle and maintain the confidentiality of highly sensitive information.
- Experience initiating and completing analytical projects with minimal guidance.
- Experience communicating the results of analysis using compelling and persuasive oral and written storytelling techniques.
- Hands-on experience working with large datasets, statistical software packages (e.g., R, Python), and data visualization tools such as Tableau and Power BI.
- Experience with ETL processes, writing complex SQL queries, and data manipulation techniques.
- Experience in HR analytics is a nice-to-have.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™ , every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: site-essential roles require 100% of shifts onsite at your assigned facility, while site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility.
For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com . Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
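The regression and forecasting techniques this HR analyst role calls for can be sketched, at their simplest, with a hand-rolled ordinary-least-squares fit in plain Python; the monthly headcount numbers below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, via the closed-form solution."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# Hypothetical monthly headcount trend for months 1..5.
months = [1, 2, 3, 4, 5]
headcount = [100, 103, 107, 110, 114]
slope, intercept = fit_line(months, headcount)
forecast_month6 = slope * 6 + intercept
```

In practice an analyst would reach for R, Python's statistics/statsmodels tooling, or Power BI's built-in trend lines; the point here is only the underlying calculation.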

Posted 18 hours ago


3.0 - 6.0 years

5 - 15 Lacs

Hyderābād

Remote

Job Title: Data Scientist – Python
Experience: 3 to 6 Years
Location: Remote
Job Type: Full-Time
Education: B.E./B.Tech or M.Tech in Computer Science, Data Science, Statistics, or a related field

Job Summary
We are seeking a talented and results-driven Data Scientist with 3–6 years of experience in Python-based data science workflows. This is a remote, full-time opportunity for professionals who are passionate about solving real-world problems using data and statistical modeling. The ideal candidate should be highly proficient in Python and have hands-on experience with data exploration, machine learning, model deployment, and working with large datasets.

Key Responsibilities
- Analyze large volumes of structured and unstructured data to generate actionable insights
- Design, develop, and deploy machine learning models using Python and related libraries
- Collaborate with cross-functional teams including product, engineering, and business to define data-driven solutions
- Develop data pipelines and ensure data quality, consistency, and reliability
- Create and maintain documentation for methodologies, code, and processes
- Communicate findings and model results clearly to technical and non-technical stakeholders
- Continuously research and implement new tools, techniques, and best practices in data science

Required Skills & Qualifications
- 3–6 years of experience in a data science role using Python
- Proficiency in Python data science libraries (Pandas, NumPy, scikit-learn, Matplotlib, Seaborn)
- Strong statistical analysis and modeling skills
- Experience with machine learning algorithms (classification, regression, clustering, etc.)
- Familiarity with model evaluation, tuning, and deployment techniques
- Hands-on experience with SQL and working with large databases
- Exposure to cloud platforms (AWS, GCP, or Azure) is a plus
- Experience with version control (Git), Jupyter notebooks, and collaborative data tools

Preferred Qualifications
- Advanced degree (Master's preferred) in Computer Science, Data Science, Statistics, or a related discipline
- Experience with deep learning frameworks like TensorFlow or PyTorch is a plus
- Familiarity with MLOps tools such as MLflow, Airflow, or Docker
- Experience in remote team collaboration and agile project environments

What We Offer
- 100% remote work with flexible hours
- Competitive compensation package
- Access to cutting-edge tools and real-world projects
- A collaborative and inclusive work culture
- Opportunities for continuous learning and professional development

Job Type: Full-time
Pay: ₹500,000.00 - ₹1,500,000.00 per year
Schedule: Day shift, Monday to Friday, Morning shift
Work Location: In person
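The model-evaluation familiarity asked for above can be illustrated with a minimal, dependency-free computation of precision, recall, and F1 for a binary classifier; the label vectors are made up, and real work would use scikit-learn's metrics module.

```python
def classification_metrics(y_true, y_pred):
    """Precision, recall, and F1 for binary labels, from raw label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Invented ground-truth and predicted labels for eight examples.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f1 = classification_metrics(y_true, y_pred)
```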

Posted 18 hours ago


7.0 years

6 - 9 Lacs

Thiruvananthapuram

On-site

7 - 9 Years, 2 Openings, Trivandrum

Role description
Senior Data Engineer – Azure/Snowflake Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 7+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: AWS, Azure Data Lake, Python

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations.
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
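The Redshift-to-Snowflake schema conversion this role describes can be sketched as a small type-mapping helper. The mapping table below is illustrative only, a handful of common Redshift types and plausible Snowflake targets, not an official conversion matrix.

```python
# Hypothetical mapping of a few Redshift column types to Snowflake equivalents.
# The choices here are illustrative assumptions, not a complete conversion table.
TYPE_MAP = {
    "VARCHAR": "VARCHAR",
    "INT4": "INTEGER",
    "INT8": "BIGINT",
    "FLOAT8": "DOUBLE",
    "BOOL": "BOOLEAN",
    "TIMESTAMP": "TIMESTAMP_NTZ",
}

def translate_ddl(columns):
    """Render a Snowflake CREATE TABLE body from (name, redshift_type) pairs,
    falling back to the original type name when no mapping is known."""
    lines = [f"  {name} {TYPE_MAP.get(rs_type, rs_type)}" for name, rs_type in columns]
    return ",\n".join(lines)

ddl_body = translate_ddl([("id", "INT8"), ("ts", "TIMESTAMP"), ("active", "BOOL")])
```

A real migration would also carry over defaults, constraints, encodings, and distribution/sort-key decisions, which have no one-to-one Snowflake equivalent.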

Posted 18 hours ago


9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years, 1 Opening, Trivandrum

Role description
Tech Lead – Azure/Snowflake & AWS Migration

Key Responsibilities
- Design and develop scalable data pipelines using Snowflake as the primary data platform, integrating with tools like Azure Data Factory, Synapse Analytics, and AWS services.
- Build robust, efficient SQL and Python-based data transformations for cleansing, enrichment, and integration of large-scale datasets.
- Lead migration initiatives from AWS-based data platforms to a Snowflake-centered architecture, including:
  - Rebuilding AWS Glue pipelines in Azure Data Factory or using Snowflake-native ELT approaches.
  - Migrating EMR Spark jobs to Snowflake SQL or Python-based pipelines.
  - Migrating Redshift workloads to Snowflake with schema conversion and performance optimization.
  - Transitioning S3-based data lakes (Hudi, Hive) to Snowflake external tables via ADLS Gen2 or Azure Blob Storage.
  - Redirecting Kinesis/MSK streaming data to Azure Event Hubs, followed by ingestion into Snowflake using Streams & Tasks or Snowpipe.
- Support database migrations from AWS RDS (Aurora PostgreSQL, MySQL, Oracle) to Snowflake, focusing on schema translation, compatibility handling, and data movement at scale.
- Design modern Snowflake lakehouse-style architectures that incorporate raw, staging, and curated zones, with support for time travel, cloning, zero-copy restore, and data sharing.
- Integrate Azure Functions or Logic Apps with Snowflake for orchestration and trigger-based automation.
- Implement security best practices, including Azure Key Vault integration and Snowflake role-based access control, data masking, and network policies.
- Optimize Snowflake performance and costs using clustering, multi-cluster warehouses, materialized views, and result caching.
- Support CI/CD processes for Snowflake pipelines using Git, Azure DevOps or GitHub Actions, and SQL code versioning.
- Maintain well-documented data engineering workflows, architecture diagrams, and technical documentation to support collaboration and long-term platform maintainability.

Required Qualifications
- 9+ years of data engineering experience, with 3+ years on the Microsoft Azure stack and hands-on Snowflake expertise.
- Proficiency in:
  - Python for scripting and ETL orchestration
  - SQL for complex data transformation and performance tuning in Snowflake
  - Azure Data Factory and Synapse Analytics (SQL Pools)
- Experience migrating workloads from AWS to Azure/Snowflake, including services such as Glue, EMR, Redshift, Lambda, Kinesis, S3, and MSK.
- Strong understanding of cloud architecture and hybrid data environments across AWS and Azure.
- Hands-on experience with database migration, schema conversion, and tuning in PostgreSQL, MySQL, and Oracle RDS.
- Familiarity with Azure Event Hubs, Logic Apps, and Key Vault.
- Working knowledge of CI/CD, version control (Git), and DevOps principles applied to data engineering workloads.

Preferred Qualifications
- Extensive experience with Snowflake Streams, Tasks, Snowpipe, external tables, and data sharing.
- Exposure to MSK-to-Event Hubs migration and streaming data integration into Snowflake.
- Familiarity with Terraform or ARM templates for Infrastructure-as-Code (IaC) in Azure environments.
- Certification such as SnowPro Core, Azure Data Engineer Associate, or equivalent.

Skills: Azure, AWS Redshift, Athena, Azure Data Lake

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations.
With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
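The Streams & Tasks ingestion pattern named in this role's responsibilities boils down to change tracking: each read of a Snowflake stream returns only the rows changed since the last consumption. A toy, pure-Python stand-in (no Snowflake connection; the offset-based bookkeeping is the idea being sketched) looks like this:

```python
class StreamSim:
    """Toy stand-in for a Snowflake table stream: each consume() returns only
    rows appended since the previous consume(), tracked via a saved offset."""
    def __init__(self):
        self.table = []
        self.offset = 0

    def insert(self, rows):
        self.table.extend(rows)

    def consume(self):
        new_rows = self.table[self.offset:]
        self.offset = len(self.table)  # advance the change-tracking offset
        return new_rows

s = StreamSim()
s.insert([{"id": 1}, {"id": 2}])
first = s.consume()   # both rows are new at this point
s.insert([{"id": 3}])
second = s.consume()  # only the row added after the first read
```

In Snowflake the equivalent would be a stream on the landing table with a scheduled task that merges the delta into a curated table; this sketch only mirrors the exactly-once-per-read consumption semantics.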

Posted 18 hours ago


6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Requisition ID: 1631551

The opportunity
EY is looking for a Senior Consultant/Consultant, Analytics, with expertise in one of the following industries: Banking, Insurance (industry expertise is not mandatory).

Your Key Responsibilities
- Develop analytics-based decision-making frameworks for clients across the Banking and Insurance sectors
- Project management
- Client management
- Support business development and new analytics solution development activities

Skills And Attributes For Success
- Domain expertise in one of the following industries: Banking, Insurance (not mandatory)
- Statistical modelling (logistic/linear regression, GLM modelling, time-series forecasting, scorecard development, etc.)
- Hands-on experience in one or more statistics tools: SAS, Python, R
- Experience in Tableau or QlikView would be a plus
- Data mining experience: clustering, segmentation
- Machine learning and Python experience would be a plus

To qualify for the role you must have
- B.Tech from a top-tier engineering school, or a Masters in Statistics/Economics from a top university
- Minimum 6 years of relevant experience, with a minimum of 1 year of managerial experience, for Senior Consultant
- Minimum 1 year for Associate Consultant; minimum 3 years for Consultant

Ideally you'll also have
- Strong communication, facilitation, relationship-building, presentation and negotiation skills
- High flexibility, adaptability, and creativity
- Comfort interacting with senior executives (within the firm and at the client)
- Strong leadership skills and supervisory responsibility

What We Look For
People with the ability to work in a collaborative way to provide services across multiple client departments while adhering to commercial and legal requirements. You will need a practical approach to solving issues and complex problems with the ability to deliver insightful and practical solutions.

What Working At EY Offers
EY is committed to being an inclusive employer and we are happy to consider flexible working arrangements.
We strive to achieve the right balance for our people, enabling us to deliver excellent client service whilst allowing you to build your career without sacrificing your personal priorities. While our client-facing professionals can be required to travel regularly, and at times be based at client sites, our flexible working arrangements can help you to achieve a lifestyle balance.

About EY
As a global leader in assurance, tax, transaction and advisory services, we're using the finance products, expertise and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we'll make our ambition to be the best employer by 2020 a reality.

If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible.
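The time-series forecasting skill this EY role lists can be illustrated with simple exponential smoothing, a minimal sketch with made-up demand numbers; client engagements would use SAS, R, or Python statistics packages rather than hand-rolled code.

```python
def exp_smooth(series, alpha=0.5):
    """Simple exponential smoothing: each level blends the newest observation
    with the previous level; the final level is the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Invented demand history; alpha controls how fast old observations fade.
demand = [20, 22, 21, 24, 23]
forecast = exp_smooth(demand, alpha=0.5)
```

A higher alpha weights recent observations more heavily; alpha is usually chosen by minimizing in-sample forecast error.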

Posted 18 hours ago


5.0 - 7.0 years

0 Lacs

Thiruvananthapuram

Remote

5 - 7 Years, 1 Opening, Trivandrum

Role description
Role Proficiency: Resolve enterprise trouble tickets within agreed SLA, raise problem tickets for permanent resolution, and/or provide mentorship (hierarchical or lateral) to junior associates.

Outcomes:
1) Update SOPs with updated troubleshooting instructions and process changes
2) Mentor new team members in understanding customer infrastructure and processes
3) Perform analysis for driving incident reduction
4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution
5) Contribute to planning and successful migration of platforms
6) Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution
7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions

Measures of Outcomes:
1) SLA adherence
2) Time-bound resolution of elevated tickets (OLA)
3) Manage ticket backlog timelines (OLA)
4) Adhere to defined process: number of NCs in internal/external audits
5) Number of KB articles created
6) Number of incidents and change tickets handled
7) Number of elevated tickets resolved
8) Number of successful change tickets
9) % completion of all mandatory training requirements

Outputs Expected:
- Resolution: Understand priority and severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA; execute change control tickets as documented in the implementation plan.
- Troubleshooting: Troubleshoot based on available information from previous tickets or by consulting with seniors; participate in online knowledge forums for reference; convert new steps into KB articles; perform logical/analytical troubleshooting.
- Escalation/Elevation: Escalate within the organization/customer peers in case of resolution delay; understand OLAs between delivery layers (L1, L2, L3, etc.) and adhere to them.
- Elevate to the next level; work on elevated tickets from L1.
- Tickets Backlog/Resolution: Follow up on tickets based on agreed timelines; manage ticket backlogs/last activity as per the defined process; resolve incidents and SRs within agreed timelines; execute change tickets for infrastructure.
- Installation: Install and configure tools, software, and patches.
- Runbook/KB: Update the KB with new findings; document and record troubleshooting steps in the knowledge base.
- Collaboration: Collaborate with different delivery towers for ticket resolution (within SLA, resolve L1 tickets with help from the respective tower); collaborate with other team members for timely resolution of tickets; actively participate in team/organization-wide initiatives; coordinate with UST ISMS teams to resolve connectivity-related issues.
- Stakeholder Management: Lead customer calls and vendor calls; organize meetings with different stakeholders; take ownership of the function's internal communications and related change management.
- Strategic: Define the strategy for data management, policy management, and data retention management; support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked, and updated for the area owned.
- Process Adherence: Thorough understanding of organization- and customer-defined processes; suggest process improvements and CSI ideas; adhere to the organization's policies and business conduct.
- Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate any issues in service delivery within the function or across functions; take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance.
- Process Implementation: Coordinate and monitor IT process implementation within the function.
- Compliance: Support information governance activities and audit preparations within the function.
Act as a function SPOC for IT audits in local sites (incl. preparation, interface to the local organization, mitigation of findings, etc.) and work closely with ISRM (Information Security Risk Management). Coordinate overall objective-setting preparation and facilitate the process to achieve consistent objective setting within the function.
Coordination: Support CSI across all services in CIS and beyond.
Training: On-time completion of all mandatory training requirements of the organization and customer. Provide on-floor training and one-to-one mentorship for new joiners. Complete certification of the respective career paths.
Performance Management: Update FAST goals in NorthStar; track and report them, and seek continuous feedback from peers and manager. Set goals for team members and mentees and provide feedback. Assist new team members in understanding the customer environment.

Skill Examples:
1) Good communication skills (written, verbal, and email etiquette) to interact with different teams and customers.
2) Modify/create runbooks based on suggested changes from juniors or newly identified steps.
3) Ability to work on an elevated server ticket and resolve it.
4) Networking:
a. Troubleshooting skills in static and dynamic routing protocols.
b. Capable of running NetFlow analyzers across different product lines.
5) Server:
a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS, and patch management.
b. Excellent troubleshooting skills in technologies such as AD replication and DNS issues.
c. Skills in managing high-availability solutions such as failover clustering and VMware clustering.
6) Storage and Backup:
a. Ability to give recommendations to customers; perform storage and backup enhancements; perform change management.
b. Skilled in core fabric technology, storage design, and implementation; hands-on experience with backup and storage command-line interfaces.
c. Perform hardware upgrades, firmware upgrades, vulnerability remediation, storage and backup commissioning and decommissioning, and replication setup and management.
d. Skilled in server, network, and virtualization technologies; integration of virtualization, storage, and backup technologies.
e. Review technical and architecture diagrams and modify SOPs and documentation based on business requirements.
f. Ability to perform ITSM functions for the storage and backup team and review the quality of the ITSM process followed by the team.
7) Cloud:
a. Skilled in any one of the cloud technologies: AWS, Azure, GCP.
8) Tools:
a. Skilled in administration and configuration of monitoring tools such as CA UIM, SCOM, SolarWinds, Nagios, and ServiceNow.
b. Skilled in SQL scripting.
c. Skilled in building custom reports on availability and performance of IT infrastructure based on customer requirements.
9) Monitoring:
a. Skills in monitoring infrastructure and application components.
10) Database:
a. Data modeling and database design; database schema creation and management.
b. Identify data integrity violations so that only accurate and appropriate data is entered and maintained.
c. Backup and recovery.
d. Web-specific technical expertise for e-Biz, Cloud, etc. (examples include XML, CGI, Java, Ruby, firewalls, SSL, and so on).
e. Migrating database instances to new hardware and new software versions, from on-premise to cloud-based databases and vice versa.
11) Quality Analysis:
a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations.

Knowledge Examples:
1) Good understanding of customer infrastructure and related CIs.
2) ITIL Foundation certification.
3) Thorough hardware knowledge.
4) Basic understanding of capacity planning.
5) Basic understanding of storage and backup.
6) Networking:
a. Hands-on experience with routers, switches, and firewalls.
b. Minimum knowledge of, and hands-on experience with, BGP.
c. Good understanding of load balancers and WAN optimizers.
d. Advanced backup and restore knowledge in backup tools.
7) Server:
a. Basic to intermediate PowerShell/Bash/Python scripting knowledge and demonstrated experience with script-based tasks.
b. Knowledge of AD Group Policy management, Group Policy tools, and troubleshooting GPOs.
c. Basic AD object creation; DNS concepts; DHCP; DFS.
d. Knowledge of tools such as SCCM and SCOM administration.
8) Storage and Backup:
a. Subject matter expert in any of the storage and backup technologies.
9) Tools:
a. Proficient in understanding and troubleshooting the Windows and Linux families of operating systems.
10) Monitoring:
a. Strong knowledge of ITIL processes and functions.
11) Database:
a. Knowledge of general database management.
b. Knowledge of OS, system, and networking skills.

Additional Comments:
Role: Cloud Engineer
Primary Responsibilities
• Engineer and support a portfolio of tools including:
o HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform
o GitHub Enterprise Cloud (Actions, Advanced Security, Copilot)
o Ansible Automation Platform, Env0, Docker Desktop
o Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport
• Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell
• Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas
• Design and implement automation for self-service adoption, access provisioning, and compliance monitoring
• Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows
• Participate in Agile sprints, sprint planning, and cross-team technical initiatives
• Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage)
Key Projects You May Lead or Support
• GitHub secrets scanning and remediation with integration to HashiCorp Vault
• Lifecycle management of developer access across tools like GitHub and Teleport
• Upgrades to container orchestration environments and automation platforms (EKS, AKS)
Technical Skills and Experience
• Proficiency with Terraform (IaC) and Ansible
• Strong scripting experience in Python, PowerShell, or Bash
• Experience operating in cloud environments (AWS, Azure, or GCP)
• Familiarity with secure development practices and DevSecOps tooling
• Exposure to or experience with:
o CI/CD automation (GitHub Actions)
o Monitoring and incident management platforms (Datadog, PagerDuty)
o Identity providers (AzureAD, Okta)
o Containers and orchestration (Docker, Kubernetes)
o Secrets management and vaulting platforms
Soft Skills and Attributes
• Strong cross-functional communication skills with technical and non-technical stakeholders
• Ability to work independently while knowing when to escalate or align with other engineers or teams
• Comfort managing complexity and ambiguity in a fast-paced environment
• Ability to balance short-term support needs with longer-term infrastructure automation and optimization
• Proactive, service-oriented mindset focused on enabling secure and scalable development
• Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability
Skills: Terraform, Ansible, Python, PowerShell or Bash, AWS, Azure or GCP, CI/CD automation
About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
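The custom availability reporting called for under Tools (item 8c) reduces to simple time arithmetic over incident records. The sketch below is illustrative only; the reporting window, the outage record, and the function name are invented for the example, and real reports here would be driven by the monitoring tools named above.

```python
from datetime import datetime

def availability_pct(period_start, period_end, outages):
    """Percentage of the reporting period a service was up.
    `outages` is a list of (start, end) datetime pairs (hypothetical data)."""
    period = (period_end - period_start).total_seconds()
    down = sum(
        (min(end, period_end) - max(start, period_start)).total_seconds()
        for start, end in outages
        if end > period_start and start < period_end
    )
    return 100.0 * (period - down) / period

start = datetime(2024, 1, 1)
end = datetime(2024, 2, 1)  # 31-day reporting window
# One invented 4-hour outage on Jan 10
outages = [(datetime(2024, 1, 10, 3, 0), datetime(2024, 1, 10, 7, 0))]
print(round(availability_pct(start, end, outages), 3))  # 99.462
```

Clamping each outage to the reporting window (the `min`/`max` pair) keeps incidents that straddle a month boundary from being counted twice across reports.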

Posted 18 hours ago

Apply

0 years

0 Lacs

Gurgaon

On-site

Role: DC Lead
Location: Gurgaon. Sammaan Capital's corporate office at Augusta Point is located at 4th Floor, Augusta Point, Golf Course Road, DLF Phase-5, Sector-53, Gurugram, Haryana 122002, India.
Experience: 8-9 years
Budget: 10-12 LPA
Working Days: 6 (work from office)
Looking for immediate joiners only.
Job Description – Data Centre Lead / Data Centre Operations Manager
1. Windows Server Administration: Windows Server (2016/2019/2022) installation, configuration, and troubleshooting; Active Directory (AD) management, Group Policy, and Domain Controllers; DNS, DHCP, and network services configuration; PowerShell scripting for automation
2. Virtualization & Cloud: Hyper-V and VMware administration; virtual machine (VM) provisioning and maintenance
3. Security & Compliance: Patch management and Windows Update services (WSUS); endpoint security, antivirus, and malware protection; compliance with IT security frameworks (ISO 27001, NIST, GDPR)
4. Monitoring & Performance Optimization: Performance tuning and resource optimization; monitoring tools (ME, Zabbix); troubleshooting high CPU, memory, disk, and network utilization issues
5. High Availability & Disaster Recovery: Failover clustering and load balancing; disaster recovery planning and execution; Windows Server backup and restore strategies
6. Incident & Problem Management: ITIL framework and service management best practices; RCA (Root Cause Analysis) and incident handling
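A first-pass disk-utilization check of the kind this role troubleshoots can be scripted in a few lines. This is a hedged sketch, not production monitoring (which would go through the monitoring tools named above); the 85% warning threshold and the function name are invented for illustration.

```python
import shutil

def disk_alert(path, warn_pct=85.0):
    """Return (used_pct, alert) for a mount point.
    85% is an arbitrary example threshold, not a site standard."""
    usage = shutil.disk_usage(path)  # named tuple: total, used, free (bytes)
    used_pct = 100.0 * usage.used / usage.total
    return used_pct, used_pct >= warn_pct

pct, alert = disk_alert("/")
print(f"/ is {pct:.1f}% full; alert={alert}")
```

The same shape (measure, compare to threshold, flag) generalizes to CPU and memory checks via `os.getloadavg()` or an agent's metrics endpoint.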

Posted 18 hours ago

Apply

8.0 years

0 Lacs

India

On-site

Business Summary The Deltek Global Cloud team focuses on the delivery of first-class services and solutions for our customers. We are an innovative and dynamic team that is passionate about transforming the Deltek cloud services that power our customers' project success. Our diverse, global team works cross-functionally to make an impact on the business. If you want to work in a transformational environment, where education and training are encouraged, consider Deltek as the next step in your career! Position Responsibilities As a Senior Manager for the DevOps Engineering and Automation team, you will lead a team of skilled DevOps engineers responsible for automating infrastructure provisioning, configuration, and CI/CD pipelines for a portfolio of Enterprise solutions. With a strong DevOps transformational background, you will leverage your expertise in DevOps practices and tools and public clouds (AWS, OCI) to develop strategic initiatives that enhance the efficiency, scalability, and reliability of our deployment processes. Additionally, you will have significant experience in people management, strategy development, and cross-functional collaboration. Key Responsibilities: Strategic Leadership: Help develop and implement a strategic roadmap for DevOps practices, automation, and infrastructure management. Identify and prioritize opportunities for process improvements, cost efficiencies, and technological advancements. Collaborate with senior leadership to align DevOps strategies with business objectives and goals. Team Management: Lead, mentor, and develop a team of DevOps engineers, fostering a culture of collaboration, innovation, and continuous improvement. Manage team performance, set clear goals, and provide regular feedback and professional development opportunities. Recruit and onboard top talent to build a high-performing DevOps team. 
Infrastructure Provisioning and Configuration: Oversee the development and maintenance of infrastructure as code (IaC) using Terraform for provisioning cloud resources. Ensure the creation and maintenance of Ansible playbooks for automated configuration and management of infrastructure and applications. Implement best practices for infrastructure scalability, security, and cost management. CI/CD Pipeline Implementation: Guide and support the design, implementation, and management of CI/CD pipelines to automate the build, testing, and deployment of applications and services. Ensure integration of CI/CD pipelines with version control systems, build tools, and monitoring solutions. Promote practices that support automated testing, security scans, and compliance checks. Cloud Deployment and Management: Direct the deployment and management of applications and services in public cloud environments such as AWS and OCI. Utilize cloud-native services and tools to enhance application performance and reliability. Implement robust monitoring, troubleshooting, and disaster recovery solutions for cloud deployments. Cross-Functional Collaboration: Work closely with Engineering and Delivery stakeholders to ensure alignment and successful deployments. Facilitate design and code reviews, ensuring adherence to high standards of quality and performance. Drive cross-functional initiatives to improve process efficiency and project outcomes. Qualifications Qualifications: Education: Bachelor’s degree in Computer Science (strongly preferred), Information Technology, or a related field. Master’s degree preferred. Experience: Minimum of 8 years of experience in DevOps, cloud infrastructure, and automation, with at least 3 years in a leadership role. Skills: Expertise in Infrastructure and automated configuration tools for infrastructure provisioning or automated configuration management. 
Proven experience in designing and implementing CI/CD pipelines using tools such as Jenkins, Azure DevOps, GitLab CI, or CircleCI. Extensive hands-on experience with AWS and OCI, including services like EC2, S3, Lambda, VCN, and OCI Compute. Strong understanding of containerization and orchestration tools like Docker and Kubernetes. Knowledge of Oracle and SQL Server, including clustering, replication, partitioning, and indexing. Excellent scripting skills in languages such as Python, Bash, or PowerShell. Proficiency in monitoring and logging tools like Prometheus, Grafana, ELK stack, or CloudWatch. Strong leadership, communication, and interpersonal skills. Preferred Qualifications: Certifications: AWS Certified DevOps Engineer, Terraform Certified Associate, or similar.

Posted 19 hours ago

Apply

0 years

3 - 7 Lacs

Chennai

On-site

Position Overview At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here. NTT DATA, Inc. currently seeks a Database Administration Specialist to join our team in India. The Database Administration Specialist will be responsible for the implementation, configuration, maintenance, and performance of critical SQL Server RDBMS systems, to ensure the availability and consistent performance of our client applications. This is a "hands-on" position requiring solid technical skills, as well as excellent interpersonal and communication skills. Position General Duties and Tasks Manage SQL Server databases through multiple product lifecycle environments, from development to mission-critical production systems. Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security. Database performance tuning. Planning, deploying, maintaining, and troubleshooting high-availability database environments (Always-On, Clustering, Mirroring, etc.). Independently analyze, solve, and correct issues in real time, providing problem resolution end-to-end. Refine and automate regular processes, track issues, and document changes, for efficiency and cost optimization. Provide 24x7 support for critical production systems. Perform scheduled maintenance and support release deployment activities after hours. Share domain and technical expertise, providing technical mentorship and cross-training to other peers and team members. Work directly with end customers and business stakeholders as well as technical resources.
Skills & Qualifications 6+ years of experience with SQL Server 2008/2012 and above in database administration, in an enterprise environment with multiple database servers, is required. Experience with Performance Tuning and Optimization (PTO), using native monitoring and troubleshooting tools. Experience with backups, restores, and recovery models. Experience with database administration. Experience with incident and problem queue management. Knowledge of High Availability (HA) and Disaster Recovery (DR) options for SQL Server. Experience working with Windows Server, including Active Directory. Excellent written and verbal communication and problem-solving skills. Flexible, team player, "get-it-done" personality. Ability to work independently, with little or no direct supervision. Ability to work in a rapidly changing environment. Ability to multi-task and context-switch effectively between different activities and teams. Strong ITIL foundation, with Continual Service Improvement and Total Quality Management experience. Nice to Have (Optional): Experience with scripting. Experience in SSIS, SSRS, T-SQL. Experience supporting Logical DBA activities. Experience with complex query tuning and schema refinement. MCTS, MCITP, and/or MVP certifications a plus. Preferred Education: Bachelor's in Computer Science or equivalent.
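The backup-and-restore duties above are SQL Server specific (`BACKUP DATABASE` / `RESTORE DATABASE` in T-SQL). As a runnable stand-in, Python's stdlib `sqlite3` shows the same copy-and-verify pattern; the table and values are invented for the example and do not reflect any NTT DATA system.

```python
import sqlite3

# Source database with some example data (in-memory for the sketch)
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
src.execute("INSERT INTO orders (total) VALUES (19.99), (5.00)")
src.commit()

# "Backup": copy every page of src into a second connection
dst = sqlite3.connect(":memory:")
src.backup(dst)

# "Restore" verification: the copy is queryable and complete
rows = dst.execute("SELECT COUNT(*), SUM(total) FROM orders").fetchone()
print(rows)
```

The verification step matters in either engine: a backup that has never been restored and checked is, operationally, not a backup.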

Posted 19 hours ago

Apply

5.0 - 7.0 years

4 - 7 Lacs

Chennai

On-site

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for working cross-functionally to collect data and develop models to determine trends utilizing a variety of data sources. Retrieves, analyzes and summarizes business, operations, employee, customer and/or economic data in order to develop business intelligence, optimize effectiveness, predict business outcomes, and support decision-making. Involved with numerous key business decisions by conducting the analyses that inform our business strategy. This may include: impact measurement of new products or features via normalization techniques, optimization of business processes through robust A/B testing, clustering or segmentation of customers to identify opportunities for differentiated treatment, deep-dive analyses to understand drivers of key business trends, identification of customer sentiment drivers through natural language processing (NLP) of verbatim responses to Net Promoter System (NPS) surveys, and development of frameworks to drive upsell strategy for existing customers by balancing business priorities with customer activity. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as resource for colleagues with less experience. 
Job Description Core Responsibilities Work with business leaders and stakeholders to understand data and analysis needs and develop technical requirements. Analyzes large, complex data to determine actionable business insights using self-service analytics and reporting tools. Combines data as needed from disparate data sources to complete analysis from multiple sources. Identifies key business drivers and insights by conducting exploratory data analysis and hypothesis testing. Develops forecasting models to predict key business metrics. Analyzes the results of campaigns, offers or initiatives to measure their effectiveness and identifies opportunities for improvement. Communicates findings clearly and concisely through narrative-driven presentations and effective data visualizations to Company executives and decision-makers. Stays current with emerging trends in analytics, statistics, and machine learning and applies them to business challenges. Mandatory Skills: SQL, Tableau, good storytelling capabilities. Nice-to-have skills: PPT creation, Databricks, Spark, LLM. Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. 
We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 5-7 Years
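The robust A/B testing this role calls for often comes down to a two-proportion z-test on conversion counts. The sketch below is illustrative; the visitor and conversion counts are invented example data, not Comcast figures, and real experimentation platforms add corrections (sequential testing, multiple comparisons) this omits.

```python
from math import erf, sqrt

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for an A/B experiment.
    Returns (z, two_sided_p)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # pooled standard error
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented example: 4.0% vs 5.2% conversion on 5,000 visitors per arm
z, p = ab_test_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z={z:.2f}, p={p:.4f}")
```

With p below the usual 0.05 threshold, this invented variant B would be declared a significant lift; in practice the test should be sized for a minimum detectable effect before launch, not read off mid-experiment.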

Posted 19 hours ago

Apply

0 years

0 - 1 Lacs

Coimbatore

On-site

Job Title: Technical Internship – Programmer (AI/ML Focus) Location: Coimbatore, Tamil Nadu Company: Angler Technologies Duration: Internship, with potential for full-time conversion Eligibility: Recent graduates or final-year students from Engineering/Science streams with specialization in Artificial Intelligence and Machine Learning Job Description: We are looking for enthusiastic and innovative AI/ML Interns to join our technical team at Angler Technologies. This is a unique opportunity for fresh graduates to work on real-world AI/ML projects, build intelligent systems, and gain hands-on experience in programming, data science, and modern AI frameworks. Key Responsibilities: Assist in the development, training, and deployment of Machine Learning and AI models. Work on data collection, preprocessing, and annotation tasks. Collaborate with the product and software teams to integrate AI/ML features into applications. Support documentation, testing, and debugging of code modules. Participate in code reviews and knowledge-sharing sessions. Required Skills & Qualifications: Strong foundation in programming (Python, ASP.NET, HTML, CSS, JavaScript preferred). Understanding of basic AI/ML concepts (classification, regression, clustering, NLP, etc.). Analytical thinking and a problem-solving mindset. Good communication and teamwork skills. Eagerness to learn and apply new technologies. Preferred Background: B.E/B.Tech (AI/ML); B.Sc/BCA/M.Sc/MCA with specialization in AI/ML/Data Science. Academic or hobby projects in AI/ML will be a bonus. Perks & Benefits: Stipend based on performance and project contributions. Hands-on training and mentorship. Opportunity to work on live projects. Certificate of Internship & Letter of Recommendation. Potential for full-time employment post internship. Job Types: Full-time, Permanent, Fresher Pay: ₹4,643.26 - ₹8,597.81 per month Benefits: Paid sick time; Paid time off Schedule: Day shift Supplemental Pay: Performance bonus Work Location: In person
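Of the basic AI/ML concepts listed above, clustering is the easiest to sketch from scratch. Here is a minimal one-dimensional k-means in pure Python, written purely as a learning illustration (real projects would use scikit-learn); the sample points are invented.

```python
def kmeans_1d(points, k, iters=20):
    """Minimal 1-D k-means. Initializes centroids to the
    first k distinct values; no convergence check, fixed iterations."""
    centroids = sorted(set(points))[:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        # Update step: each centroid moves to its cluster mean
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d([1.0, 1.5, 2.0, 10.0, 11.0, 12.0], k=2)
print(centroids)  # two centroids, one near 1.5 and one near 11.0
```

The same assign-then-update loop generalizes to higher dimensions by swapping the absolute difference for Euclidean distance.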

Posted 19 hours ago

Apply

4.0 - 5.0 years

5 - 9 Lacs

Noida

On-site

Job Information: Work Experience: 4-5 years Industry: IT Services Job Type: FULL TIME Location: Noida, India Job Overview: We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization. Key Responsibilities: Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources. Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval. Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting. Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights. Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions. Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance. Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools. Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed. Required Skills & Qualifications: Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies. Strong experience with PySpark for large-scale data processing and transformation. 
Expertise in SQL and data modeling for relational and non-relational databases. Experience building and optimizing ETL pipelines and data integration workflows. Familiarity with business intelligence and visualization tools, especially Amazon QuickSight. Knowledge of data governance, security, and compliance best practices. Strong programming skills in Python; experience with automation and scripting. Ability to work collaboratively in agile environments and manage multiple priorities effectively. Excellent problem-solving and communication skills. Preferred Qualifications: AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer). Good-to-Have Skills: Understanding of machine learning, deep learning, and Generative AI concepts, including regression, classification, predictive modeling, and clustering. Interview Process: Internal assessment, followed by 3 technical rounds.
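The extract-transform-load work described above runs on AWS Glue and PySpark in practice; this pure-Python stand-in illustrates a single transform step with a data-quality filter. The CSV sample, field names, and function name are invented for the illustration.

```python
import csv, io, json

# Hypothetical raw export; in production this would be read from S3 by a Glue/PySpark job.
raw = "order_id,amount,country\n1,10.50,IN\n2,,IN\n3,7.25,US\n"

def transform(csv_text):
    """Drop rows with missing amounts, then aggregate revenue per country."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if not row["amount"]:            # basic data-quality filter
            continue
        totals[row["country"]] = totals.get(row["country"], 0.0) + float(row["amount"])
    return totals

print(json.dumps(transform(raw), sort_keys=True))
```

In PySpark the same step would be a `filter` plus a `groupBy(...).sum(...)`; keeping the transform a pure function of its input, as here, is what makes pipelines testable regardless of engine.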

Posted 19 hours ago

Apply

25.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. 
We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do – and they push us to ensure we take care of ourselves, each other, and our communities. Job Description Summary: We are looking for a Sr Product Manager to lead a foundational product area at the heart of PayPal's Conversational AI strategy. This role will be responsible for creating and scaling the AI Insights and Search capabilities that power opportunity discovery and knowledge management for our platform clients launching and managing AI Agents. This leader will drive two critical workstreams: Lead product strategy & vision for the AI Assistant Search and RAG tooling, shaping how we enable platform clients (incl. AI Agents) to access up-to-date knowledge. Lead product strategy & vision for AI Assistant Insights and Discovery – leveraging and expanding the work with AI search to build the pipelines and infrastructure that allow platform clients to make the most of unstructured data (transcripts, feedback) for the purpose of: Opportunity discovery and topic modeling/clustering Automated agentic prompt generation Automated test case generation Customer modeling (simulation) If you're passionate about building intelligent systems that learn from data, thrive at the intersection of AI, product, and platform, and want to make a direct impact on millions of users, this is your role. Job Description: Essential Responsibilities: Uses data to build insights on product requirements consistent with the shared vision for the product. Gathers insights from the customer experience and customer needs as input to product requirements. Analyzes research, market analysis, and usability studies to support data-driven decision making. Monitors product profitability measures, including budget. 
Lead sprint planning, daily standups and retrospectives to drive execution. Interfaces with product leadership as needed. Partners with content developers, data scientists, product designers and user experience researchers to identify new opportunities. Minimum Qualifications: Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience. PayPal does not charge candidates any fees for courses, applications, resume reviews, interviews, background checks, or onboarding. Any such request is a red flag and likely part of a scam. To learn more about how to identify and avoid recruitment fraud please visit https://careers.pypl.com/contact-us. For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits: At PayPal, we’re committed to building an equitable and inclusive global economy. And we can’t do this without our most important asset-you. That’s why we offer benefits to help you thrive in every stage of life. We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. 
To learn more about our benefits please visit https://www.paypalbenefits.com Who We Are: To learn more about our culture and community visit https://about.pypl.com/who-we-are/default.aspx Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at paypalglobaltalentacquisition@paypal.com. Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. For any general requests for consideration of your skills, please join our Talent Community. We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply. REQ ID R0128882
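The search and RAG tooling described in this role ranks knowledge documents by similarity to a query. A toy bag-of-words cosine-similarity retriever sketches the idea; the corpus, query, and function names are invented for illustration, and production RAG systems use learned embeddings and vector stores rather than raw word counts.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, top_k=1):
    """Return the top_k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(d.lower().split())), d) for d in docs]
    return [d for _, d in sorted(scored, reverse=True)[:top_k]]

# Invented toy knowledge base
docs = [
    "how to issue a refund for a disputed payment",
    "setting up two factor authentication",
    "linking a bank account to your wallet",
]
print(retrieve("refund a payment", docs))
```

The retrieved snippet is what a RAG pipeline would then splice into the agent's prompt as grounding context, which is the "access up-to-date knowledge" step named above.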

Posted 19 hours ago

Apply

5.0 years

4 Lacs

Ahmedabad

On-site

We are hiring a Senior Software Development Engineer for our platform. We are helping enterprises and service providers build their AI inference platforms for end users. As a Senior Software Engineer, you will take ownership of backend-heavy, full-stack feature development—building robust services, scalable APIs, and intuitive frontends that power the user experience. You'll contribute to the core of our enterprise-grade AI platform, collaborating across teams to ensure our systems are performant, secure, and built to last. This is a high-impact, high-visibility role working at the intersection of AI infrastructure, enterprise software, and developer experience. Responsibilities: Design, develop and maintain databases, system APIs, system integrations, machine learning pipelines and web user interfaces. Scale algorithms designed by data scientists for deployment in high-performance environments. Develop and maintain continuous integration pipelines to deploy the systems. Design and implement scalable backend systems using Go, C++, and Python. Model and manage data using relational databases (e.g., PostgreSQL, MySQL). Build frontend components and interfaces using TypeScript and JavaScript when needed. Participate in system architecture discussions and contribute to design decisions. Write clean, idiomatic, and well-documented Go code following best practices and design patterns. Ensure high code quality through unit testing, automation, code reviews, and documentation. Communicate technical concepts clearly to both technical and non-technical stakeholders. Qualifications and Criteria: 5–10 years of professional software engineering experience building enterprise-grade platforms. Deep proficiency in Go, with real-world experience building production-grade systems. Solid knowledge of software architecture, design patterns, and clean code principles. Experience in high-level system design and building distributed systems.
Expertise in Python and backend development, with experience in PostgreSQL or similar databases. Hands-on experience with unit testing, integration testing, and TDD in Go. Strong debugging, profiling, and performance optimization skills. Excellent communication and collaboration skills. Hands-on experience with frontend development using JavaScript, TypeScript, and HTML/CSS. Bachelor's degree or equivalent experience in a quantitative field (Computer Science, Statistics, Applied Mathematics, Engineering, etc.). Skills: Understanding of optimisation, predictive modelling, machine learning, clustering and classification techniques, and algorithms. Fluency in a programming language (e.g. C++, Go, Python, JavaScript, TypeScript, SQL). Docker, Kubernetes, and Linux knowledge are an advantage. Experience using Git. Knowledge of continuous integration (e.g. GitLab/GitHub). Basic familiarity with relational databases, preferably PostgreSQL. Strong grounding in applied mathematics. A firm understanding of and experience with the engineering approach. Ability to interact with other team members via code and design documents. Ability to work on multiple tasks simultaneously. Ability to work in high-pressure environments and meet deadlines. Compensation: Commensurate with experience. Position Type: Full-time (in-house). Location: Ahmedabad / Jamnagar, Gujarat, India. Submission Requirements: CV and all academic transcripts. Submit to chintanit22@gmail.com, dipakberait@gmail.com with the name of the position you wish to apply for in the subject line. Job Type: Full-time Pay: From ₹40,000.00 per month Benefits: Paid sick time Location Type: In-person Schedule: Day shift, Monday to Friday Experience: Full-stack development: 5 years (Preferred) Work Location: In person
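The posting asks for hands-on unit testing and TDD. As a minimal sketch of the test-first idea, in Python (one of the stack's listed languages, though the role is Go-heavy); `shipping_cost` and its rates are hypothetical, invented for this illustration only:

```python
# Test-driven development in miniature: the assertions below were written
# first and define the behaviour; the function is the minimal code to pass.
# shipping_cost and its rates are hypothetical, for illustration only.

def shipping_cost(weight_kg, rate_per_kg=5.0, minimum=10.0):
    """Charge per kg, but never less than the minimum fee."""
    if weight_kg < 0:
        raise ValueError("weight must be non-negative")
    return max(weight_kg * rate_per_kg, minimum)

# The "tests", written before the implementation:
assert shipping_cost(1) == 10.0   # minimum charge applies
assert shipping_cost(4) == 20.0   # scales with weight
try:
    shipping_cost(-1)
except ValueError:
    pass
else:
    raise AssertionError("negative weight should be rejected")
```

The same red-green loop applies in Go with the standard `testing` package.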

Posted 19 hours ago

Apply

7.0 years

0 Lacs

India

Remote

Role: Neo4j Engineer Overall IT Experience: 7+ years Relevant experience: (Graph Databases: 4+ years, Neo4j: 2+ years) Location: Remote Company Description Bluetick Consultants is a technology-driven firm that supports hiring remote developers, building technology products, and enabling end-to-end digital transformation. With previous experience in top technology companies such as Amazon, Microsoft, and Craftsvilla, we understand the needs of our clients and provide customized solutions. Our team has expertise in emerging technologies, backend and frontend development, cloud development, and mobile technologies. We prioritize staying up-to-date with the latest technological advances to create a long-term impact and grow together with our clients. Key Responsibilities • Graph Database Architecture: Design and implement Neo4j graph database schemas optimized for fund administration data relationships and AI-powered queries • Knowledge Graph Development: Build comprehensive knowledge graphs connecting entities like funds, investors, companies, transactions, legal documents, and market data • Graph-AI Integration: Integrate Neo4j with AI/ML pipelines, particularly for enhanced RAG (Retrieval-Augmented Generation) systems and semantic search capabilities • Complex Relationship Modeling: Model intricate relationships between Limited Partners, General Partners, fund structures, investment flows, and regulatory requirements • Query Optimization: Develop high-performance Cypher queries for real-time analytics, relationship discovery, and pattern recognition • Data Pipeline Integration: Build ETL processes to populate and maintain graph databases from various data sources including FundPanel.io, legal documents, and external market data using domain specific ontologies • Graph Analytics: Implement graph algorithms for fraud detection, risk assessment, relationship scoring, and investment opportunity identification • Performance Tuning: Optimize graph database performance for 
concurrent users and complex analytical queries • Documentation & Standards: Establish graph modelling standards, query optimization guidelines, and comprehensive technical documentation Key Use Cases You'll Enable • Semantic Search Enhancement: Create knowledge graphs that improve AI search accuracy by understanding entity relationships and context • Investment Network Analysis: Map complex relationships between investors, funds, portfolio companies, and market segments • Compliance Graph Modelling: Model regulatory relationships and fund terms to support automated auditing and compliance validation • Customer Relationship Intelligence: Build relationship graphs for customer relations monitoring and expansion opportunity identification • Predictive Modelling Support: Provide graph-based features for investment prediction and risk assessment models • Document Relationship Mapping: Connect legal documents, contracts, and agreements through entity and relationship extraction Required Qualifications • Bachelor's degree in Computer Science, Data Engineering, or related field • 7+ years of overall IT Experience • 4+ years of experience with graph databases, with 2+ years specifically in Neo4j • Strong background in data modelling, particularly for complex relationship structures • Experience with financial services data and regulatory requirements preferred • Proven experience integrating graph databases with AI/ML systems • Understanding of knowledge graph concepts and semantic technologies • Experience with high-volume, production-scale graph database implementations Technology Skills • Graph Databases: Neo4j (primary), Cypher query language, APOC procedures, Neo4j Graph Data Science library • Programming: Python, Java, or Scala for graph data processing and integration • AI Integration: Experience with graph-enhanced RAG systems, vector embeddings in graph context, GraphRAG implementations • Data Processing: ETL pipelines, data transformation, real-time data 
streaming (Kafka, Apache Spark) • Cloud Platforms: Neo4j Aura, Azure integration, containerized deployments • APIs: Neo4j drivers, REST APIs, GraphQL integration • Analytics: Graph algorithms (PageRank, community detection, shortest path, centrality measures) • Monitoring: Neo4j monitoring tools, performance profiling, query optimization • Integration: Elasticsearch integration, vector database connections, multi-modal data handling Specific Technical Requirements • Knowledge Graph Construction: Entity resolution, relationship extraction, ontology modelling • Cypher Expertise: Advanced Cypher queries, stored procedures, custom functions • Scalability: Clustering, sharding, horizontal scaling strategies • Security: Graph-level security, role-based access control, data encryption • Version Control: Graph schema versioning, migration strategies • Backup & Recovery: Graph database backup strategies, disaster recovery planning Industry Context Understanding • Fund Administration: Understanding of fund structures, capital calls, distributions, and investor relationships • Financial Compliance: Knowledge of regulatory requirements and audit trails in financial services • Investment Workflows: Understanding of due diligence processes, portfolio management, and investor reporting • Legal Document Structures: Familiarity with LPA documents, subscription agreements, and fund formation documents Collaboration Requirements • AI/ML Team: Work closely with GenAI engineers to optimize graph-based AI applications • Data Architecture Team: Collaborate on overall data architecture and integration strategies • Backend Developers: Integrate graph databases with application APIs and microservices • DevOps Team: Ensure proper deployment, monitoring, and maintenance of graph database infrastructure • Business Stakeholders: Translate business requirements into effective graph models and queries Performance Expectations • Query Performance: Ensure sub-second response times for standard 
relationship queries • Scalability: Support 100k+ users with concurrent access to graph data • Accuracy: Maintain data consistency and relationship integrity across complex fund structures • Availability: Ensure 99.9% uptime for critical graph database services • Integration Efficiency: Seamless integration with existing FundPanel.io systems and new AI services This role offers the opportunity to work at the intersection of advanced graph technology and artificial intelligence, creating innovative solutions that will transform how fund administrators understand and leverage their data relationships.
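As a rough illustration of the graph algorithms this role calls for, here is PageRank in miniature on a toy fund/investor graph, in plain Python. In practice this would run inside Neo4j via the Graph Data Science library and Cypher; the node names below are invented for the sketch:

```python
# Minimal PageRank on a toy fund/investor graph (pure Python).
# graph maps each node to the nodes it links to.

def pagerank(graph, damping=0.85, iterations=50):
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, targets in graph.items():
            if targets:
                share = damping * rank[node] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:  # dangling node: spread its rank evenly
                for t in nodes:
                    new_rank[t] += damping * rank[node] / n
        rank = new_rank
    return rank

graph = {
    "LP_A": ["Fund_1"],
    "LP_B": ["Fund_1", "Fund_2"],
    "Fund_1": ["Company_X"],
    "Fund_2": ["Company_X"],
    "Company_X": [],
}
ranks = pagerank(graph)
top = max(ranks, key=ranks.get)  # the most "central" entity
```

In Neo4j the equivalent is a one-line call to `gds.pageRank` over a projected graph, with the scores written back as node properties for downstream queries.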

Posted 19 hours ago

Apply

0 years

0 Lacs

India

Remote

🤖 Machine Learning Intern – Remote | Learn AI by Building It 📍 Location: Remote / Virtual 💼 Type: Internship (Unpaid) 🎁 Perks: Certificate After Completion || Letter of Recommendation (6 Months) 🕒 Schedule: 5–7 hrs/week | Flexible Timing Join Skillfied Mentor as a Machine Learning Intern and move beyond online courses. You’ll work on real datasets, build models, and see your algorithms in action — all while gaining experience that hiring managers actually look for. Whether you're aiming for a career in AI, data science, or automation — this internship will build your foundation with hands-on learning. 🔧 What You’ll Do: Work with real datasets to clean, preprocess, and transform data Build machine learning models using Python, NumPy, Pandas, Scikit-learn Perform classification, regression, and clustering tasks Use Jupyter Notebooks for experimentation and documentation Collaborate on mini-projects and model evaluation tasks Present insights in simple, digestible formats 🎓 What You’ll Gain: ✅ Full Python course included during the internship ✅ Hands-on projects to showcase on your resume or portfolio ✅ Certificate of Completion + LOR (6-month internship) ✅ Experience with industry-relevant tools & techniques ✅ Remote flexibility — manage your time with just 5–7 hours/week 🗓️ Application Deadline: 5th August 2025 👉 Apply now to start your ML journey with Skillfied Mentor
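As a taste of the clustering tasks described above, here is k-means from scratch in plain Python; in the internship you would normally reach for scikit-learn's `KMeans` instead. The 2-D points are toy data:

```python
# k-means clustering from scratch (stdlib only): assign points to the
# nearest centroid, recompute centroids as cluster means, repeat.
import math

def kmeans(points, k, iterations=20):
    centroids = points[:k]  # naive init: first k points
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups of points:
points = [(1, 1), (1.5, 2), (1, 1.5), (8, 8), (8.5, 9), (9, 8)]
centroids, clusters = kmeans(points, k=2)
```

The scikit-learn version replaces all of this with `KMeans(n_clusters=2).fit(points)`, plus smarter initialization (k-means++).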

Posted 19 hours ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview We are seeking a self-driven, inquisitive, and curious Database Site Reliability Engineer (SRE) who drives the reliability, performance, and availability, including data security and access control, of the SQL and NoSQL database systems leveraged by the frontend application and business transactions. This is a critical enabler for achieving high resiliency during operations and for continuous improvement through design during the software development lifecycle. The SRE database support engineer is an integral part of the global team, whose main purpose is to provide a delightful customer experience for users of the global consumer, commercial, supply chain and enablement functions in the PepsiCo digital products application portfolio of 260+ applications, enabling a full SRE-practice incident prevention / proactive resolution model. The scope of this role is focused on the modern, cloud-native application portfolio. It requires a blend of technical expertise in database administration/engineering, SRE tools, modern application architecture, IT operations experience, and analytics and influencing skills. Responsibilities Reporting directly to the Modern IT Operations SRE Enablement Associate Director, this role is responsible for enabling and executing the pre-emptive diagnosis of PepsiCo DPA applications toward the service performance, reliability and availability expected by our customers and internal groups. Ensure database availability, performance, and security in production environments. Instrument, monitor, and proactively collaborate with development teams to optimize schema design, indexing, and query plans. Automate tasks using scripts or infrastructure-as-code tools. Understanding of cloud infrastructure and services. Ability to design and implement database replication and failover solutions.
Provide insights while troubleshooting and resolving database-related incidents and outages. Stay up to date with emerging database technologies and best practices. Work closely with customer-facing support teams to evolve and empower them with SRE insights. Collaborate effectively with development and operations teams. Participate in on-call support, orchestrate blameless post-mortems, and encourage the practice within the organization. Provide input to the definition, collection, and analysis of data on relevant products, systems, and their interactions, toward resiliency of the IT ecosystem, especially where it impacts customer satisfaction, revenue, or IT productivity. Actively engage in and drive AIOps adoption across teams. Qualifications Bachelor's degree in Computer Science, Information Technology, or related field. 8-12 years of professional experience as a database administrator and/or database SRE with application knowledge. Hands-on experience with Microsoft SQL Server, PostgreSQL, MySQL, and at least one leading NoSQL technology such as MongoDB, Cassandra, or Couchbase. Proficiency in writing complex SQL queries, stored procedures, and functions. Experience building self-heal scripts or remediation runbooks (Python, PowerShell, Bash); Azure Logic Apps and Azure Functions; integration with ServiceNow and AppDynamics APIs. Exposure to replication, clustering, and high-availability setups. Experience with cloud platforms (AWS, Azure, Google Cloud, etc.). Solid understanding of database security, auditing, and compliance requirements. Familiarity with DevOps tools and practices (CI/CD, version control, infrastructure automation). Excellent problem-solving and analytical skills. Strong communication and documentation skills. Preferred Qualifications: Certifications such as Microsoft Certified: Azure Database Administrator Associate, MongoDB Certified DBA, or similar.
Experience with cloud platforms (AWS RDS, Azure SQL, Google Cloud Spanner, etc.). Exposure to containerized database deployments using Docker or Kubernetes. Leadership and Soft skills: Driving for Results: Demonstrates perseverance and resilience in the pursuit of goals. Confronts and works to resolve tough issues. Exhibits a “can-do” attitude and a willingness to take on significant challenges Decision Making: Quickly analyses complex problems to find actionable, pragmatic solutions. Sees connections in data, events, trends, etc. Consistently works against the right priorities Collaborating: Collaborates well with others to deliver results. Keeps others informed so there are no unnecessary surprises. Effectively listens to and understands what other people are saying. Communicating and Influencing: Ability to build convincing, persuasive, and logical storyboards. Strong executive presence. Able to communicate effectively and succinctly, both verbally and on paper. Motivating and Inspiring Others: Demonstrates a sense of passion, enjoyment, and pride about their work. Demonstrates a positive attitude in the workplace. Embraces and adapts well to change. Creates a work environment that makes work rewarding and enjoyable.
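The self-heal scripts and remediation runbooks this role mentions reduce, heavily simplified, to a probe-retry-remediate-escalate loop. Everything below is simulated for illustration; a real runbook would call the database driver for the health probe and the ServiceNow / AppDynamics / Azure Functions APIs named in the posting for remediation and escalation:

```python
# Sketch of a self-heal runbook step: probe health, retry with exponential
# backoff, attempt one automated remediation, then escalate if still down.
import time

def self_heal(check_health, remediate, retries=3, backoff=0.01):
    for attempt in range(retries):
        if check_health():
            return "healthy"
        time.sleep(backoff * (2 ** attempt))  # exponential backoff
    remediate()  # one automated remediation attempt (e.g. failover)
    return "healthy" if check_health() else "escalate"

# Simulated outage that clears after remediation:
state = {"up": False}
result = self_heal(
    check_health=lambda: state["up"],
    remediate=lambda: state.update(up=True),
)
```

The "escalate" return value is where a real script would raise a ServiceNow incident instead of paging a human for a transient blip.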

Posted 19 hours ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Requisition ID: 1631311 The opportunity EY is looking for a Senior Consultant/Consultant, Analytics, with expertise in one of the following industries preferred (not mandatory): Banking, Insurance. Your Key Responsibilities Develop analytics-based decision-making frameworks for clients across the Banking and Insurance sectors Project Management Client Management Support business development and new analytics solution development activities Skills And Attributes For Success Domain expertise in one of the industries (Banking, Insurance) preferred, not mandatory Statistical modelling (Logistic / Linear regression, GLM modelling, Time-series forecasting, Scorecard development etc.) Hands-on experience in one or more statistics tools - SAS, Python & R Experience in Tableau, QlikView would be a plus. Data mining experience - Clustering, Segmentation Machine learning and Python experience would be a plus. To qualify for the role you must have a B.Tech from a top-tier engineering school or a Master's in Statistics / Economics from a top university; minimum 6 years of relevant experience, with minimum 1 year of managerial experience, for Senior Consultant; minimum 1 year for Associate Consultant; minimum 3 years for Consultant Ideally you'll also have Strong communication, facilitation, relationship-building, presentation and negotiation skills. Be highly flexible, adaptable, and creative. Comfortable interacting with senior executives (within the firm and at the client) Strong leadership skills and supervisory responsibility. What We Look For People with the ability to work in a collaborative way to provide services across multiple client departments while adhering to commercial and legal requirements. You will need a practical approach to solving issues and complex problems with the ability to deliver insightful and practical solutions. What Working At EY Offers EY is committed to being an inclusive employer and we are happy to consider flexible working arrangements.
We strive to achieve the right balance for our people, enabling us to deliver excellent client service whilst allowing you to build your career without sacrificing your personal priorities. While our client-facing professionals can be required to travel regularly, and at times be based at client sites, our flexible working arrangements can help you to achieve a lifestyle balance. About EY As a global leader in assurance, tax, transaction and advisory services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we’ll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible
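One of the techniques the EY posting names, logistic regression (the workhorse behind the credit scorecards it also mentions), can be sketched from scratch. Real engagements would use SAS, Python (scikit-learn/statsmodels), or R; the one-feature data below is invented:

```python
# Bare-bones logistic regression fitted with gradient descent (stdlib only).
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=3000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradient of the mean cross-entropy loss
        grad_w = sum((sigmoid(w * x + b) - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum((sigmoid(w * x + b) - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy 1-D data: the label flips around x = 3
xs = [0, 1, 2, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
prob_low, prob_high = sigmoid(w * 1 + b), sigmoid(w * 5 + b)
```

In a scorecard setting the fitted coefficients would then be scaled into points per attribute; the estimation step is the same.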

Posted 19 hours ago

Apply

0.0 years

0 - 0 Lacs

Coimbatore, Tamil Nadu

On-site

Job Title: Technical Internship – Programmer (AI/ML Focus) Location: Coimbatore, Tamil Nadu Company: Angler Technologies Duration: Internship, with potential for full-time conversion Eligibility: Recent graduates or final-year students from Engineering/Science with specialization in Artificial Intelligence and Machine Learning Job Description: We are looking for enthusiastic and innovative AI/ML Interns to join our technical team at Angler Technologies. This is a unique opportunity for fresh graduates to work on real-world AI/ML projects, build intelligent systems, and gain hands-on experience in programming, data science, and modern AI frameworks. Key Responsibilities: Assist in the development, training, and deployment of Machine Learning and AI models Work with data collection, preprocessing, and annotation tasks Collaborate with the product and software teams to integrate AI/ML features into applications Support documentation, testing, and debugging of code modules Participate in code reviews and knowledge-sharing sessions Required Skills & Qualifications: Strong foundation in programming (Python, ASP.NET, HTML, CSS, JavaScript preferred) Understanding of basic AI/ML concepts (classification, regression, clustering, NLP, etc.) Analytical thinking and problem-solving mindset Good communication and teamwork skills Eagerness to learn and apply new technologies Preferred Background: B.E/B.Tech (AI/ML) B.Sc/BCA/M.Sc/MCA with specialization in AI/ML/Data Science Academic or hobby projects in AI/ML will be a bonus Perks & Benefits: Stipend based on performance and project contributions Hands-on training and mentorship Opportunity to work on live projects Certificate of Internship & Letter of Recommendation Potential for full-time employment post internship Job Types: Full-time, Permanent, Fresher Pay: ₹4,643.26 - ₹8,597.81 per month Benefits: Paid sick time Paid time off Schedule: Day shift Supplemental Pay: Performance bonus Work Location: In person

Posted 21 hours ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary Experience in applying machine learning techniques, Natural Language Processing or Computer Vision using TensorFlow, PyTorch Strong analytical and problem-solving skills Solid software engineering skills across multiple languages including but not limited to Java or Python, C/C++ Build and deploy end-to-end ML models and leverage metrics to support predictions, recommendations, search, and growth strategies Deep understanding of ML techniques such as: classification, clustering, deep learning, optimization methods, supervised and unsupervised techniques Proven ability to apply, debug, and develop machine learning models Establish scalable, efficient, automated processes for data analyses, model development, validation and implementation. Choose suitable DL algorithms, software, hardware and suggest integration methods. Ensure AI/ML solutions are developed, and validations are performed, in accordance with Responsible AI guidelines and standards Closely monitor model performance and ensure model improvements are made post project delivery Coach and mentor our team as we build scalable machine learning solutions Strong communication skills and an easy-going attitude Oversee development and implementation of assigned programs and guide teammates Carry out testing procedures to ensure systems are running smoothly Ensure that systems satisfy quality standards and procedures Build and manage strong relationships with stakeholders and various teams internally and externally Provide direction and structure to assigned project activities, establishing clear, precise goals, objectives and timeframes; run project governance calls with senior stakeholders Strategy As the Squad Lead of the AI/ML Delivery team, the candidate is expected to lead squad delivery for AI/ML. Business Understand the business requirements, execute the ML solutioning, and ensure delivery commitments are met on time and on schedule.
Processes Design and Delivery of AI/ML Use Cases RAI, Security & Governance Model Validation & Improvements Stakeholder Management People & Talent Manage the team in terms of project assignments and deadlines Manage a team dedicated to reviewing models related to unstructured and structured data. Hire and nurture talent as required. Key Responsibilities Risk Management Ownership of the delivery, highlighting various risks in a timely manner to the stakeholders. Identifying a proper remediation plan for the risks, with a proper risk roadmap. Governance Awareness and understanding of the regulatory framework in which the Group operates, and the regulatory requirements and expectations relevant to the role. Regulatory & Business Conduct Display exemplary conduct and live by the Group's Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Lead the team to achieve the outcomes set out in the Bank's Conduct Principles: [Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment.] * Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters. [Insert local regulator e.g. PRA/FCA prescribed responsibilities and Rationale for allocation].
[Where relevant - Additionally, for subsidiaries or relevant non-subsidiaries] Serve as a Director of the Board of [insert name of entities] Exercise authorities delegated by the Board of Directors and act in accordance with Articles of Association (or equivalent) Key stakeholders Business Stakeholders AIML Engineering Team AIML Product Team Product Enablement Team SCB Infrastructure Team Interfacing Program Team Skills And Experience Use NLP, Vision and ML techniques to bring order to unstructured data Experience in extracting signal from noise in large unstructured datasets a plus Work within the Engineering Team to design, code, train, test, deploy and iterate on enterprise-scale machine learning systems Work alongside an excellent, cross-functional team across Engineering, Product and Design to create solutions and try various algorithms to solve the problem Stakeholder Management Qualifications Masters with specialisation in Technology, with certification in AI and ML 5-12 years of relevant hands-on experience in developing and delivering AI solutions About Standard Chartered We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.
Together We Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term What We Offer In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations. Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holiday, which is combined to 30 days minimum. Flexible working options based around home and office locations, with flexible working patterns. Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.

Posted 21 hours ago

Apply

3.0 years

0 Lacs

Rajarhat, West Bengal, India

On-site

About Us We are a fast-growing tech-driven company revolutionizing the logistics space through digital platforms. As we scale our products and reach, we're looking for a data-savvy, AI-forward SEO Specialist to lead our organic growth strategy and ensure we're discoverable by the right audience at the right time. Role Overview As an SEO Specialist, you will be responsible for developing and executing strategies to increase our organic visibility across search engines. You should be deeply analytical, up-to-date with the latest SEO trends, and comfortable using modern AI-powered tools to optimize content, perform keyword research, automate repetitive tasks, and extract insights. Key Responsibilities - Develop and execute on-page and off-page SEO strategies to improve search rankings and drive quality traffic. - Perform advanced keyword research and competitive analysis using both traditional and AI tools (e.g., SEMrush, Ahrefs, ChatGPT, SurferSEO, etc.). - Optimize website content, metadata, internal linking, and user experience based on SEO best practices. - Write content and collaborate with the product team to create SEO-friendly content using AI-driven ideation, briefs, and optimization tools. - Monitor, report, and analyze SEO performance using Google Analytics, Search Console, and AI-powered dashboards. - Identify and resolve technical SEO issues (site speed, crawlability, indexation). - Stay updated with SEO trends, search engine algorithm changes, and AI developments to continually evolve our approach. - Leverage AI for tasks like content clustering, keyword gap analysis, SERP intent prediction, and backlink analysis. Requirements - 3+ years of proven SEO experience, preferably in a tech or SaaS environment. - Strong understanding of search engine algorithms, ranking factors, and core SEO principles. - Proficiency in tools like Google Search Console, Google Analytics, Screaming Frog, Ahrefs/SEMrush, and SurferSEO.
- Experience using AI tools (e.g., ChatGPT, Jasper, Frase, Clearscope) to augment SEO workflows. - Basic understanding of HTML, CSS, and website architecture. - Ability to analyze data, draw insights, and translate them into action. - Strong written and verbal communication skills. - Bonus: Experience in international or multilingual SEO, programmatic SEO, or AI-powered content generation at scale.
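The keyword gap analysis mentioned above reduces, at its core, to a set difference between your ranking keywords and a competitor's, ordered by opportunity size. A minimal sketch, with invented keywords and volumes standing in for an Ahrefs/SEMrush export:

```python
# Keyword-gap analysis in miniature: competitor keywords we don't rank for
# yet, sorted by monthly search volume (descending).

def keyword_gap(ours, competitor):
    """Both args: dict of keyword -> monthly search volume."""
    gaps = {kw: vol for kw, vol in competitor.items() if kw not in ours}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

ours = {"freight tracking": 900, "logistics software": 1200}
competitor = {"logistics software": 1500, "fleet management api": 700,
              "last mile delivery": 2400}
gaps = keyword_gap(ours, competitor)
# gaps -> [("last mile delivery", 2400), ("fleet management api", 700)]
```

Real workflows layer on intent classification and difficulty scores, but the ranking-by-missed-volume logic is the same.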

Posted 22 hours ago

Apply

2.0 years

0 Lacs

India

On-site

The Role We are hiring an AI/ML Developer (India) to join our India team, in support of a large global client! You will be responsible for developing, deploying, and maintaining AI and machine learning models. Your expertise in Python, cloud services, databases, and big data technologies will be instrumental in creating scalable and efficient AI applications. What You Will Be Doing •Develop, train, and deploy machine learning models for predictive analytics, classification, and clustering. •Implement AI-based solutions using frameworks such as TensorFlow, PyTorch, and Scikit-learn. •Work with cloud platforms including AWS (SageMaker, Lambda, S3), Azure, and Google Cloud (Vertex AI). •Integrate and fine-tune Hugging Face transformer models (e.g., BERT, GPT) for NLP tasks such as text classification, summarization, and sentiment analysis. •Develop AI automation solutions, including chatbot implementations using Microsoft Teams and Azure AI. •Work with big data technologies such as Apache Spark and Snowflake for large-scale data processing and analytics. •Design and optimize ETL pipelines for data quality management, transformation, and validation. •Utilize SQL, MySQL, PostgreSQL, and MongoDB for database management and query optimization. •Create interactive data visualizations using Tableau and Power BI to drive business insights. •Work with Large Language Models (LLMs) for AI-driven applications, including fine-tuning, training, and deploying models for conversational AI, text generation, and summarization. •Develop and implement Agentic AI systems, enabling autonomous decision-making AI agents that can adapt, learn, and optimize tasks in real time. What You Bring Along •2+ years of experience applying AI to practical uses. •Strong programming skills in Python and SQL, and experience with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
•Knowledge of basic algorithms and object-oriented and functional design principles •Proficiency in using data analytics libraries like Pandas, NumPy, Matplotlib, and Seaborn. •Hands-on experience with cloud platforms such as AWS, Azure, and Google Cloud. •Experience with big data processing using Apache Spark and Snowflake. •Knowledge of NLP and AI model implementations using Hugging Face and cloud-based AI services. •Strong understanding of database management, query optimization, and data warehousing. •Experience with data visualization tools such as Tableau and Power BI. •Ability to work in a collaborative environment and adapt to new AI technologies. •Strong analytical and problem solving skills. Education: •Bachelor’s degree in computer science, Data Science, AI/ML, or a related field.

Posted 22 hours ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

JD: Data Scientist

Responsibilities:
- Leverage strong ML model experience in complex data environments to achieve business objectives in innovative and efficient ways.
- Utilize a solid background in mathematics and statistics to inform model development and evaluation.
- Design, architect, and develop robust machine learning solutions, with a focus on integrating Large Language Models (LLMs) where applicable.
- Collaborate effectively within Agile Scrum teams, contributing to iterative development and continuous improvement.
- Document business processes, workflows, and requirements clearly and comprehensively.
- Engage in close collaboration with various domains within the organization to ensure alignment and understanding of business needs.
- Participate in collaborative conceptualization sessions to brainstorm and refine project ideas.

Mandatory Skills:
- Proficiency in Python, Machine Learning, REST APIs, and SQL.
- Experience with data processing, cleansing, and verification to ensure data integrity for analysis.
- Ability to conduct data quality checks and exploratory analyses to inform model development.
- Demonstrated programming skills in relevant languages, particularly Python and API development.
- Ability to build end-to-end machine learning models, including data structures and transformation processes.
- Strong understanding of statistical modeling techniques (e.g., Regression, Clustering, Decision Trees, Logistic Regression).
- Familiarity with machine learning algorithms (e.g., KNN, Random Forests, Ensemble Methods, Bayesian/Markov Networks).
- Knowledge of data mining concepts and experience with data visualization tools and dashboards.

Preferred Skills:
- Experience with Large Language Models (LLMs) such as GPT, BERT, or similar architectures.
- Understanding of natural language processing (NLP) techniques and their applications in business contexts.
- Familiarity with advanced research topics, including deep learning, kernel methods, spectral methods, and forecasting.
- Ability to integrate end-to-end ML solutions into product suites and business functions, with a focus on LLM applications.
- Ability to design technical frameworks for various use cases, particularly those involving text data and language understanding.
- Ability to identify opportunities to automate analytical processes, data extraction, and flow processes, especially in the context of LLMs.
- Ability to propose hypotheses and design experiments to address specific problems, leveraging LLM capabilities where relevant.

Additional Responsibilities:
- Stay updated with the latest advancements in machine learning and natural language processing, particularly in the context of LLMs.
- Mentor junior team members on best practices in machine learning and LLM implementation.
- Contribute to the development of best practices and standards for machine learning and LLM projects within the organization.

Why Join Us

We are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.

Business Insight

At Société Générale, we are convinced that people are drivers of change, and that the world of tomorrow will be shaped by all their initiatives, from the smallest to the most ambitious. Whether you're joining us for a period of months, years, or your entire career, together we can have a positive impact on the future. Creating, daring, innovating, and taking action are part of our DNA. If you too want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis, and develop or strengthen your expertise, you will feel right at home with us!

Still hesitating? You should know that our employees can dedicate several days per year to solidarity actions during their working hours, including sponsoring people struggling with their orientation or professional integration, participating in the financial education of young apprentices, and sharing their skills with charities. There are many ways to get involved.

We are committed to supporting the acceleration of our Group's ESG strategy by implementing ESG principles in all our activities and policies. These principles are reflected in our business activities (ESG assessment, reporting, project management, and IT activities), our work environment, and our responsible practices for environmental protection.

Posted 22 hours ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings! We are looking for a skilled Splunk Administrator with hands-on experience in deploying and managing Splunk Enterprise and Splunk Cloud. The ideal candidate should have experience with Splunk Enterprise Security (ES), Splunk UBA, and IT Service Intelligence (ITSI). This role requires strong technical skills, along with the ability to communicate effectively with customers.

Roles & Responsibilities:

✅ Splunk Deployment & Administration: Install, configure, and manage Splunk Enterprise and Splunk Cloud. Handle indexers, search heads, forwarders, and clustering. Optimize Splunk performance, storage, and scalability.

✅ Security & Splunk Monitoring Solutions: Implement and manage Splunk Enterprise Security (ES), Splunk UBA, and ITSI. Configure correlation searches, threat intelligence feeds, risk-based alerting (RBA), and dashboards. Troubleshoot security-related issues within Splunk.

✅ Customer Interaction & Troubleshooting: Engage with customers to understand their requirements and provide technical guidance. Troubleshoot and resolve Splunk-related issues, including log ingestion, parsing, and data onboarding.

✅ Splunk Architecture & Implementation: Design, deploy, and optimize Splunk Enterprise and Splunk Cloud environments. Lead end-to-end Splunk implementations, migrations, and upgrades. Manage search head clustering, indexer clustering, and data retention policies.

✅ Security & Observability Solutions: Architect and configure Splunk Enterprise Security (ES), Splunk UBA, and ITSI. Implement risk-based alerting (RBA), custom correlation searches, and advanced analytics. Integrate Splunk with SOAR, cloud platforms (AWS, Azure, GCP), and third-party security tools.

✅ Team Leadership & Customer Engagement: Lead and mentor a team of Splunk Administrators and Engineers. Interact with customers to gather requirements, design solutions, and conduct workshops. Review and improve Splunk use cases, dashboards, and data models.

✅ Optimization & Automation: Develop custom scripts (Python, Bash, PowerShell) for automation and orchestration. Tune Splunk performance, search queries, and indexing strategies. Implement best practices for data onboarding, parsing, and CIM compliance.

Interested candidates can share their updated resume to gayathri.ramaraj@locuz.com along with the details below:
Current CTC:
Expected CTC:
Notice Period:

Posted 23 hours ago


Exploring Clustering Jobs in India

The job market for clustering roles in India is thriving, with numerous opportunities available for job seekers with expertise in this area. Clustering professionals are in high demand across various industries, including IT, data science, and research. If you are considering a career in clustering, this article will provide you with valuable insights into the job market in India.

Top Hiring Locations in India

Here are 5 major cities in India actively hiring for clustering roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi

Average Salary Range

The average salary range for clustering professionals in India varies based on experience levels. Entry-level positions may start at around INR 3-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-20 lakhs per annum.

Career Path

In the field of clustering, a typical career path may look like:
- Junior Data Analyst
- Data Scientist
- Senior Data Scientist
- Tech Lead

Related Skills

Apart from expertise in clustering, professionals in this field are often expected to have skills in:
- Machine Learning
- Data Analysis
- Python/R programming
- Statistics

Interview Questions

Here are 25 interview questions for clustering roles:
- What is clustering and how does it differ from classification? (basic)
- Explain the K-means clustering algorithm. (medium)
- What are the different types of distance metrics used in clustering? (medium)
- How do you determine the optimal number of clusters in K-means clustering? (medium)
- What is the Elbow method in clustering? (basic)
- Define hierarchical clustering. (medium)
- What is the purpose of clustering in machine learning? (basic)
- Can you explain the difference between supervised and unsupervised learning? (basic)
- What are the advantages of hierarchical clustering over K-means clustering? (advanced)
- How does the DBSCAN clustering algorithm work? (medium)
- What is the curse of dimensionality in clustering? (advanced)
- Explain the concept of silhouette score in clustering. (medium)
- How do you handle missing values in clustering algorithms? (medium)
- What is the difference between agglomerative and divisive clustering? (advanced)
- How would you handle outliers in clustering analysis? (medium)
- Can you explain the concept of cluster centroids? (basic)
- What are the limitations of K-means clustering? (medium)
- How do you evaluate the performance of a clustering algorithm? (medium)
- What is the role of inertia in K-means clustering? (basic)
- Describe the process of feature scaling in clustering. (basic)
- How does the GMM algorithm differ from K-means clustering? (advanced)
- What is the importance of feature selection in clustering? (medium)
- How can you assess the quality of clustering results? (medium)
- Explain the concept of cluster density in DBSCAN. (advanced)
- How do you handle high-dimensional data in clustering? (medium)
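Several of these questions (cluster centroids, inertia, the elbow method) can be grounded with a small worked example. The sketch below is a minimal pure-Python K-means, intended only to illustrate the concepts asked about above; it is not a production implementation, and all names and the toy dataset are illustrative.

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=100, seed=0):
    """Plain K-means: repeat (1) assign each point to its nearest
    centroid, (2) move each centroid to the mean of its points."""
    rng = random.Random(seed)
    centroids = [tuple(map(float, p)) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        new = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
        if new == centroids:  # assignments stable -> converged
            break
        centroids = new
    # Inertia: total squared distance from each point to its nearest centroid.
    inertia = sum(min(dist2(p, c) for c in centroids) for p in points)
    return centroids, inertia

# Two well-separated toy clusters.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]

# Elbow method: inertia drops sharply until k reaches the true
# number of clusters (here 2), then flattens out.
inertias = {k: kmeans(points, k)[1] for k in (1, 2, 3)}
```

Plotting `inertias` against k and looking for the bend is exactly the elbow method mentioned in the questions; the silhouette score is a common complementary check that also penalizes poorly separated clusters.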

Closing Remark

As you venture into the world of clustering jobs in India, remember to stay updated with the latest trends and technologies in the field. Equip yourself with the necessary skills and knowledge to stand out in interviews and excel in your career. Good luck on your job search journey!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies