Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
10.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About the Company: Creospan is a growing tech collective of makers, shakers, and problem solvers, offering solutions today that will propel businesses into a better tomorrow. “Tomorrow’s ideas, built today!” In addition to working alongside equally brilliant and motivated developers, our consultants appreciate the opportunity to learn and apply new skills and methodologies across different clients and industries.

Job Title: Data Modeler
Location: Pune (pan-India relocation can be considered; strong preference is Pune)
Hybrid: 3 days WFO, 2 days WFH
Shift timings: UK working hours (9 AM to 5 PM GMT)
Notice period: Immediate
Career gap: Up to 3 months (strictly not more than that)

Project Overview: Creation and management of business data models in all their forms, including conceptual models, logical data models and physical data models (relational database designs, message models and others). Requires an expert-level understanding of relational database concepts, dimensional database concepts, database architecture and design, and ontology and taxonomy design; a background working with key data domains such as account, holding and transactions within the securities-servicing or asset-management space; expertise in designing data-driven solutions on Snowflake for complex business needs; and knowledge of the entire application lifecycle, including design, development, deployment, operation and maintenance in an Agile and DevOps culture.

Role: This person strengthens the impact of, and provides recommendations on, data models and architecture that must be available and shared consistently across the TA organization, through the identification, definition and analysis of how data-related assets aid business outcomes. The Data Modeler/Architect is responsible for making data trusted, understood and easy to use, and owns the entire lifecycle of the data architectural assets, from design and development to deployment, operation and maintenance, with a focus on automation and quality.
Must Have Skills:
10+ years of experience in enterprise-level data architecture, data modelling, and database engineering
Expertise in OLAP and OLTP design, data warehouse solutions, and ELT/ETL processes
Proficiency in data modelling concepts and practices such as normalization, denormalization, and dimensional modelling (Star Schema, Snowflake Schema, Data Vault, Medallion data lake)
Experience with Snowflake-specific features, including clustering, partitioning, and schema design best practices
Proficiency in enterprise modelling tools such as Erwin, PowerDesigner, and IBM InfoSphere
Strong experience with Microsoft Azure data pipelines (Data Factory, Synapse, SQL DB, Cosmos DB, Databricks)
Familiarity with Snowflake’s native tools and services, including Snowflake Data Sharing, Snowflake Streams & Tasks, and Snowflake Secure Data Sharing
Strong knowledge of SQL performance tuning, query optimization, and indexing strategies
Strong verbal and written communication skills for collaborating with both technical teams and business stakeholders
Working knowledge of BIAN, ACORD, and ESG risk data integration

Nice to Haves:
At least 3 years of securities-servicing or asset-management/investment experience is highly desired
Understanding of the software development life cycle, including planning, development, quality assurance, change management and release management
Strong problem-solving skills and the ability to troubleshoot complex issues
Excellent communication and collaboration skills to work effectively in a team environment
Self-motivated, with the ability to work independently with minimal supervision
Excellent communication skills, with experience communicating with both technical and non-technical teams
Deep understanding of data and information architecture, especially in the asset-management space
Familiarity with MDM, data vault, and data warehouse design and implementation techniques
Business domain, data/content and process understanding (which are more important than technical skills)
Being techno-functional is a plus
Good presentation skills in creating data architecture diagrams
Data modelling and information classification expertise at the project and enterprise level
Understanding of common information architecture frameworks and information models
Experience with distributed data and analytics platforms in cloud and hybrid environments, plus an understanding of a variety of data access and analytic approaches (for example, microservices and event-based architectures)
Knowledge of problem analysis, structured analysis and design, and programming techniques
Python, R
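The dimensional-modelling concepts this posting lists (star schema, fact and dimension tables) can be sketched concretely. The schema below is a hypothetical illustration, not the employer's actual model: every table and column name is invented, and SQLite stands in for Snowflake purely so the sketch is self-contained.

```python
import sqlite3

# Hypothetical star schema for a securities-servicing domain: one wide fact
# table of transactions referencing two small, denormalized dimension tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_account (
    account_key  INTEGER PRIMARY KEY,
    account_name TEXT,
    account_type TEXT
);
CREATE TABLE dim_security (
    security_key INTEGER PRIMARY KEY,
    ticker       TEXT,
    asset_class  TEXT
);
CREATE TABLE fact_transaction (
    transaction_id INTEGER PRIMARY KEY,
    account_key    INTEGER REFERENCES dim_account(account_key),
    security_key   INTEGER REFERENCES dim_security(security_key),
    trade_date     TEXT,
    quantity       REAL,
    amount         REAL
);
""")
cur.execute("INSERT INTO dim_account VALUES (1, 'Pension Fund A', 'institutional')")
cur.execute("INSERT INTO dim_security VALUES (1, 'ACME', 'equity')")
cur.execute("INSERT INTO fact_transaction VALUES (1, 1, 1, '2024-01-02', 100, 5000.0)")
conn.commit()

# A typical dimensional query: aggregate fact measures by dimension attributes.
row = cur.execute("""
    SELECT a.account_type, s.asset_class, SUM(f.amount)
    FROM fact_transaction f
    JOIN dim_account  a USING (account_key)
    JOIN dim_security s USING (security_key)
    GROUP BY a.account_type, s.asset_class
""").fetchone()
print(row)  # ('institutional', 'equity', 5000.0)
```

The point of the star layout is that the fact table holds measures and foreign keys while the dimensions hold descriptive attributes, so analytical joins stay shallow.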
Posted 2 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more at careers.bms.com/working-with-us.

Roles & Responsibilities
Develop, maintain, and manage advanced reporting, analytics, dashboards and other BI solutions for HR stakeholders
Partner with senior analysts to build visualizations that communicate insights and recommendations to stakeholders at various levels of the organization
Partner with HR senior analysts to implement statistical models, decision support models, and optimization techniques to solve complex business problems
Collaborate with cross-functional teams to gather and analyse data, define problem statements and identify KPIs for decision-making
Perform and document data analysis, data validation, and data mapping/design
Collaborate with HR stakeholders to understand business objectives and translate them into projects and actionable recommendations
Stay up to date with industry trends, emerging methodologies, and best practices related to reporting, analytics/visualization optimization and decision support
The HR Data Analyst will play a critical role in ensuring the availability and integrity of HR data to drive informed decision-making.
Skills And Competencies
Strong analytical thinking and problem-solving skills, with working knowledge of statistical analysis, optimization techniques, and decision support models
Ability to present complex information to non-technical stakeholders in a clear and concise manner; skilled in creating relevant and engaging PowerPoint presentations
Proficiency in data analysis techniques, including the use of Tableau, ETL tools (Python, R, Domino), and statistical software packages
Advanced skills in Power BI, Power Query, DAX, and data visualization best practices
Experience with data modelling, ETL processes, and connecting to various data sources
Solid understanding of SQL and relational databases
Exceptional attention to detail, with the ability to proactively detect data anomalies and ensure data accuracy
Ability to work collaboratively in cross-functional teams and manage multiple projects simultaneously
Strong capability to work with large datasets, ensuring the accuracy and reliability of analyses
Strong business acumen, with the ability to translate analytical findings into actionable insights and recommendations
Working knowledge of data modelling to support analytics needs
Experience conducting thorough Exploratory Data Analysis (EDA) to summarize, visualize, and validate data quality and trends
Ability to apply foundational data science or basic machine learning techniques (such as regression, clustering, or forecasting) when appropriate

Experience
Bachelor's or master's degree in a relevant field such as Statistics, Mathematics, Economics, Operations Research or a related discipline
Minimum of 3 years of total relevant experience
Business experience with visualization tools (e.g., Power BI)
Experience with data querying languages (e.g., SQL) and scripting languages (Python)
Problem-solving skills with understanding and practical experience across most statistical modelling and machine learning techniques
Academic knowledge alone is also acceptable.
Ability to handle and maintain the confidentiality of highly sensitive information
Experience initiating and completing analytical projects with minimal guidance
Experience communicating results of analysis using compelling and persuasive oral and written storytelling techniques
Hands-on experience working with large datasets, statistical software packages (e.g., R, Python), and data visualization tools such as Tableau and Power BI
Experience with ETL processes, writing complex SQL queries, and data manipulation techniques
Experience in HR analytics is a nice-to-have

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers: With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility.
For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit https://careers.bms.com/california-residents/ for important additional information. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
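The EDA and data-quality work the HR Data Analyst posting describes can be sketched in miniature. The dataset and field names below are invented for illustration; real work would use pandas, Tableau, or Power BI as the posting lists, but the steps (profile missing values, then summarize a measure by group) are the same.

```python
from collections import defaultdict
from statistics import mean, stdev

# Toy HR dataset (hypothetical fields) standing in for the kind of data an
# HR analyst would profile before building dashboards on it.
rows = [
    {"dept": "R&D", "tenure_years": 3.0, "engagement": 4.1},
    {"dept": "R&D", "tenure_years": 5.5, "engagement": 3.8},
    {"dept": "HR",  "tenure_years": 2.0, "engagement": 4.5},
    {"dept": "HR",  "tenure_years": 8.0, "engagement": None},  # missing value
]

# Data-quality check: count missing values per field.
missing = defaultdict(int)
for r in rows:
    for field, value in r.items():
        if value is None:
            missing[field] += 1
print(dict(missing))  # {'engagement': 1}

# Group-by summary: mean and sample standard deviation of tenure per department.
by_dept = defaultdict(list)
for r in rows:
    by_dept[r["dept"]].append(r["tenure_years"])
summary = {d: (round(mean(v), 2), round(stdev(v), 2)) for d, v in by_dept.items()}
print(summary)  # {'R&D': (4.25, 1.77), 'HR': (5.0, 4.24)}
```

The same profile-then-aggregate pattern is a one-liner in pandas (`df.isna().sum()` and `df.groupby("dept")["tenure_years"].agg(["mean", "std"])`).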
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for developing and deploying machine learning algorithms. Evaluates accuracy and functionality of machine learning algorithms. Translates application requirements into machine learning problem statements. Analyzes and evaluates solutions both internally generated as well as third party supplied. Develops novel ways to use machine learning to solve problems and discover new products. Has in-depth experience, knowledge and skills in own discipline. Usually determines own work priorities. Acts as resource for colleagues with less experience. Job Description About the Role: We are seeking an experienced Data Scientist to join our growing Operational Intelligence team. You will play a key role in building intelligent systems that help reduce alert noise, detect anomalies, correlate events, and proactively surface operational insights across our large-scale streaming infrastructure. You’ll work at the intersection of machine learning, observability, and IT operations, collaborating closely with Platform Engineers, SREs, Incident Managers, Operators and Developers to integrate smart detection and decision logic directly into our operational workflows. This role offers a unique opportunity to push the boundaries of AI/ML in large-scale operations. 
We welcome curious minds who want to stay ahead of the curve, bring innovative ideas to life, and improve the reliability of streaming infrastructure that powers millions of users globally.

What You’ll Do
Design and tune machine learning models for event correlation, anomaly detection, alert scoring, and root cause inference
Engineer features to enrich alerts using service relationships, business context, change history, and topological data
Apply NLP and ML techniques to classify and structure logs and unstructured alert messages
Develop and maintain real-time and batch data pipelines to process alerts, metrics, traces, and logs
Use Python, SQL, and time-series query languages (e.g., PromQL) to manipulate and analyze operational data
Collaborate with engineering teams to deploy models via API integrations, automate workflows, and ensure production readiness
Contribute to the development of self-healing automation, diagnostics, and ML-powered decision triggers
Design and validate entropy-based prioritization models to reduce alert fatigue and elevate critical signals
Conduct A/B testing, offline validation, and live performance monitoring of ML models
Build and share clear dashboards, visualizations, and reporting views to support SREs, engineers and leadership
Participate in incident postmortems, providing ML-driven insights and recommendations for platform improvements
Collaborate on the design of hybrid ML + rule-based systems to support dynamic correlation and intelligent alert grouping
Lead and support innovation efforts, including POCs, POVs, and exploration of emerging AI/ML tools and strategies
Demonstrate a proactive, solution-oriented mindset with the ability to navigate ambiguity and learn quickly
Participate in on-call rotations and provide operational support as needed

Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Statistics or a related field
5+ years of experience building and deploying ML solutions in production environments
2+ years working with AIOps, observability, or real-time operations data
Strong coding skills in Python (including pandas, NumPy, scikit-learn, PyTorch, or TensorFlow)
Experience working with SQL, time-series query languages (e.g., PromQL), and data transformation in pandas or Spark
Familiarity with LLMs, prompt engineering fundamentals, or embedding-based retrieval (e.g., sentence-transformers, vector DBs)
Strong grasp of modern ML techniques, including gradient boosting (XGBoost/LightGBM), autoencoders, clustering (e.g., HDBSCAN), and anomaly detection
Experience managing structured and unstructured data, and building features from logs, alerts, metrics, and traces
Familiarity with real-time event processing using tools like Kafka, Kinesis, or Flink
Strong understanding of model evaluation techniques, including precision/recall trade-offs, ROC, AUC, and calibration
Comfortable working with relational (PostgreSQL), NoSQL (MongoDB), and time-series (InfluxDB, Prometheus) databases
Ability to collaborate effectively with SREs and platform teams, and participate in Agile/DevOps workflows
Clear written and verbal communication skills to present findings to technical and non-technical stakeholders
Comfortable working across Git, Confluence, JIRA, and collaborative agile environments

Nice To Have
Experience building or contributing to an AIOps platform (e.g., Moogsoft, BigPanda, Datadog, Aisera, Dynatrace, BMC, etc.)
Experience working in streaming media, OTT platforms, or large-scale consumer services
Exposure to Infrastructure as Code (Terraform, Pulumi) and modern cloud-native tooling
Working experience with Conviva, Touchstream, Harmonic, New Relic, Prometheus, and event-based alerting tools
Hands-on experience with LLMs in operational contexts (e.g., classification of alert text, log summarization, retrieval-augmented generation)
Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) and embeddings-based search for observability data
Experience using MLflow, SageMaker, or Airflow for ML workflow orchestration
Knowledge of LangChain, Haystack, RAG pipelines, or prompt templating libraries
Exposure to MLOps practices (e.g., model monitoring, drift detection, explainability tools like SHAP or LIME)
Experience with containerized model deployment using Docker or Kubernetes
Use of JAX, Hugging Face Transformers, or LLaMA/Claude/Command-R models in experimentation
Experience designing APIs in Python or Go to expose models as services
Cloud proficiency in AWS/GCP, especially for distributed training, storage, or batch inferencing
Contributions to open-source ML or DevOps communities, or participation in AIOps research/benchmarking efforts
Certifications in cloud architecture, ML engineering, or data science specializations
Creates documentation such as Confluence pages, white papers, presentations, test results, technical manuals, formal recommendations and reports. Contributes to the company by creating patents and Application Programming Interfaces (APIs).

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work.
Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 5-7 Years
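The "entropy-based prioritization models" mentioned in the Comcast role have many possible realizations; one simple, hypothetical interpretation (not Comcast's actual method) scores an alert source by the Shannon entropy of its message distribution, so a source that endlessly repeats one alert ranks as low-information noise while a source emitting varied messages ranks higher:

```python
import math
from collections import Counter

def message_entropy(messages):
    """Shannon entropy (in bits) of the distribution of alert messages.

    A source repeating a single message has entropy 0 (likely noise to
    suppress); a source with diverse messages scores higher and may
    deserve an operator's attention sooner.
    """
    counts = Counter(messages)
    total = sum(counts.values())
    h = sum((c / total) * math.log2(c / total) for c in counts.values())
    return -h if h else 0.0  # avoid returning -0.0 for the degenerate case

# Hypothetical alert streams from two services.
noisy = ["disk 90% full"] * 50  # one alert repeated 50 times
varied = ["disk 90% full", "oom killed", "tls expiry", "latency spike"]

print(message_entropy(noisy))              # 0.0
print(round(message_entropy(varied), 3))   # 2.0 (four equally likely messages)
```

In a real pipeline this score would be one feature among many (service criticality, change history, topology) feeding the alert-ranking model, not the whole prioritization logic.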
Posted 2 days ago
8.0 years
20 - 40 Lacs
India
On-site
Role: Senior Graph Data Engineer (Neo4j & AI Knowledge Graphs)
Experience: 8+ years
Type: Contract

We’re hiring a Graph Data Engineer to design and implement advanced Neo4j-powered knowledge graph systems for our next-gen AI platform. You'll work at the intersection of data engineering, AI/ML, and financial services, helping build the graph infrastructure that powers semantic search, investment intelligence, and automated compliance for venture capital and private equity clients. This role is ideal for engineers who are passionate about graph data modeling, Neo4j performance, and enabling AI-enhanced analytics through structured relationships.

What You'll Do
Design Knowledge Graphs: Build and maintain Neo4j graph schemas modeling complex fund administration relationships: investors, funds, companies, transactions, legal docs, etc.
Graph-AI Integration: Work with GenAI teams to power RAG systems, semantic search, and graph-enhanced NLP pipelines.
ETL & Data Pipelines: Develop scalable ingestion pipelines from sources like FundPanel.io, legal documents, and external market feeds using Python, Spark, or Kafka.
Optimize Graph Performance: Craft high-performance Cypher queries, leverage APOC procedures, and tune for real-time analytics.
Graph Algorithms & Analytics: Implement algorithms for fraud detection, relationship scoring, compliance, and investment pattern analysis.
Secure & Scalable Deployment: Implement clustering, backups, and role-based access on Neo4j Aura or containerized environments.
Collaborate Deeply: Partner with AI/ML, DevOps, data architects, and business stakeholders to translate use cases into scalable graph solutions.

What You Bring
7+ years in software/data engineering; 2+ years in Neo4j and Cypher.
Strong experience in graph modeling, knowledge graphs, and ontologies.
Proficiency in Python, Java, or Scala for graph integrations.
Experience with graph algorithms (PageRank, community detection, etc.).
Hands-on with ETL pipelines, Kafka/Spark, and real-time data ingestion.
Cloud-native experience (Neo4j Aura, Azure, Docker/K8s).
Familiarity with fund structures, LP/GP models, or financial/legal data is a plus.
Strong understanding of AI/ML pipelines, especially graph-RAG and embeddings.

Use Cases You'll Help Build
AI Semantic Search over fund documents and investment entities.
Investment Network Analysis for GPs, LPs, and portfolio companies.
Compliance Graphs modeling fund terms and regulatory checks.
Document Graphs linking LPAs, contracts, and agreements.
Predictive Investment Models enhanced by graph relationships.

Skills: Java, machine learning, Spark, Apache Spark, Neo4j Aura, AI, Azure, cloud-native technologies, data, AI/ML pipelines, Scala, Python, Cypher, graphs, AI knowledge graphs, graph data modeling, APOC procedures, semantic search, ETL pipelines, data engineering, Neo4j, ETL, Cypher query, pipelines, graph schema, Kafka, Kafka Streams, graph algorithms
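Of the graph algorithms this posting names, PageRank is the most self-contained to sketch. Below is a minimal, dependency-free power-iteration version over a hypothetical investor/fund edge list (the node names are invented; in production this would run via Neo4j's Graph Data Science library rather than pure Python):

```python
def pagerank(edges, damping=0.85, iters=50):
    """Minimal PageRank by power iteration over a directed edge list."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [] for n in nodes}
    for src, dst in edges:
        out[src].append(dst)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for src in nodes:
            if out[src]:
                # Each node shares its damped rank equally among out-links.
                share = damping * rank[src] / len(out[src])
                for dst in out[src]:
                    nxt[dst] += share
            else:
                # Dangling node: spread its damped rank over all nodes.
                for n in nodes:
                    nxt[n] += damping * rank[src] / len(nodes)
        rank = nxt
    return rank

# Hypothetical fund-administration graph: edges point investor -> fund,
# plus a fund-of-funds link FundA -> FundB.
edges = [("LP1", "FundA"), ("LP2", "FundA"), ("LP3", "FundB"), ("FundA", "FundB")]
ranks = pagerank(edges)
print(max(ranks, key=ranks.get))  # FundB accumulates the most rank
```

In the relationship-scoring use cases above, a score like this would flag the entities most central to the investment network.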
Posted 2 days ago
0.0 - 5.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
We are hiring a Senior Software Development Engineer for our platform. We are helping enterprises and service providers build their AI inference platforms for end users. As a Senior Software Engineer, you will take ownership of backend-heavy, full-stack feature development: building robust services, scalable APIs, and intuitive frontends that power the user experience. You’ll contribute to the core of our enterprise-grade AI platform, collaborating across teams to ensure our systems are performant, secure, and built to last. This is a high-impact, high-visibility role working at the intersection of AI infrastructure, enterprise software, and developer experience.

Responsibilities:
Design, develop and maintain databases, system APIs, system integrations, machine learning pipelines and web user interfaces.
Scale algorithms designed by data scientists for deployment in high-performance environments.
Develop and maintain continuous integration pipelines to deploy the systems.
Design and implement scalable backend systems using Golang, C++, and Python.
Model and manage data using relational databases (e.g., PostgreSQL, MySQL).
Build frontend components and interfaces using TypeScript and JavaScript when needed.
Participate in system architecture discussions and contribute to design decisions.
Write clean, idiomatic, and well-documented Go code following best practices and design patterns.
Ensure high code quality through unit testing, automation, code reviews, and documentation.
Communicate technical concepts clearly to both technical and non-technical stakeholders.

Qualifications and Criteria:
5-10 years of professional software engineering experience building enterprise-grade platforms.
Deep proficiency in Golang, with real-world experience building production-grade systems.
Solid knowledge of software architecture, design patterns, and clean code principles.
Experience in high-level system design and building distributed systems.
Expertise in Python and backend development, with experience in PostgreSQL or similar databases.
Hands-on experience with unit testing, integration testing, and TDD in Go.
Strong debugging, profiling, and performance optimization skills.
Excellent communication and collaboration skills.
Hands-on experience with frontend development using JavaScript, TypeScript, and HTML/CSS.
Bachelor's degree or equivalent experience in a quantitative field (Computer Science, Statistics, Applied Mathematics, Engineering, etc.).

Skills:
Understanding of optimisation, predictive modelling, machine learning, clustering and classification techniques, and algorithms.
Fluency in a programming language (e.g. C++, Go, Python, JavaScript, TypeScript, SQL).
Docker, Kubernetes, and Linux knowledge are an advantage.
Experience using Git.
Knowledge of continuous integration (e.g. GitLab/GitHub).
Basic familiarity with relational databases, preferably PostgreSQL.
Strong grounding in applied mathematics.
A firm understanding of and experience with the engineering approach.
Ability to interact with other team members via code and design documents.
Ability to work on multiple tasks simultaneously.
Ability to work in high-pressure environments and meet deadlines.

Compensation: Commensurate with experience
Position Type: Full-time (in house)
Location: Ahmedabad / Jamnagar, Gujarat, India

Submission Requirements: CV and all academic transcripts. Submit to chintanit22@gmail.com, dipakberait@gmail.com with the name of the position you wish to apply for in the subject line.

Job Type: Full-time
Pay: From ₹40,000.00 per month
Benefits: Paid sick time
Location Type: In-person
Schedule: Day shift, Monday to Friday
Experience: Full-stack development: 5 years (Preferred)
Work Location: In person
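Among the techniques this posting's skills list mentions, clustering is the easiest to illustrate concretely. This is a deliberately naive k-means sketch for k=2 in pure Python (toy data and toy initialization, invented for illustration; real code would use scikit-learn with k-means++ seeding):

```python
def kmeans2(points, iters=10):
    """Plain k-means for k=2 on 2-D points: assign each point to its
    nearest centroid, then recompute centroids as cluster means."""
    # Naive init: first and last point (fine for this toy data;
    # real implementations use k-means++ seeding).
    centroids = [points[0], points[-1]]
    for _ in range(iters):
        clusters = ([], [])
        for p in points:
            d0 = (p[0] - centroids[0][0]) ** 2 + (p[1] - centroids[0][1]) ** 2
            d1 = (p[0] - centroids[1][0]) ** 2 + (p[1] - centroids[1][1]) ** 2
            clusters[0 if d0 <= d1 else 1].append(p)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]  # keep old centroid if a cluster empties
            for i, c in enumerate(clusters)
        ]
    return centroids

# Two well-separated blobs; k-means should place one centroid near each.
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
          (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
c0, c1 = kmeans2(points)
print(c0, c1)  # roughly (0.1, 0.1) and (5.0, 5.03)
```

On well-separated data like this, the assignment stabilizes after the first iteration; the interesting engineering problems the role describes start when the data is high-dimensional and the point count is large.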
Posted 2 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
Remote
Role Description
Role Proficiency: Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution, and/or provide mentorship (hierarchical or lateral) to junior associates.

Outcomes
1) Update SOPs with updated troubleshooting instructions and process changes
2) Mentor new team members in understanding customer infrastructure and processes
3) Perform analysis for driving incident reduction
4) Escalate high-priority incidents to customer and organization stakeholders for quicker resolution
5) Contribute to planning and successful migration of platforms
6) Resolve enterprise trouble tickets within agreed SLA and raise problem tickets for permanent resolution
7) Provide inputs for root cause analysis after major incidents to define preventive and corrective actions

Measures Of Outcomes
1) SLA adherence
2) Time-bound resolution of elevated tickets - OLA
3) Manage ticket backlog timelines - OLA
4) Adhere to defined process - number of NCs in internal/external audits
5) Number of KB articles created
6) Number of incidents and change tickets handled
7) Number of elevated tickets resolved
8) Number of successful change tickets
9) % completion of all mandatory training requirements

Outputs Expected:
Resolution: Understand priority and severity based on ITIL practice; resolve trouble tickets within the agreed resolution SLA; execute change control tickets as documented in the implementation plan.
Troubleshooting: Troubleshoot based on available information from previous tickets or by consulting with seniors; participate in online knowledge forums for reference; convert new steps into KB articles; perform logical/analytical troubleshooting.
Escalation/Elevation: Escalate within the organization or to customer peers in case of resolution delay; understand the OLA between delivery layers (L1, L2, L3, etc.) and adhere to it.
Elevate to the next level; work on elevated tickets from L1.
Tickets Backlog/Resolution: Follow up on tickets based on agreed timelines; manage ticket backlogs and last activity as per the defined process; resolve incidents and SRs within agreed timelines; execute change tickets for infrastructure.
Installation: Install and configure tools, software and patches.
Runbook/KB: Update the KB with new findings; document and record troubleshooting steps in the knowledge base.
Collaboration: Collaborate with different towers of delivery for ticket resolution within SLA; resolve L1 tickets with help from the respective tower; collaborate with other team members for timely resolution of tickets; actively participate in team and organization-wide initiatives; coordinate with UST ISMS teams for resolving connectivity-related issues.
Stakeholder Management: Lead customer calls and vendor calls; organize meetings with different stakeholders; take ownership of the function's internal communications and related change management.
Strategic: Define the strategy on data management, policy management and data retention management; support definition of the IT strategy for the function's relevant scope and be accountable for ensuring the strategy is tracked, benchmarked and updated for the area owned.
Process Adherence: Maintain a thorough understanding of organization- and customer-defined processes; suggest process improvements and CSI ideas; adhere to the organization's policies and business conduct.
Process/Efficiency Improvement: Proactively identify opportunities to increase service levels and mitigate any issues in service delivery within the function or across functions; take accountability for overall productivity efforts within the function, including coordination of function-specific tasks and close collaboration with Finance.
Process Implementation: Coordinate and monitor IT process implementation within the function.
Compliance: Support information governance activities and audit preparations within the function.
Act as the function SPOC for IT audits in local sites (including preparation, interfacing with the local organization, and mitigation of findings) and work closely with ISRM (Information Security Risk Management). Coordinate overall objective setting, preparation, and facilitation in order to achieve consistent objective setting in the function.
Coordination: Support CSI across all services in CIS and beyond.
Training: On-time completion of all mandatory training requirements of the organization and customer; provide on-floor training and one-to-one mentorship for new joiners; complete certification for the respective career path.
Performance Management: Update FAST goals in NorthStar, track and report them, and seek continuous feedback from peers and managers; set goals for team members and mentees and provide feedback; assist new team members in understanding the customer environment.

Skill Examples
1) Good communication skills (written, verbal and email etiquette) to interact with different teams and customers
2) Modify/create runbooks based on suggested changes from juniors or newly identified steps
3) Ability to work on and resolve elevated server tickets
4) Networking:
a. Troubleshooting skills in static and dynamic routing protocols
b. Should be capable of running NetFlow analyzers in different product lines
5) Server:
a. Skills in installing and configuring Active Directory, DNS, DHCP, DFS, IIS and patch management
b. Excellent troubleshooting skills in various technologies like AD replication, DNS issues, etc.
c. Skills in managing high-availability solutions like failover clustering, VMware clustering, etc.
6) Storage and Backup:
a. Ability to give recommendations to customers; perform storage and backup enhancements; perform change management
b. Skilled in core fabric technology, storage design and implementation; hands-on experience with backup and storage command-line interfaces
Perform Hardware upgrades firmware upgrades Vulnerability remediation storage and backup commissioning and de-commissioning replication setup and management.d. Skilled in server Network and virtualization technologies. Integration of virtualization storage and backup technologiese. Review the technical diagrams architecture diagrams and modify the SOP and documentations based on business requirements.f. Ability to perform the ITSM functions for storage & backup team and review the quality of ITSM process followed by the team.7) Cloud:a. Skilled in any one of the cloud technologies - AWS Azure GCP.8) Tools:a. Skilled in administration and configuration of monitoring tools like CA UIM SCOM Solarwinds Nagios ServiceNow etcb. Skilled in SQL scriptingc. Skilled in building Custom Reports on Availability and performance of IT infrastructure building based on the customer requirements9) Monitoring:a. Skills in monitoring of infrastructure and application components10) Database:a. Data modeling and database design Database schema creation and managementb. Identify the data integrity violations so that only accurate and appropriate data is entered and maintained.c. Backup and recoveryd. Web-specific tech expertise for e-Biz Cloud etc. Examples of this type of technology include XML CGI Java Ruby firewalls SSL and so on.e. Migrating database instances to new hardware and new versions of software from on premise to cloud based databases and vice versa.11) Quality Analysis: a. Ability to drive service excellence and continuous improvement within the framework defined by IT Operations Knowledge Examples Good understanding of customer infrastructure and related CIs. 2) ITIL Foundation certification3) Thorough hardware knowledge 4) Basic understanding of capacity planning5) Basic understanding of storage and backup6) Networking:a. Hands-on experience in Routers and switches and Firewallsb. Should have minimum knowledge and hands-on with BGPc. 
Good understanding in Load balancers and WAN optimizersd. Advance back and restore knowledge in backup tools7) Server:a. Basic to intermediate powershell / BASH/Python scripting knowledge and demonstrated experience in script based tasksb. Knowledge of AD group policy management group policy tools and troubleshooting GPO sc. Basic AD object creation DNS concepts DHCP DFSd. Knowledge with tools like SCCM SCOM administration8) Storage and Backup:a. Subject Matter Expert in any of the Storage & Backup technology9) Tools:a. Proficient in the understanding and troubleshooting of Windows and Linux family of operating systems10) Monitoring:a. Strong knowledge in ITIL process and functions11) Database:a. Knowledge in general database management b. Knowledge in OS System and networking skills Additional Comments Role - Cloud Engineer Primary Responsibilities Engineer and support a portfolio of tools including: o HashiCorp Vault (HCP Dedicated), Terraform (HCP), Cloud Platform o GitHub Enterprise Cloud (Actions, Advanced Security, Copilot) o Ansible Automation Platform, Env0, Docker Desktop o Elastic Cloud, Cloudflare, Datadog, PagerDuty, SendGrid, Teleport Manage infrastructure using Terraform, Ansible, and scripting languages such as Python and PowerShell Enable security controls including dynamic secrets management, secrets scanning workflows, and cloud access quotas Design and implement automation for self-service adoption, access provisioning, and compliance monitoring Respond to user support requests via ServiceNow and continuously improve platform support documentation and onboarding workflows Participate in Agile sprints, sprint planning, and cross-team technical initiatives Contribute to evaluation and onboarding of new tools (e.g., remote developer access, artifact storage) Key Projects You May Lead or Support GitHub secrets scanning and remediation with integration to HashiCorp Vault Lifecycle management of developer access across tools like GitHub and Teleport 
Upgrades to container orchestration environments and automation platforms (EKS, AKS) Technical Skills and Experience Proficiency with Terraform (IaC) and Ansible Strong scripting experience in Python, PowerShell, or Bash Experience operating in cloud environments (AWS, Azure, or GCP) Familiarity with secure development practices and DevSecOps tooling Exposure to or experience with: o CI/CD automation (GitHub Actions) o Monitoring and incident management platforms (Datadog, PagerDuty) o Identity providers (AzureAD, Okta) o Containers and orchestration (Docker, Kubernetes) o Secrets management and vaulting platforms Soft Skills and Attributes Strong cross-functional communication skills with technical and non-technical stakeholders Ability to work independently while knowing when to escalate or align with other engineers or teams. Comfort managing complexity and ambiguity in a fast-paced environment Ability to balance short-term support needs with longer-term infrastructure automation and optimization. Proactive, service-oriented mindset focused on enabling secure and scalable development Detail-oriented, structured approach to problem-solving with an emphasis on reliability and repeatability. Skills Terraform,Ansible,Python,PowershellorBash,AWS,AzureorGCP,CI/CDautomation
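The secrets scanning workflows named above can be sketched minimally in Python. The patterns below are illustrative assumptions, not the rule set any real scanner (e.g., GitHub secret scanning) ships with, and a production workflow would add validity checks and remediation via Vault:

```python
import re

# Hypothetical example patterns only; real scanners maintain per-provider
# rules and verify candidate tokens rather than relying on regexes alone.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (pattern_name, match) pairs for anything that looks like a secret."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

In a real pipeline this kind of check would run in CI (e.g., a GitHub Actions step) over each diff, with hits rotated into a vaulting platform.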
Posted 2 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At PDI Technologies, we empower some of the world's leading convenience retail and petroleum brands with cutting-edge technology solutions that drive growth and operational efficiency. By “Connecting Convenience” across the globe, we empower businesses to increase productivity, make more informed decisions, and engage faster with customers through loyalty programs, shopper insights, and unmatched real-time market intelligence via mobile applications, such as GasBuddy. We’re a global team committed to excellence, collaboration, and driving real impact. Explore our opportunities and become part of a company that values diversity, integrity, and growth.

Role Overview
The Hosted Operations Department maintains and manages internal and hosted hardware, software, and systems. This position is a Systems Administrator for internal and hosted systems.

Key Responsibilities
Update tickets by gathering information and defining the ticket urgency.
Complete assigned tasks and tickets by following more involved documented procedures than the Tier I Support team, including staging, installation, configuration, upgrade, and maintenance of software and of physical and virtual servers and workstations.
Research, develop, document, and provide training on short procedures for Tier I Support and fellow Systems Administrators.
Communicate with professionalism and succinctly include relevant information when handing off tickets to the appropriate Line of Business, Infrastructure Engineering, and Systems Engineering teams.
Troubleshoot ticketed problems, including coordinating with vendor support teams.
Take on-call shifts for after-hours escalation of urgent alerts, tickets, and issues.
Define, submit, and complete Change Requests.
Qualifications
Experience: 4 years of systems administration experience or equivalent education
Skills:
• Exceptional verbal and written communication skills in English
• Ability to communicate with internal and external customers with courtesy and professionalism
• Strong ability to understand and troubleshoot servers and systems
• Detail oriented, with strong organizational skills
• Ability to work independently and be comfortable in an environment of rapid growth and consistent change
• Must be able to maintain and handle shared documentation confidentially
• Ability to thrive in an open, collaborative, and team-based culture
• Ability to work cross-functionally to meet the needs of our internal customers
• Good general understanding of networking
• Good knowledge and understanding of Windows Server and Active Directory, with the ability to monitor and troubleshoot issues
• Good knowledge and understanding of Windows Clustering, with the ability to monitor and troubleshoot issues
• Good knowledge and understanding of Windows backup solutions; prior use of Veeam would be desirable
• Good knowledge and understanding of Hyper-V, with the ability to monitor and troubleshoot issues
• Prior experience administering SAN solutions
• Good understanding of Windows Certificate Manager would be desirable
• Ability to create and debug PowerShell scripts

Behavioral Competencies
Ensures Accountability
Manages Complexity
Communicates Effectively
Balances Stakeholders
Collaborates Effectively

PDI is committed to offering a well-rounded benefits program, designed to support and care for you and your family throughout your life and career. This includes a competitive salary, market-competitive benefits, and a quarterly perks program. We encourage a good work-life balance with ample time away and, where appropriate, hybrid working arrangements. Employees have access to continuous learning, professional certifications, and leadership development opportunities.
Our global culture fosters diversity, inclusion, and values authenticity, trust, curiosity, and diversity of thought, ensuring a supportive environment for all.
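Much of the monitoring and troubleshooting this role describes starts with simple script-based checks. As one illustration (the posting asks for PowerShell; the same idea in Python, with host and port being whatever service the administrator is diagnosing):

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """True if a TCP connection to host:port succeeds within `timeout` seconds.

    A minimal reachability probe of the kind used when triaging whether a
    service outage is network-level or application-level.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A sysadmin might loop this over a host list from the CMDB and raise a ticket for anything unreachable; the function itself stays the same.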
Posted 2 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
" Responsibilities Role : Senior Associate Exp : 3-6 Years Location: Mumbai Job Description: Candidate with 3-5 years of exp and a strong background in machine learning, technical expertise, and domain knowledge in Banking, Financial Services, and Insurance (BFSI). Experience with Generative AI (GenAI) is a must have. Key Responsibilities: Collaborate with clients to understand their business needs and provide data-driven solutions. Develop and implement machine learning models to solve complex business problems. Analyze large datasets to extract actionable insights and drive decision-making. Present findings and recommendations to stakeholders in a clear and concise manner. Stay updated with the latest trends and advancements in data science and machine learning. GenAI Experience: Generative AI (GenAI) experience, including working with models like GPT, BERT, and other transformer-based architectures Ability to leverage GenAI for tasks such as text generation, summarization, and conversational AI Experience in developing and deploying GenAI solutions to enhance business processes and customer experiences Technical Skills: Programming Languages: Proficiency in Python, R, and SQL for data manipulation, analysis, and model development. Machine Learning Frameworks: Extensive experience with TensorFlow, PyTorch, and Scikit-learn for building and deploying models. Data Visualization Tools: Strong knowledge of Tableau, Power BI, and Matplotlib to create insightful visualizations. Cloud Platforms: Expertise in AWS, Azure, and Google Cloud for scalable and efficient data solutions. Database Management: Proficiency in SQL and NoSQL databases for data storage and retrieval. Version Control: Experience with Git for collaborative development and code management. APIs and Web Services: Ability to integrate and utilize APIs for data access and model deployment. 
Machine Learning Algorithms:
Supervised and Unsupervised Learning
Regression Analysis
Classification Techniques
Clustering Algorithms
Natural Language Processing (NLP)
Time Series Analysis
Deep Learning
Reinforcement Learning
Qualifications:
Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field
3-5 years of relevant experience in data science and machine learning
Strong analytical and problem-solving skills
Excellent communication and presentation abilities
Ability to work independently and as part of a team
Mandatory Skill Sets: GenAI / BFSI / Data Visualization
Preferred Skill Sets: GenAI / BFSI / Data Visualization
Years of Experience Required: 3-6 years
Education Qualification: B.E. (B.Tech) / M.Tech
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Bachelor of Engineering, Master of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Generative AI
Optional Skills: Accepting Feedback, Active Listening, AI Implementation, C++ Programming Language, Communication, Complex Data Analysis, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Machine Learning, Machine Learning Libraries, Named Entity Recognition, Natural Language Processing (NLP), Natural Language Toolkit (NLTK) {+ 20 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Up to 40%
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
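As a toy illustration of the supervised classification techniques listed above, a nearest-centroid classifier can be written with the standard library alone. Real client work would use scikit-learn or TensorFlow as the posting notes; this sketch only shows the fit/predict shape of the problem:

```python
from statistics import mean

def fit_centroids(X, y):
    """Compute the per-class mean (centroid) of the training feature vectors."""
    classes = sorted(set(y))
    return {
        c: [mean(x[i] for x, label in zip(X, y) if label == c)
            for i in range(len(X[0]))]
        for c in classes
    }

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: dist2(centroids[c], x))
```

The same two-step interface (fit on labeled data, predict on new points) carries over directly to the library-based models named in the technical skills section.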
Posted 2 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…
When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands, are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next… From brilliant creatives, to technology trailblazers, across the globe, WBD offers career defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive.

Your New Role
This position will join the Enterprise Data and AI team that supports all brands under the Warner Bros. umbrella, including WB films in theatrical and home entertainment, DC Studios, Consumer Products, games, etc. The ideal candidate is a subject matter expert in data science with exposure to predictive modeling, forecasting, recommendation engines, and data analytics. This person will build data pipelines, apply statistical modeling and machine learning, and deliver meaningful insights about customers, products, and business strategy at WBD to drive data-based decisions.

Responsibilities
As a Staff Data Scientist, you will play a critical role in advancing data-driven solutions to complex business challenges, influencing data strategy efforts for WBD businesses. The responsibilities include:
Analyze complex, high volumes of data from various sources using various tools and data analytics techniques.
Partner with stakeholders to understand business questions and provide answers using the most appropriate mathematical techniques.
Model Development and Implementation: Design, develop, and implement statistical models, predictive models, and machine learning algorithms that inform strategic decisions across various business units.
Exploratory Data Analysis: Utilize exploratory data analysis techniques to identify and investigate new opportunities through innovative analytical and engineering methods.
Advanced Analytics Solutions: Collaborate with Product and Business stakeholders to understand business challenges and develop sophisticated analytical solutions.
Data Automation: Advance automation initiatives that reduce the time spent on data preparation, enabling more focus on strategic analysis.
Innovative Frameworks Construction: Develop and enhance frameworks that improve productivity and are intuitive for adoption across other data teams, and stay abreast of innovative machine learning techniques (e.g., deep learning, reinforcement learning, ensemble methods) and emerging AI technologies to stay ahead of industry trends.
Collaborate with data engineering teams to architect and scale robust, efficient data pipelines capable of handling large, complex datasets, ensuring the smooth and automated flow of data from raw collection to insights generation.
Deploy machine learning models into production environments, collaborating with DevOps and engineering teams to ensure smooth integration and scalability.
Quality Assurance: Implement robust systems to detect, alert on, and rectify data anomalies.

Qualifications & Experiences
Bachelor's degree, MS, or greater in Computer/Data Science, Engineering, Mathematics, Statistics, or a related quantitative discipline.
8+ years of relevant experience in Data Science.
Expertise in a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, random forests, deep learning, etc.) and experience with applications of these techniques.
Expertise in advanced statistical techniques and concepts (regressions, statistical tests, etc.) and experience with application of these tools.
A demonstrated track record of utilizing data science to solve business problems in a professional environment.
Expertise in SQL and either Python or R, including experience with application deployment packages such as Streamlit or R Shiny.
Experience with database technologies such as Databricks, Snowflake, and others.
Familiarity with BI tools (Power BI, Looker, Tableau) and experience managing workflows in an Agile environment.
Strong analytical and problem-solving abilities.
Excellent communication skills to effectively convey complex data-driven insights to stakeholders.
High attention to detail and the capability to work independently while managing multiple priorities under tight deadlines.
Proficiency in big data technologies (e.g., Spark, Kafka, Hive).
Experience working in a cloud environment (AWS, Azure, GCP) to facilitate data solutions.
Ability to collaborate effectively with business partners and develop and maintain productive professional relationships.
Experience with adhering to established data management practices and standards.
Ability to communicate with all levels of the business, prioritize and manage assignments to meet deadlines, and establish strong relationships.
Interest in movies, games, and comics is a plus.

How We Get Things Done…
This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences.
Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
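The quality-assurance responsibility in this role (detecting data anomalies) can be sketched with a simple z-score rule. Production systems would use more robust statistics and alerting, but the core idea is:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return the indices of values whose z-score magnitude exceeds `threshold`.

    A minimal anomaly detector: flag points far from the sample mean in units
    of the sample standard deviation. Threshold 3.0 is a common default.
    """
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # constant series: nothing can be an outlier by this rule
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

In a pipeline, the flagged indices would feed an alerting step (the "detect, alert on, and rectify" loop described above) rather than being returned directly.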
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Company Overview
Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people’s lives. With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign’s Intelligent Agreement Management platform, companies can create, commit, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM).

What you'll do
You will play an important role in applying and implementing effective machine learning solutions, with a significant focus on Generative AI. You will work with product and engineering teams to contribute to data-driven product strategies, explore and implement GenAI applications, and deliver impactful insights. This position is an individual contributor role reporting to the Senior Manager, Data Science.
Responsibilities
Experiment with, apply, and implement DL/ML models, with a strong emphasis on Large Language Models (LLMs), agentic frameworks, and other Generative AI techniques, to predict user behavior, enhance product features, and improve automation.
Utilize and adapt various GenAI techniques (e.g., prompt engineering, RAG, fine-tuning existing models) to derive actionable insights, generate content, or create novel user experiences.
Collaborate with product, engineering, and other teams (e.g., Sales, Marketing, Customer Success) to build agentic systems to run campaigns at scale.
Conduct in-depth analysis of customer data, market trends, and user insights to inform the development and improvement of GenAI-powered solutions.
Partner with product teams to design, administer, and analyze the results of A/B and multivariate tests, particularly for GenAI-driven features.
Leverage data to develop actionable analytical insights and present findings, including the performance and potential of GenAI models, to stakeholders and team members.
Communicate models, frameworks (especially those related to GenAI), analyses, and insights effectively with stakeholders and business partners.
Stay updated on the latest advancements in Generative AI and propose their application to relevant business problems.
Complete assignments with a sense of urgency and purpose, identify and help resolve roadblocks, and collaborate with cross-functional team members on GenAI initiatives.

Job Designation
Hybrid: Employees divide their time between in-office and remote work. Access to an office location is required. (Frequency: minimum 2 days per week; may vary by team, but there will be a weekly in-office expectation.) Positions at Docusign are assigned a job designation of either In Office, Hybrid, or Remote, specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign.
Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.

What you bring
Basic:
Bachelor's or Master's degree in Computer Science, Physics, Mathematics, Statistics, or a related field
3+ years of hands-on experience in building data science applications and machine learning pipelines, with demonstrable experience in Generative AI projects
Experience with Python for research and software development purposes, including common GenAI libraries and frameworks
Experience with or exposure to prompt engineering and utilizing pre-trained LLMs (e.g., via APIs or open-source models)
Experience with large datasets, distributed computing, and cloud computing platforms (e.g., AWS, Azure, GCP)
Proficiency with relational databases (e.g., SQL)
Experience in training, evaluating, and deploying machine learning models in production environments, with an interest in MLOps for GenAI
Proven track record in contributing to ML/GenAI projects from ideation through to deployment and iteration
Experience using machine learning and deep learning algorithms like CatBoost, XGBoost, LightGBM, and feed-forward networks for classification, regression, and clustering problems, and an understanding of how these can complement GenAI solutions
Experience as a Data Scientist, ideally in the SaaS domain with some focus on AI-driven product features

Preferred:
PhD in Statistics, Computer Science, or Engineering with specialization in machine learning, AI, or statistics, with research or projects in Generative AI
5+ years of prior industry experience, with at least 1-2 years focused on GenAI applications
Previous experience applying data science and GenAI techniques to customer success, product development, or user experience optimization
Hands-on experience with fine-tuning LLMs or working with RAG methodologies
Experience with or knowledge of experimentation platforms (like DataRobot) and other AI-related ones (like CrewAI)
Experience with or knowledge of the software development lifecycle/agile methodology, particularly in AI product development
Experience with or knowledge of GitHub and JIRA/Confluence
Contributions to open-source GenAI projects or a portfolio of GenAI-related work
Programming languages like Python and SQL; familiarity with R
Strong knowledge of common machine learning, deep learning, and statistics frameworks and concepts, with a specific understanding of Large Language Models (LLMs), transformer architectures, and their applications
Ability to break down complex technical concepts (including GenAI) into simple terms for diverse, technical, and non-technical audiences

Life at Docusign
Working here: Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what’s right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you’ll be loved by us, our customers, and the world in which we live.

Accommodation
Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process please get in touch with our Talent organization at taops@docusign.com for assistance.

Applicant and Candidate Privacy Notice
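The retrieval step of the RAG methodologies mentioned above can be illustrated with a tiny TF-IDF retriever (standard library only; the documents and query are made up for the example). Real systems use embedding models and vector stores, and feed the retrieved passages to an LLM for generation:

```python
import math
from collections import Counter

def tfidf(docs):
    """Build TF-IDF vectors (as dicts) for whitespace-tokenized documents."""
    toks = [d.lower().split() for d in docs]
    n = len(toks)
    df = Counter(t for doc in toks for t in set(doc))  # document frequency
    return [
        {t: c * math.log(n / df[t]) for t, c in Counter(doc).items()}
        for doc in toks
    ]

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query (the 'R' in RAG)."""
    vecs = tfidf(docs + [query])
    qv, dvs = vecs[-1], vecs[:-1]
    ranked = sorted(range(len(docs)), key=lambda i: cosine(qv, dvs[i]),
                    reverse=True)
    return [docs[i] for i in ranked[:k]]
```

In an actual RAG pipeline the retrieved text would be inserted into the LLM prompt as grounding context; this sketch stops at retrieval.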
Posted 2 days ago
12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Summary
We are seeking an experienced Database Lead with a strong background in MS SQL Server (L4, architect level) and working knowledge of Oracle (L3). Experience in PostgreSQL will be considered a plus. This role demands excellent communication skills and proven experience in leading and mentoring database teams. You will be responsible for architecting, optimizing, and managing critical database systems that support enterprise-level applications.

Key Responsibilities
Lead the design, implementation, and maintenance of scalable and high-performing database solutions, primarily using MS SQL Server.
Provide architectural guidance on database design, performance tuning, and capacity planning.
Act as the subject matter expert (SME) for MS SQL Server at an architect level.
Support and maintain Oracle databases at the L3 support level.
Provide direction and recommendations on PostgreSQL if/when required.
Mentor and manage a team of 4+ database administrators, fostering collaboration and growth.
Establish best practices for database development, deployment, and maintenance.
Collaborate with cross-functional teams including development, infrastructure, and application support.
Ensure data integrity, security, and availability across all managed database platforms.
Participate in the on-call support rotation and manage incident resolution in a timely manner.

Required Skills & Qualifications
12+ years of overall experience in database administration and architecture.
MS SQL Server (L4 / architect level): extensive hands-on experience in architecture, clustering, replication, performance tuning, and high availability.
Oracle (L3 support level): solid experience in installation, backup & recovery, and performance optimization.
Exposure to PostgreSQL environments is a strong plus.
Strong understanding of database security, backup, and disaster recovery solutions.
4+ years of experience leading and mentoring teams.
Excellent verbal and written communication skills.
Ability to work in a fast-paced, collaborative environment
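Backup and disaster-recovery strategy is a core requirement of this role. As a self-contained illustration of the "back up, then verify the copy" discipline (SQLite stands in here; MS SQL Server and Oracle use native tooling such as BACKUP DATABASE or RMAN for the same goals):

```python
import sqlite3

def backup_database(src, dst):
    """Snapshot a live SQLite database into `dst` using the online backup API,
    which yields a consistent copy even while `src` is in use."""
    src.backup(dst)

def verify(dst):
    """Basic post-backup check: the copy passes SQLite's integrity check.
    The principle (never trust an unverified backup) is platform-independent."""
    return dst.execute("PRAGMA integrity_check").fetchone()[0] == "ok"
```

A real DR plan adds scheduling, off-site retention, and periodic restore drills on top of this copy-and-verify core.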
Posted 2 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Design, develop, and deploy AI/NLP solutions to solve diverse business challenges, particularly in areas like text classification, information extraction, summarization, and semantic search.
Conduct exploratory data analysis and feature engineering.
Contribute to development initiatives in the GenAI domain, focusing on cutting-edge technologies like Large Language Models, Retrieval-Augmented Generation, and autonomous agents.
Validate and monitor solution quality using real-world feedback data.
Work closely with ML engineers and DevOps teams to operationalize models (in cloud and on-prem environments).
Hands-on experience deploying solutions to cloud-native AI platforms (AWS/Azure/GCP).
Collaborate with clients and business stakeholders to scope and refine requirements, validate model behavior, and ensure successful deployment.
Explore and experiment with LLMs, prompt engineering, and retrieval-augmented generation (RAG) techniques for advanced use cases.
Contribute to building reusable components, best practices, and scalable frameworks for AI delivery.
Experience developing retrieval-augmented systems by combining LLMs with document retrieval, clustering, and search techniques.

Qualifications
3-6 years of hands-on experience in data science, with a focus on NLP, deep learning, and machine learning applications
Strong programming skills in Python; experience with relevant libraries such as scikit-learn, spaCy, NLTK, PyTorch, TensorFlow, or Hugging Face
Proven experience in delivering NLP/LLM-based solutions
Familiarity with cloud platforms (AWS, Azure, or GCP) and experience with deploying AI models to production
Ability to handle end-to-end ownership of solutions, from POC to deployment
Prior experience in consulting or client-facing data science roles is a plus
Exposure to document databases (e.g., MongoDB), graph databases, or vector databases (e.g., FAISS, Pinecone) is a bonus
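Information extraction, one of the areas named above, can be sketched with plain regular expressions. This is a hypothetical toy extractor; production NLP would use spaCy or transformer-based NER from the libraries listed in the qualifications:

```python
import re

# Deliberately narrow example patterns: one email shape, one ISO-8601 date.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
ISO_DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")

def extract_entities(text):
    """Pull structured fields out of free text (information extraction)."""
    return {
        "emails": EMAIL.findall(text),
        "dates": ISO_DATE.findall(text),
    }
```

The output dict is the point: unstructured text in, structured fields out, which is the same contract an NER model fulfils with far broader coverage.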
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Find your future at United! We’re reinventing what our industry looks like, and what an airline can be – from the planes we fly to the people who fly them. When you join us, you’re joining a global team of 100,000+ connected by a shared passion with a wide spectrum of experience and skills to lead the way forward. Achieving our ambitions starts with supporting yours. Evolve your career and find your next opportunity. Get the care you need with industry-leading health plans and best-in-class programs to support your emotional, physical, and financial wellness. Expand your horizons with travel across the world’s biggest route network. Connect outside your team through employee-led Business Resource Groups. Create what’s next with us. Let’s define tomorrow together. Job Overview And Responsibilities This position manages the engineering and administration of all on-prem SQL instances and databases including the security, availability, performance, and data protection for those databases. This position manages the off-hours patching and deployments for all Tier 1 thru Tier 5 SQL and Couchbase databases. Additionally, this position is responsible for AWS cloud migrations, support, and deployments. 
Off-hours support for all Tier 1 – Tier 5 SQL databases and instances
Create physical database structures based on physical design for development, test, and production environments
Coordinate with systems engineers to configure servers for DBMS product installation and database creation
Install, configure, and maintain DBMS product software on database and application servers
Assist in consultation to application development teams on DBMS product technical issues and techniques
Implement monitoring procedures to maximize availability and performance of the database while meeting defined SLAs
Investigate, troubleshoot, and resolve database problems
Communicate the required downtime with the application development teams and systems engineers to implement approved changes
Identify, define, and implement database backup/recovery and security strategies
Install and support DBMS (Database Management System) software and tools
Perform various database activities, including monitoring, tuning, and troubleshooting, with appropriate supervision if required
Review deployments for all SQL database changes
Complete pre-deployment code reviews with application teams as requested
Review and provide feedback on all SQL code updates
Work with deployment managers on dates and times for releases, including assignments
Patch all SQL Server and some Couchbase instances
Work with application teams to create patching schedules
Send advance and timely notifications for database instances to be patched
Conduct database patching, including any troubleshooting and validation post patching
Project management and engagement for database migrations
Database engineering and performance consultations
Work with application teams on current and new features such as partitioning, memory-optimized tables, Always-On availability groups, etc.
Provide diagnoses for performance issues
Table and index reviews
Data purging and job scheduling
This position is offered on local terms and conditions.
Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc.
Qualifications
What’s needed to succeed (Minimum Qualifications):
Bachelor's degree or 4 years of relevant work experience in Computer Science, Engineering, or related discipline
7+ years of experience
Proficient in SQL development and administration disciplines with current hands-on experience with the latest SQL Server releases, including SQL 2019, 2017, and 2016
Strong background and experience with all BC and DR capabilities of Microsoft SQL Server, including Always-On, Mirroring, Log Shipping, and Clustering, with a practical understanding of other infrastructure BC/DR capabilities
Leverage metrics to drive capacity planning and trending to proactively identify potential problems and mitigate them before they result in customer impact
Understand the place of automation and standardization when delivering stable, maintainable, and performant database services at scale
Perform platform, database, and query optimization
Must be legally authorized to work in India for any employer without sponsorship
Must be fluent in English (written and spoken)
Successful completion of interview required to meet job qualification
Reliable, punctual attendance is an essential function of the position
What will help you propel from the pack (Preferred Qualifications):
Bachelor's degree or 4 years of relevant work experience in Computer Science, Engineering, or related discipline
Microsoft SQL Server or AWS certification
Hands-on experience with AWS native databases, compute, storage, monitoring technologies, and continuous integration pipelines
Experience implementing automation of Microsoft SQL Server deployment, maintenance, and support activities preferred
Collaborate both vertically and horizontally to evolve overall database services and technology strategies
Experience supporting SSAS, SSIS, and SSRS
Very large database (10+ TB) experience preferred
Experience with PowerShell or other scripting languages a plus
Experience with PCI, SOC, and SQL Auditing a plus
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Find your future at United! We’re reinventing what our industry looks like, and what an airline can be – from the planes we fly to the people who fly them. When you join us, you’re joining a global team of 100,000+ connected by a shared passion with a wide spectrum of experience and skills to lead the way forward. Achieving our ambitions starts with supporting yours. Evolve your career and find your next opportunity. Get the care you need with industry-leading health plans and best-in-class programs to support your emotional, physical, and financial wellness. Expand your horizons with travel across the world’s biggest route network. Connect outside your team through employee-led Business Resource Groups. Create what’s next with us. Let’s define tomorrow together. Job Overview And Responsibilities United's Offshore SQL DBA Team supports critical after-hours work for timely releases and overnight patching activities, along with an 8pm-8am rotational on-call for very critical DB operations monitoring and incident support. The SQL DBA team in India works with offshore development teams on code review and troubleshooting of performance issues, essential to United’s 24x7 technology support structure. The team is actively engaged in migration projects for SQL desupported-version remediation and supports upgrades. The team also works on AWS setup and support across all areas of cloud migrations and production support.
SQL Server Production Support
Off-hours support for all Tier 1 – Tier 5 SQL databases and instances
Create physical database structures based on physical design for development, test, and production environments
Coordinate with systems engineers to configure servers for DBMS product installation and database creation
Install, configure, and maintain DBMS product software on database and application servers
Assist in consultation to application development teams on DBMS product technical issues and techniques
Implement monitoring procedures to maximize availability and performance of the database while meeting defined SLAs
Investigate, troubleshoot, and resolve database problems
Communicate the required downtime with the application development teams and systems engineers to implement approved changes
Identify, define, and implement database backup/recovery and security strategies
Install and support DBMS (Database Management System) software and tools
Perform various database activities, including monitoring, tuning, and troubleshooting, with appropriate supervision if required
Review deployments for all SQL database changes
Complete pre-deployment code reviews with application teams as requested
Review and provide feedback on all SQL code updates
Work with deployment managers on dates and times for releases, including assignments
Performance tuning and code review
Migrations and DB setup (Cloud-AWS, SQL)
Patch all SQL Server and some Couchbase instances
Work with application teams to create patching schedules
Send advance and timely notifications for database instances to be patched
Conduct database patching, including any troubleshooting and validation post patching
Code releases and technical documentation
Backup, recovery, and DR
This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt.
Ltd - a wholly owned subsidiary of United Airlines Inc.
Qualifications
What’s needed to succeed (Minimum Qualifications):
Bachelor's degree or 4 years of relevant work experience in Computer Science, Engineering, or related discipline
Microsoft SQL Server Certification
5 years of related experience
Proficient in SQL development and administration disciplines with current hands-on experience with the latest SQL Server releases, including SQL 2019, 2017, and 2016
Strong background and experience with all BC and DR capabilities of Microsoft SQL Server, including Always-On, Mirroring, Log Shipping, and Clustering, with a practical understanding of other infrastructure BC/DR capabilities
Leverage metrics to drive capacity planning and trending to proactively identify potential problems and mitigate them before they result in customer impact
Understand the place of automation and standardization when delivering stable, maintainable, and performant database services at scale
Perform platform, database, and query optimization
Must be legally authorized to work in India for any employer without sponsorship
Must be fluent in English (written and spoken)
Successful completion of interview required to meet job qualification
Reliable, punctual attendance is an essential function of the position
What will help you propel from the pack (Preferred Qualifications):
Master's degree in Computer Science, Engineering, or related discipline
Microsoft/AWS certifications on DB track preferred
Hands-on experience with AWS native databases, compute, storage, monitoring technologies, and continuous integration pipelines
Experience implementing automation of Microsoft SQL Server deployment, maintenance, and support activities preferred
Collaborate both vertically and horizontally to evolve overall database services and technology strategies
Experience supporting SSAS, SSIS, and SSRS
Very large database (10+ TB) experience preferred
Experience with PowerShell or other scripting languages a plus
Experience with PCI, SOX, GDPR, and SQL Auditing a plus
Ability to support 24x7 United operations databases
Quick learner of new technology and guidelines with a flexible, positive attitude; a team player capable of independent decision making
Posted 2 days ago
0 years
0 - 1 Lacs
Thiruvananthapuram
On-site
Data Science and AI Developer
**Job Description:**
We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.
**Key Responsibilities:**
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.
**Requirements:**
1. Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.
Data Manipulation and Analysis: NumPy, Pandas
Data Visualization: Matplotlib, Seaborn, Power BI
Machine Learning Libraries: Scikit-learn, TensorFlow, Keras
Statistical Analysis: SciPy
Web Scraping: Scrapy
IDE: PyCharm, Google Colab
HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
Python Django Expertise: In-depth knowledge of e-commerce functionalities or deep Python Django knowledge.
Theming: Proven experience in designing and implementing custom themes for Python websites.
Responsive Design: Strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
Problem Solving: Excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
Interns must know how to connect the front end with data science, and how to surface data science outputs in the front end.
**Benefits:**
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment
Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
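As a rough illustration of the train-and-evaluate loop the responsibilities describe, here is a toy nearest-centroid classifier in plain Python with a held-out accuracy check. It stands in for the scikit-learn/TensorFlow workflows the posting lists; the data and labels are invented for the example.

```python
# Toy train/evaluate loop: fit one centroid (mean feature vector) per class,
# predict by nearest centroid, then score accuracy on held-out points.

def fit(X, y):
    """Compute per-class centroids from training data."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

def predict(centroids, x):
    """Return the label whose centroid is closest (squared Euclidean)."""
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], x))

X_train = [[0, 0], [1, 1], [9, 9], [10, 10]]
y_train = ["low", "low", "high", "high"]
model = fit(X_train, y_train)

X_test, y_test = [[0.5, 0.5], [9.5, 9.5]], ["low", "high"]
accuracy = sum(predict(model, x) == t for x, t in zip(X_test, y_test)) / len(y_test)
```

The same shape (fit on training data, evaluate on a held-out set) carries over directly to `sklearn`'s `fit`/`predict`/`score` API.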
Posted 2 days ago
5.0 years
3 - 4 Lacs
Hyderābād
On-site
Job Description Specialist, Oncology New Products, Oncology Global Commercial Pipeline Analytics, HHDDA
Our Human Health Digital Data and Analytics (HHDDA) team is innovating how we understand our patients and their needs. Working cross-functionally, we are inventing new ways of engaging and interacting with our customers and patients, leveraging digital, data, and analytics, and measuring the impact. The Specialist, Oncology New Products, Oncology Global Commercial Pipeline Analytics, HHDDA will be responsible for developing and delivering data and analytics, generating strategic insights, and addressing key business questions from the Global Oncology New Products Marketing team to inform current and future pipeline strategies. The team member will partner closely with multiple cross-functional teams, including global marketing, regional marketing, clinical, outcomes research, and medical affairs, as well as across the depth of the HHDDA organization. Reporting to the Associate Director, Oncology Global Commercial Pipeline Analytics, within HHDDA, this role will lead the development of analytics capabilities for innovative oncology new products and pipeline priorities, spanning all tumor areas across oncology and hematology. The successful candidate will ’connect the dots’ across HHDDA capability functions like market research, forecasting, payer insights & analytics, data science, and data strategy & solutions.
Primary Responsibilities:
Pipeline Analytics & Insights: Conduct analytics and synthesize insights to enable launch excellence for multiple new assets. Conceptualize and build a set of analytics capabilities and tools anchored to our marketing and launch frameworks to support strategic decision-making for the Global Oncology portfolio (e.g. market and competitor landscape assessment tools, commercial opportunity assessments, market maps, analytical patient and HCP journeys, benchmark libraries).
Analytics Delivery: Hands-on analytics project delivery with advanced expertise in data manipulation, analysis, and visualization using tools such as Excel VBA, SQL, R, Python, Power BI, ThoughtSpot, or similar technologies and capabilities. Leverage a variety of patient modeling techniques, including statistical, patient-flow, and simulation-based techniques, for insight generation.
Benchmarking Analytics: Lead benchmarking analytics to collect, analyze, and translate insights into recommended business actions to inform strategic business choices.
Stakeholder Collaboration: Partner effectively with global marketing teams, HHDDA teams, and other cross-functional teams to inform strategic decisions and increase commercial rigor through all phases of pipeline asset development.
Communication and Transparency: Provide clear and synthesized communication to global marketing leaders and cross-functional teams on commercial insights addressing the priority business questions.
Required Experience and Skills:
Bachelor's degree, preferably in a scientific, engineering, or business-related field.
Overall experience of 5+ years, with 3+ years of relevant experience in oncology commercialization, advanced analytics, oncology forecasting, insights syndication, clinical development, or related roles within the pharmaceutical or biotechnology industry
Therapeutic area experience in Oncology and/or emerging oncology therapies
Strong problem-solving abilities, to find and execute solutions to complex or ambiguous business problems.
Experience conducting predictive modeling and secondary data analytics on large datasets using relevant skills (e.g., Excel VBA, Python, SQL) and understanding of algorithms (such as regressions, decision trees, clustering, etc.)
Deep understanding of the commercial Oncology global data ecosystem, e.g., epidemiology datasets, claims datasets, and real-world datasets
Confident leader who takes ownership of responsibilities, is able to work autonomously, and holds self and others accountable for delivery of quality output
Strategic thinker who is consultative, collaborative, and can “engage as equals”
Strong communication skills using effective storytelling grounded in data insights
Relationship-building and influencing skills with an ability to collaborate cross-functionally
Ability to connect the dots across sources, with attention to detail
Preferred Experience and Skills:
Experience in diverse healthcare datasets, insights, and analytics
Experience in the life science or consulting industry
Advanced degree (e.g., MBA, PharmD, PhD) preferred
Global experience preferred
Team management experience
Data visualization skills (e.g. Power BI)
Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
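The patient-flow and simulation-based modeling techniques listed in this role's analytics delivery responsibilities can be sketched as a simple state-transition cohort model. The states and transition probabilities below are purely hypothetical, chosen only to show the mechanics:

```python
# Toy patient-flow model: a cohort moves between care states each period
# according to fixed transition probabilities (hypothetical numbers).

transitions = {
    "diagnosed": {"treatment": 0.8, "diagnosed": 0.2},
    "treatment": {"remission": 0.6, "treatment": 0.4},
    "remission": {"remission": 1.0},
}

def step(counts):
    """Advance the whole cohort one period using expected-value flows."""
    nxt = {state: 0.0 for state in transitions}
    for state, n in counts.items():
        for dest, p in transitions[state].items():
            nxt[dest] += n * p
    return nxt

cohort = {"diagnosed": 1000.0, "treatment": 0.0, "remission": 0.0}
for _ in range(2):               # project two periods forward
    cohort = step(cohort)
```

Real pipeline-analytics versions would calibrate the transition rates against epidemiology and claims data and layer uncertainty (e.g. Monte Carlo draws) on top of the same structure.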
Employee Status: Regular Relocation: VISA Sponsorship: Travel Requirements: Flexible Work Arrangements: Hybrid Shift: Valid Driving License: Hazardous Material(s): Required Skills: Biopharmaceutical Industry, Business Decisions, Business Intelligence (BI), Collaborative Communications, Collaborative Development, Cross-Functional Teamwork, Database Design, Data Engineering, Data Forecasting, Data Modeling, Data Science, Data Visualization, Digital Analytics, Health Data Analytics, Machine Learning, Patient Flow, Software Development, Stakeholder Engagement, Stakeholder Relationship Management, Strategic Insights, Waterfall Model Preferred Skills: Job Posting End Date: 08/31/2025 A job posting is effective until 11:59:59PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R353742
Posted 2 days ago
10.0 - 12.0 years
4 - 7 Lacs
Hyderābād
On-site
Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.
In this role, you will:
Contribute to incident response activities, participating in investigations and reporting
Automate repetitive tasks
Participate in the planning, development, design, testing, migration and implementation of application products and upgrades of existing solutions
Use sftp, scp, and Sterling Connect:Direct trace utilities to identify file transfer issues
Implement and maintain systems that are highly available, scalable, and self-healing
Requirements
To be successful in this role, you should meet the following requirements:
10-12 years of experience in Windows / VMware / Unix / Linux server administration
Managed File Transfer (MFT) and UNIX administration
OS troubleshooting, performance counters, disk administration, Windows Clustering / VCS Clustering
Scripting know-how in Shell and Python.
Connect Direct CDP / scripting
Hands-on cloud technologies – GCP/AWS/Azure
Knowledge of JIRA, Confluence, Splunk
Hardware administration – HP/IBM/Dell/Lenovo
Experience with the IBM Sterling Connect:Direct platform (Unix and Windows servers)
In-depth knowledge of SFTP administration (Unix and Windows servers)
Knowledge of Active Directory, NetBackup, antivirus agents
Knowledge of DevOps tools like Digs, Ansible, Puppet, Git, CI/CD pipelines would be an added advantage
Experience in designing, analyzing and troubleshooting large-scale distributed systems
Work with a global team spread across tech hubs in multiple geographies and time zones
Microsoft certified, RHEL certified, cloud technology certified
You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
Posted 2 days ago
8.0 - 12.0 years
30 - 35 Lacs
Gurgaon
On-site
Qualifications
Strong problem-solving skills with an emphasis on product development.
A drive to learn and master new technologies and techniques.
Experience using statistical computer languages (R, Python, etc.) to manipulate data and draw insights from large data sets.
Experience working with different data architectures.
Knowledge and experience in statistical and data mining techniques.
Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
Prior experience working on zero-touch solutions can be an advantage.
We’re looking for someone with 8-12 years of experience manipulating data sets and building statistical models, who has a Bachelor's or Master's in computer science or another quantitative field, and is familiar with the following software:
Coding knowledge and experience with several languages: Python, R
Experience with Python and common data science toolkits like Jupyter, Pandas, NumPy, Scikit-learn, TensorFlow, Keras, etc.
Knowledge and experience in statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, social network analysis, etc.
Experience using different types of databases - RDBMS, graph, NoSQL, etc.
Strong experience in Generative AI
Role & Responsibility:
1. Design production-grade AI/ML solutions
2. Collaborate with verticals for AI use case discovery & solutions
3. Identify common components and enablers for AI/ML and GenAI
4. Define best practices for AI/ML development, usage and roll out
5. Hands-on development of prototypes
Job Type: Full-time
Pay: ₹3,000,000.00 - ₹3,500,000.00 per year
Application Question(s): Are you an immediate joiner?
Experience: Data science: 9 years (Required); Python, R: 8 years (Required)
Work Location: In person
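One of the machine learning techniques this posting names, clustering, reduces to a short sketch: a one-dimensional k-means in plain Python. In practice scikit-learn's `KMeans` would be the tool; the data here is made up for illustration.

```python
# Toy 1-D k-means: alternate between assigning each point to its nearest
# center and recomputing each center as the mean of its assigned points.

def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        clusters = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(c - p))
            clusters[nearest].append(p)
        # Recompute centers; drop any center that attracted no points.
        centers = [sum(pts) / len(pts) for pts in clusters.values() if pts]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.0, 9.4, 8.6]      # two obvious groups
centers = kmeans_1d(data, [0.0, 5.0])      # deliberately poor initial guesses
```

Even from poor starting centers, the assign/recompute loop converges here to one center per group; the same advantages/drawbacks trade-off the posting mentions (fast, but sensitive to initialization and k) is visible in this miniature form.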
Posted 2 days ago
0 years
0 Lacs
Rajkot, Gujarat, India
On-site
Are you passionate about Artificial Intelligence and Machine Learning? Start your AI/ML career with hands-on learning, real projects, and expert guidance at TechXperts!
Skills Required:
Technical Knowledge:
• Basic understanding of Python and popular ML libraries like scikit-learn, pandas, NumPy
• Familiarity with Machine Learning algorithms (Regression, Classification, Clustering, etc.)
• Knowledge of Data Preprocessing, Model Training, and Evaluation Techniques
• Understanding of AI concepts such as Deep Learning, Computer Vision, or NLP is a plus
• Familiarity with tools like Jupyter Notebook, Google Colab, or TensorFlow/Keras is an advantage
Soft Skills:
• Curiosity to explore and learn new AI/ML techniques
• Good problem-solving and analytical thinking
• Ability to work independently and in a team
• Clear communication and documentation skills
What You’ll Do:
• Assist in building and training machine learning models
• Support data collection, cleaning, and preprocessing activities
• Work on AI-driven features in real-time applications
• Collaborate with senior developers to implement ML algorithms
• Research and experiment with AI tools and frameworks
Why Join TechXperts?
• Learn by working on live AI/ML projects
• Supportive mentorship from experienced developers
• Exposure to the latest tools and techniques
• Friendly work culture and growth opportunities
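A typical first exercise for the regression skill listed above: fitting a simple linear model y = a·x + b with the closed-form least-squares formulas, no libraries required. The data points are invented to make the fit exact.

```python
# Simple linear regression via closed-form least squares:
# slope a = cov(x, y) / var(x), intercept b = mean(y) - a * mean(x).

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # lies exactly on y = 2x + 1
slope, intercept = fit_line(xs, ys)
```

scikit-learn's `LinearRegression` computes the same quantities (generalized to many features); working through the formulas once by hand is what makes the library call legible.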
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer (AWS QuickSight, Glue, PySpark)
Location: Noida
Job Summary: We are seeking a skilled Data Engineer with 4-5 years of experience to design, build, and maintain scalable data pipelines and analytics solutions within the AWS cloud environment. The ideal candidate will leverage AWS Glue, PySpark, and QuickSight to deliver robust data integration, transformation, and visualization capabilities. This role is critical in supporting business intelligence, analytics, and reporting needs across the organization.
Key Responsibilities:
Design, develop, and maintain data pipelines using AWS Glue, PySpark, and related AWS services to extract, transform, and load (ETL) data from diverse sources
Build and optimize data warehouse/data lake infrastructure on AWS, ensuring efficient data storage, processing, and retrieval
Develop and manage ETL processes to source data from various systems, including databases, APIs, and file storage, and create unified data models for analytics and reporting
Implement and maintain business intelligence dashboards using Amazon QuickSight, enabling stakeholders to derive actionable insights
Collaborate with cross-functional teams (business analysts, data scientists, product managers) to understand requirements and deliver scalable data solutions
Ensure data quality, integrity, and security throughout the data lifecycle, implementing best practices for governance and compliance
Support self-service analytics by empowering internal users to access and analyze data through QuickSight and other reporting tools
Troubleshoot and resolve data pipeline issues, optimizing performance and reliability as needed
Required Skills & Qualifications:
Proficiency in AWS cloud services: AWS Glue, QuickSight, S3, Lambda, Athena, Redshift, EMR, and related technologies
Strong experience with PySpark for large-scale data processing and transformation
Expertise in SQL and data modeling for relational and non-relational databases
Experience building and optimizing ETL pipelines and data integration workflows
Familiarity with business intelligence and visualization tools, especially Amazon QuickSight
Knowledge of data governance, security, and compliance best practices
Strong programming skills in Python; experience with automation and scripting
Ability to work collaboratively in agile environments and manage multiple priorities effectively
Excellent problem-solving and communication skills.
Preferred Qualifications:
AWS certification (e.g., AWS Certified Data Analytics – Specialty, AWS Certified Developer)
Good to have: understanding of machine learning, deep learning and Generative AI concepts; Regression, Classification, Predictive modeling, Clustering, Deep Learning
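The extract-transform-load flow this role centers on, reduced to a plain-Python skeleton. In AWS Glue the same three-step shape would operate on DynamicFrames or Spark DataFrames reading from S3/JDBC; the record fields and the dict target here are illustrative stand-ins.

```python
# ETL skeleton: extract raw records, transform (type casts + normalization),
# load into a keyed target (a dict standing in for S3/Redshift/a warehouse).

def extract():
    # In AWS Glue this step would read from S3, JDBC, or the Data Catalog.
    return [
        {"order_id": "A1", "amount": "100.5", "region": " east "},
        {"order_id": "A2", "amount": "250.0", "region": "WEST"},
    ]

def transform(records):
    return [
        {
            "order_id": r["order_id"],
            "amount": float(r["amount"]),           # cast string -> numeric
            "region": r["region"].strip().lower(),  # normalize messy keys
        }
        for r in records
    ]

def load(records, target):
    for r in records:
        target[r["order_id"]] = r   # idempotent upsert keyed on order_id
    return target

warehouse = load(transform(extract()), {})
```

The PySpark version replaces the list comprehension with `withColumn`/`select` transformations and `load` with a `write` to the sink, but the pipeline shape (and where data-quality checks belong: inside `transform`) is the same.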
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description: At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. We do this by driving Responsible Growth and delivering for our clients, teammates, communities and shareholders every day. Being a Great Place to Work is core to how we drive Responsible Growth. This includes our commitment to being an inclusive workplace, attracting and developing exceptional talent, supporting our teammates’ physical, emotional, and financial wellness, recognizing and rewarding performance, and how we make an impact in the communities we serve. Bank of America is committed to an in-office culture with specific requirements for office-based attendance and which allows for an appropriate level of flexibility for our teammates and businesses based on role-specific considerations. At Bank of America, you can build a successful career with opportunities to learn, grow, and make an impact. Join us! Believes diversity makes us stronger so we can reflect, connect and meet the diverse needs of our clients and employees around the world. Is committed to building a workplace where every employee is welcomed and given the support and resources to perform their jobs successfully. Wants to be a great place for people to work and strives to create an environment where all employees have the opportunity to achieve their goals. Provides continuous training and development opportunities to help employees achieve their career goals, whatever their background or experience. Is committed to advancing our tools, technology, and ways of working to better serve our clients and their evolving business needs. Believes in responsible growth and is dedicated to supporting our communities by connecting them to the lending, investing and giving resources they need to remain vibrant and vital. Job Description: We are seeking a skilled and proactive Problem Management specialist to join our Application Production Support team.
This role is critical in ensuring service stability and continuous improvement across complex enterprise systems. The ideal candidate will drive problem management processes end-to-end, lead post-incident reviews (post-mortems), follow up on corrective actions, coordinate across multiple teams, and ensure adherence to internal controls and regulatory requirements.

Responsibilities:

Problem Management & Root Cause Analysis
- Own the problem management lifecycle, including identification, investigation, root cause analysis (RCA), and resolution tracking.
- Act as the point of contact for assigned higher-severity incidents, from incident retrospective calls all the way up to Management Report (MR) documentation and publishing.
- Facilitate structured post-mortem reviews for high-severity incidents, ensuring detailed documentation of impact, root cause, contributing factors, and lessons learnt.
- Drive the creation and implementation of permanent fixes or preventive measures in coordination with development, infrastructure, and support teams.
- Communicate well with technical and non-technical colleagues.
- Work to a high standard within agreed timescales.
- Demonstrate authority in RCA calls while coordinating with other stakeholders, and resolve discrepancies in a blameless way.

Regulatory & Audit Compliance
- Ensure all problem records related to regulatory-impacting incidents are properly tracked and reported.
- Support timely completion of regulatory post-incident reports and provide high-quality input to external and internal stakeholders, including risk and compliance teams.
- Track and ensure closure of all problem-related remediation actions with documented evidence, in line with audit requirements.

Cross-Functional Coordination
- Act as a central point of contact for problem-related topics across Application Support, Development, Infrastructure and Risk functions.
- Champion and drive systemic improvements by influencing across siloed teams and escalating blockers when necessary.
- Drive continuous service improvement initiatives by identifying recurring issues, systemic risks and operational inefficiencies.

Governance & Reporting
- Ensure problem management KPIs and metrics are consistently tracked, reported and improved.
- Prepare and present regular dashboards, analyses and governance packs for senior technology and business management.
- Maintain high-quality problem records in the ITSM system, ensuring they are accurate, complete and up to date.
- Perform data analysis and provide suggestions on service-level trends; identify event/incident clustering for improvements.

Required Skills:
- 8-12 years of experience in IT Operations, Application Support, or Problem Management in a complex enterprise environment.
- Familiarity with the ITIL Problem Management lifecycle and practices (ITIL certification preferred).
- Strong analytical and technical skills to understand complex application landscapes and failure modes.
- Experience working with ITSM tools such as ServiceNow, Remedy or JIRA.
- Excellent facilitation and communication skills; able to engage senior stakeholders across Technology and Business.
- Ability to influence without authority and drive outcomes across geographically dispersed teams.
- Strong documentation and presentation skills for post-mortem reviews and executive reporting.
- Experience handling post-incident reporting for regulators is highly preferred.
- Awareness of audit and control expectations in a banking or financial services environment.

Desired Skills:
- Well versed in root cause analysis (RCA) techniques.
- Familiarity with the ITIL v3 or ITIL 4 framework preferred.
- Trend and pattern analysis to identify recurring incidents and patterns.
- Knowledge of infrastructure and application architecture.
- Change management awareness to assess the impact of changes on services.
- Experienced in generating problem metrics.
- Ability to dissect complex problems and work through technical logs, monitoring tools, and alerts.
- Clear and concise communication to technical and non-technical stakeholders.
- Good stakeholder management; provides regular updates and post-mortems.
- Well versed in problem record creation and data quality maintenance.
- Proactive mindset and attention to detail.
- Takes ownership of problems from detection to closure.
Posted 2 days ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description: We are seeking a highly motivated and enthusiastic Senior Data Scientist with over 4 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.

Key Responsibilities:
- Develop and implement machine learning models and algorithms.
- Work closely with project stakeholders to understand requirements and translate them into deliverables.
- Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
- Stay updated with the latest advancements in AI/ML technologies and methodologies.
- Collaborate with cross-functional teams to support various AI/ML initiatives.

Qualifications:
- Bachelor’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- Strong understanding of machine learning, deep learning and Generative AI concepts.

Preferred Skills:
- Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, the deep learning stack, and NLP using Python.
- Strong knowledge of and experience in Generative AI / LLM-based development.
- Strong experience working with key LLM model APIs (e.g. AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g. LangChain, LlamaIndex).
- Experience with cloud infrastructure for AI/Generative AI/ML on AWS or Azure.
- Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns.
- Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
- Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
- Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.); expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
- Knowledge of training and fine-tuning foundation models (Anthropic Claude, Mistral, etc.), including multimodal inputs and outputs.
- Proficiency in Python, TypeScript, Node.js, ReactJS (or equivalent) and related frameworks (e.g., pandas, NumPy, scikit-learn), plus Glue crawlers and ETL.
- Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
- Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
- Experience with version control systems (e.g., Git, CodeCommit).

Good-to-have Skills:
- Knowledge of and experience in building knowledge graphs in production.
- Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
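The chunking, embedding, and retrieval skills listed above fit together in the standard RAG retrieval step: split documents into overlapping chunks, embed them, and rank chunks by similarity to the query. A minimal, dependency-free sketch follows; the toy bag-of-words `embed` function and the sample text are illustrative stand-ins for a real embedding model (e.g. one called via AWS Bedrock or OpenAI) and a vector database:

```python
from collections import Counter
import math

def chunk(text, size=40, overlap=10):
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy embedding: lowercase word counts (stand-in for a model call)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

docs = "Settlement failed for trade T42. Retry the settlement batch overnight."
top = retrieve("why did settlement fail", chunk(docs), k=1)
```

In a production RAG system the retrieved chunks would then be passed to the LLM as grounding context; the chunk size and overlap parameters here are the same knobs the listing's "text chunking techniques" point refers to.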
Posted 2 days ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Solution Architect (India)
Work Mode: Remote/Hybrid
Required exp: 10+ years
Shift timing: Minimum 4 hours overlap required with US time

Role Summary: The Solution Architect is responsible for designing robust, scalable, and high-performance AI and data-driven systems that align with enterprise goals. This role serves as a critical technical leader, bridging AI/ML, data engineering, ETL, cloud architecture, and application development. The ideal candidate will have deep experience across traditional and generative AI, including Retrieval-Augmented Generation (RAG) and agentic AI systems, along with strong fundamentals in data science, modern cloud platforms, and full-stack integration.

Key Responsibilities:
- Design and own the end-to-end architecture of intelligent systems including data ingestion (ETL/ELT), transformation, storage, modeling, inferencing, and reporting.
- Architect GenAI-powered applications using LLMs, vector databases, RAG pipelines and agentic workflows; integrate with enterprise knowledge graphs and document repositories.
- Lead the design and deployment of agentic AI systems that can plan, reason, and interact autonomously within business workflows.
- Collaborate with cross-functional teams including data scientists, data engineers, MLOps, and frontend/backend developers to deliver scalable and maintainable solutions.
- Define patterns and best practices for traditional ML and GenAI projects, covering model governance, explainability, reusability, and lifecycle management.
- Ensure seamless integration of ML/AI systems via RESTful APIs with frontend interfaces (e.g., dashboards, portals) and backend systems (e.g., CRMs, ERPs).
- Architect multi-cloud or hybrid cloud AI solutions, leveraging services from AWS, Azure, or GCP for scalable compute, storage, orchestration, and deployment.
- Provide technical oversight for data pipelines (batch and real-time), data lakes, and ETL frameworks, ensuring secure and governed data movement.
- Conduct architecture reviews, mentor engineering teams, and drive design standards for AI/ML, data engineering, and software integration.

Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in software architecture, including at least 4 years in AI/ML-focused roles.

Required Skills:
- Expertise in machine learning (regression, classification, clustering), deep learning (CNNs, RNNs, transformers), and NLP.
- Experience with Generative AI frameworks and services (e.g., OpenAI, LangChain, Azure OpenAI, Amazon Bedrock).
- Strong hands-on Python skills, with experience in libraries such as scikit-learn, pandas, NumPy, TensorFlow, or PyTorch.
- Proficiency in RESTful API development and integration with frontend components (React, Angular, or similar is a plus).
- Deep experience in ETL/ELT processes using tools like Apache Airflow, Azure Data Factory, or AWS Glue.
- Strong knowledge of cloud-native architecture and AI/ML services on at least one of AWS, Azure, or GCP.
- Experience with vector databases (e.g., Pinecone, FAISS, Weaviate) and semantic search patterns.
- Experience in deploying and managing ML models with MLOps frameworks (MLflow, Kubeflow).
- Understanding of microservices architecture, API gateways, and container orchestration (Docker, Kubernetes).
- Frontend experience is good to have.
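The ETL/ELT responsibility above follows a well-known three-stage pattern: extract raw records from a source, transform and validate them, then load them into a target store. A minimal sketch is below; the record shapes and the validation rule are invented for the example, and in practice the stages would run under an orchestrator such as Apache Airflow, Azure Data Factory, or AWS Glue:

```python
def extract():
    """Stand-in for reading from a source system (API, file, queue)."""
    return [{"id": 1, "amount": "120.50"},
            {"id": 2, "amount": "bad"},
            {"id": 3, "amount": "75.00"}]

def transform(rows):
    """Cast types and drop records that fail validation."""
    clean = []
    for row in rows:
        try:
            clean.append({"id": row["id"], "amount": float(row["amount"])})
        except ValueError:
            continue  # a real pipeline would quarantine the bad record
    return clean

def load(rows, target):
    """Stand-in for writing to a warehouse table; returns rows loaded."""
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
```

Keeping each stage a pure function of its input, as here, is what makes pipelines like this easy to schedule, retry, and test independently in an orchestrator.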
Posted 2 days ago
0.0 years
0 - 0 Lacs
Thiruvananthapuram, Kerala
On-site
Data Science and AI Developer

**Job Description:** We are seeking a highly skilled and motivated Data Science and AI Developer to join our dynamic team. As a Data Science and AI Developer, you will be responsible for leveraging cutting-edge technologies to develop innovative solutions that drive business insights and enhance decision-making processes.

**Key Responsibilities:**
1. Develop and deploy machine learning models for predictive analytics, classification, clustering, and anomaly detection.
2. Design and implement algorithms for data mining, pattern recognition, and natural language processing.
3. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
4. Utilize advanced statistical techniques to analyze complex datasets and extract actionable insights.
5. Implement scalable data pipelines for data ingestion, preprocessing, feature engineering, and model training.
6. Stay updated with the latest advancements in data science, machine learning, and artificial intelligence research.
7. Optimize model performance and scalability through experimentation and iteration.
8. Communicate findings and results to stakeholders through reports, presentations, and visualizations.
9. Ensure compliance with data privacy regulations and best practices in data handling and security.
10. Mentor junior team members and provide technical guidance and support.

**Requirements:**
1. Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
2. Proven experience in developing and deploying machine learning models in production environments.
3. Proficiency in programming languages such as Python, R, or Scala, with strong software engineering skills.
4. Hands-on experience with machine learning libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Spark MLlib.
5. Solid understanding of data structures, algorithms, and computer science fundamentals.
6. Excellent problem-solving skills and the ability to think creatively to overcome challenges.
7. Strong communication and interpersonal skills, with the ability to work effectively in a collaborative team environment.
8. Certification in Data Science, Machine Learning, or Artificial Intelligence (e.g., Coursera, edX, Udacity, etc.).
9. Experience with cloud platforms such as AWS, Azure, or Google Cloud is a plus.
10. Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) is an advantage.

Tooling:
- Data manipulation and analysis: NumPy, Pandas
- Data visualization: Matplotlib, Seaborn, Power BI
- Machine learning libraries: Scikit-learn, TensorFlow, Keras
- Statistical analysis: SciPy
- Web scraping: Scrapy
- IDEs: PyCharm, Google Colab

Web development:
- HTML/CSS/JavaScript/React JS: Proficiency in these core web development technologies is a must.
- Python Django expertise: In-depth knowledge of e-commerce functionality or deep Python Django knowledge.
- Theming: Proven experience in designing and implementing custom themes for Python websites.
- Responsive design: Strong understanding of responsive design principles and the ability to create visually appealing and user-friendly interfaces for various devices.
- Problem solving: Excellent problem-solving skills with the ability to troubleshoot and resolve issues independently.
- Collaboration: Ability to work closely with cross-functional teams, including marketing and design, to bring creative visions to life.
- Interns must know how to connect a front end to data science back ends, and vice versa.

**Benefits:**
- Competitive salary package
- Flexible working hours
- Opportunities for career growth and professional development
- Dynamic and innovative work environment

Job Type: Full-time
Pay: ₹8,000.00 - ₹12,000.00 per month
Ability to commute/relocate: Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
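The first responsibility above (models for classification and anomaly detection) boils down to a train-then-predict lifecycle. As a hedged, dependency-free illustration, here is a nearest-centroid classifier; the data points and labels are invented, and a real deployment would use a serialized scikit-learn or TensorFlow model served behind an API rather than this toy:

```python
import math

def train(X, y):
    """Compute one centroid (mean vector) per class label."""
    centroids = {}
    for label in set(y):
        pts = [x for x, lbl in zip(X, y) if lbl == label]
        centroids[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return centroids

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    def dist(a, b):
        return math.sqrt(sum((i - j) ** 2 for i, j in zip(a, b)))
    return min(model, key=lambda lbl: dist(model[lbl], x))

# Two well-separated clusters standing in for normal vs. anomalous behaviour.
X = [[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [4.8, 5.0]]
y = ["normal", "normal", "anomaly", "anomaly"]
model = train(X, y)
```

The same split between a `train` artifact and a `predict` call is what "deploying models in production environments" packages up, whatever the underlying algorithm.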
Posted 2 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Data Analytics Trainer
Location: Shadnagar, Hyderabad
Duration: 3 Months (400 hrs)

Job Summary: We are seeking an experienced and passionate Data Analytics Trainer to deliver high-quality training in our 400-hour Data Analytics program, spanning 3 months. The ideal candidate will have deep expertise in Advanced Excel, Power BI, Tableau, MySQL, Python for Data Analysis, and foundational Machine Learning concepts. The trainer will facilitate both theoretical and practical sessions, guide students through hands-on projects, and prepare them for real-world data analytics challenges across domains like finance, healthcare, e-commerce, and more.

Key Responsibilities: Conduct engaging and interactive sessions covering the following modules:
- Advanced Excel (60 hours): Cell referencing, arithmetic/logical/lookup functions, data validation, pivot tables, charts, dashboards, and Power Query/Power Pivot.
- Power BI (66 hours): Data loading, visualization (column/line charts, conditional formatting), Power Query Editor, DAX expressions, and dashboard creation.
- Tableau (66 hours): Data visualization, filters, calculations (basic, LOD, table), custom charts, and dashboard actions, including Tableau Public integration.
- MySQL (60 hours): SQL commands (DDL, DML, DQL, TCL), joins, indexes, views, stored procedures, triggers, and sub-queries.
- Python for Data Analysis (24 hours): Python basics, data types, pandas for EDA, data visualization with matplotlib/seaborn, and data wrangling.
- Introduction to Machine Learning (72 hours): Statistics, hypothesis testing, EDA, linear/logistic regression, clustering, feature engineering, and model validation.
- CRT Training (54 hours): Quantitative aptitude, logical reasoning, verbal ability, and soft skills (e.g., presentation, teamwork, interview skills).
Required Qualifications:
- Education: Bachelor’s/Master’s degree in Data Science, Computer Science, Statistics, or a related field.
- Experience: 3+ years of professional experience in data analytics or data science; 1+ years of training or teaching experience in data analytics tools (Excel, Power BI, Tableau, MySQL, Python); hands-on experience with machine learning concepts and Python libraries (pandas, matplotlib, seaborn).
- Technical Skills: Proficiency in Advanced Excel (VLOOKUP, INDEX-MATCH, pivot tables, Power Query); expertise in Power BI (DAX, Power Query, dashboard creation) and Tableau (LOD calculations, custom charts); strong knowledge of MySQL (joins, stored procedures, triggers) and Python (pandas, data visualization); familiarity with machine learning concepts (regression, clustering, feature engineering).
- Soft Skills: Excellent communication and presentation skills; ability to simplify complex concepts for beginners; strong problem-solving and mentoring abilities.

Preferred Qualifications:
- Industry experience in domains like finance, healthcare, e-commerce, or supply chain analytics.
- Certifications in Power BI, Tableau, or Python (e.g., Microsoft Certified: Data Analyst Associate).
- Experience with capstone project mentoring in data analytics or machine learning.
- Familiarity with quantitative aptitude, logical reasoning, and soft skills training.

Note: Mode of delivery is offline. Transportation and accommodation will be provided.
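The MySQL module's join syllabus can be shown with a small runnable example. This sketch uses Python's built-in sqlite3 module (the INNER JOIN and GROUP BY syntax shown is the same in MySQL); the table names and rows are invented for illustration:

```python
import sqlite3

# In-memory database with two related tables, as a trainer might demo joins.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE scores (student_id INTEGER, subject TEXT, score INTEGER)")
cur.executemany("INSERT INTO students VALUES (?, ?)", [(1, "Asha"), (2, "Ravi")])
cur.executemany("INSERT INTO scores VALUES (?, ?, ?)",
                [(1, "SQL", 92), (1, "Python", 88), (2, "SQL", 75)])

# INNER JOIN matches each score to its student, then GROUP BY aggregates.
rows = cur.execute("""
    SELECT s.name, AVG(sc.score)
    FROM students s
    JOIN scores sc ON sc.student_id = s.id
    GROUP BY s.name
    ORDER BY s.name
""").fetchall()
```

The same query pattern extends naturally to the LEFT JOIN, sub-query, and view topics in the module outline.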
Posted 2 days ago