8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities Design and develop scalable systems for processing unstructured data into actionable insights using Python, Flask, and Azure Cognitive Services Integrate Optical Character Recognition (OCR), Speech-to-Text, and NLP models into workflows to handle various file formats such as PDFs, images, audio files, and text documents Implement robust error-handling mechanisms, multithreaded architectures, and RESTful APIs to ensure seamless user experiences. Utilize Azure OpenAI, Azure Speech SDK, and Azure Form Recognizer to create AI-powered solutions tailored to meet complex business requirements Collaborate with cross-functional teams to drive innovation and implement analytics workflows and ML models to enhance business processes and decision-making Ensure the accuracy, efficiency, and scalability of systems focusing on healthcare claims processing, document digitization, and data extraction Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications 8+ years of relevant experience in AI/ML engineering and cognitive automation Proven experience as an AI/ML Engineer, Software Engineer, Data Analyst, or a similar role in the tech industry Extensive experience with Azure Cognitive Services and other AI technologies SQL, Python, PySpark, Scala experience Proficient in developing and deploying machine learning models and handling large data sets Proven solid programming skills in Python and familiarity with Flask web framework Proven excellent problem-solving skills and the ability to work in a fast-paced environment Proven solid communication and collaboration skills, capable of working effectively with cross-functional teams. Demonstrated ability to implement robust ETL or ELT workflows for structured and unstructured data ingestion, transformation, and storage Preferred Qualification Experience in healthcare industries Skills Python Programming and SQL Data Analytics and Machine Learning Classification and Unsupervised Learning Regression and NLP Cloud and DevOps Foundations Data Visualization and Reporting At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone-of every race, gender, sexuality, age, location and income-deserves the opportunity to live their healthiest life. 
Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
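To make the stack in the responsibilities above concrete, here is a minimal, illustrative Flask sketch of an endpoint that routes uploaded files by type to OCR or speech-to-text handlers. The handler functions are hypothetical placeholders, not Optum's actual integration with Azure Form Recognizer or the Azure Speech SDK.

```python
# Minimal sketch (illustrative only): a Flask endpoint that routes uploaded
# files to the appropriate extraction step based on file type.
# extract_text_from_pdf / transcribe_audio are hypothetical placeholders for
# calls into services such as Azure Form Recognizer or the Azure Speech SDK.
from flask import Flask, request, jsonify

app = Flask(__name__)

def extract_text_from_pdf(data: bytes) -> str:
    # Placeholder: a real implementation would call an OCR service here.
    raise NotImplementedError

def transcribe_audio(data: bytes) -> str:
    # Placeholder: a real implementation would call a speech-to-text service here.
    raise NotImplementedError

@app.route("/extract", methods=["POST"])
def extract():
    uploaded = request.files["file"]
    data = uploaded.read()
    name = (uploaded.filename or "").lower()
    try:
        if name.endswith(".pdf"):
            text = extract_text_from_pdf(data)
        elif name.endswith((".wav", ".mp3")):
            text = transcribe_audio(data)
        else:
            text = data.decode("utf-8", errors="replace")
        return jsonify({"filename": uploaded.filename, "text": text})
    except Exception as exc:  # robust error handling, as the posting asks for
        return jsonify({"error": str(exc)}), 500

if __name__ == "__main__":
    app.run(debug=True)
```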
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
7+ years of experience in Big Data with strong expertise in Spark and Scala. Mandatory Skills: Big Data, primarily Spark and Scala; strong knowledge of HDFS, Hive, and Impala, with working knowledge of Unix, Oracle, and Autosys. Good to Have: Agile methodology and banking expertise. Strong communication skills. Not limited to Spark batch; Spark Streaming experience is required. NoSQL DB experience: HBase/MongoDB/Couchbase
Posted 1 week ago
2.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description: Business Title: QA Manager. Years of Experience: 10+. Job Description: The purpose of this role is to ensure the developed software meets the client requirements and the business’ quality standards within the project release cycle and established processes, and to lead QA technical initiatives in order to optimize the test approach and tools. Must Have Skills: At least 2 years in a lead role. Experience with Azure cloud. Testing file-based data lake solutions or Big Data-based solutions. Worked on migration or implementation of Azure Data Factory projects. Strong experience in ETL/data pipeline testing, preferably with Azure Data Factory. Proficiency in SQL for data validation and test automation. Familiarity with Azure services: Data Lake, Synapse Analytics, Azure SQL, Key Vault, and Logic Apps. Experience with test management tools (e.g., Azure DevOps, JIRA, TestRail). Understanding of CI/CD pipelines and integration of QA in DevOps workflows. Experience with data quality frameworks (e.g., Great Expectations, Deequ). Knowledge of Python or PySpark for data testing automation. Exposure to Power BI or other BI tools for test result visualization. Azure Data Factory. Exposure to Azure Databricks. SQL/stored procedures on SQL Server. ADLS Gen2. Exposure to Python/shell scripting. Good To Have Skills: Exposure to any ETL tool. Any other cloud experience (AWS/GCP). Exposure to Spark architecture, including Spark Core, Spark SQL, DataFrame, Spark Streaming, and fault tolerance mechanisms. ISTQB or equivalent QA certification. Working experience with JIRA and Agile. Experience with testing SOAP/API projects. Stakeholder communication. Microsoft Office. Key Responsibilities: Lead the QA strategy, planning, and execution for ADF-based data pipelines and workflows. Design and implement test plans, test cases, and test automation for data ingestion, transformation, and loading processes. Validate data accuracy, completeness, and integrity across source systems, staging, and target data stores (e.g., Azure SQL, Synapse, Data Lake). Collaborate with data engineers, architects, and business analysts to understand data flows and ensure test coverage. Develop and maintain automated data validation scripts using tools like PySpark, SQL, PowerShell, or Azure Data Factory Data Flows. Monitor and report on data quality metrics, defects, and test coverage. Ensure compliance with data governance, security, and privacy standards. Mentor junior QA team members and coordinate testing efforts across sprints. Education Qualification: Minimum Bachelor’s degree in Computer Science, Information Systems, or a related field. Certification (If Any): Any basic-level certification in AWS/Azure/GCP; Snowflake Associate/Core. Shift timing: 12 PM to 9 PM and/or 2 PM to 11 PM, IST time zone. Location: DGS India - Mumbai - Goregaon Prism Tower. Brand: Merkle. Time Type: Full time. Contract Type: Permanent
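As an illustration of the automated data validation this role calls for, the following is a minimal PySpark sketch comparing row counts and null rates between a source and a target table; the table names, key column, and tolerance are hypothetical, not the employer's actual checks.

```python
# Minimal PySpark validation sketch (illustrative only): compare a source
# and target table on row count and null rate for a key column.
# Table names and the tolerance are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adf-pipeline-validation").getOrCreate()

source = spark.table("staging.customers")   # hypothetical source table
target = spark.table("curated.customers")   # hypothetical target table

src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Null-rate check on a key column in the target
null_rate = (
    target.select(F.avg(F.col("customer_id").isNull().cast("double")).alias("rate"))
    .collect()[0]["rate"]
)
assert null_rate == 0.0, f"Unexpected nulls in customer_id: {null_rate:.2%}"

print("Validation passed:", src_count, "rows, no null keys")
```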
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Mandatory Skills 4-6 years of experience with basic proficiency in Python, SQL and familiarity with libraries like NumPy or Pandas. Understanding of fundamental programming concepts (data structures, algorithms, etc.). Eagerness to learn new tools and frameworks, including Generative AI technologies. Familiarity with version control systems (e.g., Git). Strong problem-solving skills and attention to detail. Exposure to data processing tools like Apache Spark or PySpark, SQL. Basic understanding of APIs and how to integrate them. Interest in AI/ML and willingness to explore frameworks like LangChain. Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus. Job Description We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools. Key Responsibilities Assist in developing and maintaining Python-based applications. Write clean, efficient, and well-documented code. Collaborate with senior developers to integrate APIs and frameworks. Support data processing tasks using libraries like Pandas or PySpark. Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance. Debug and troubleshoot issues in existing applications.
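For context on the foundational Pandas work mentioned above, here is a small illustrative snippet; the file name and columns are made up for the example.

```python
# Illustrative Pandas snippet (file name and columns are hypothetical):
# load a CSV, drop incomplete rows, and aggregate by a category column.
import pandas as pd

df = pd.read_csv("orders.csv")                    # hypothetical input file
df = df.dropna(subset=["order_id", "amount"])     # drop incomplete rows
df["amount"] = df["amount"].astype(float)

# Total and average amount per region
summary = df.groupby("region")["amount"].agg(["sum", "mean"]).reset_index()
print(summary.head())
```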
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to Ford Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for Ford Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas and who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and 3rd party technologies for deployment on Google Cloud Platform. Responsibilities Design and build production data engineering solutions on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, DataForm, Astronomer, Data Fusion, DataProc, Cloud Composer/Air Flow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Artifact Registry, GCP APIs, Cloud Build, App Engine, and real-time data streaming platforms like Apache Kafka and GCP Pub/Sub. Design new solutions to better serve AI/ML needs. Lead teams to expand our AI-enabled services. Partner with governance teams to tackle key business needs. Collaborate with stakeholders and cross-functional teams to gather and define data requirements and ensure alignment with business objectives. Partner with analytics teams to understand how value is created using data. Partner with central teams to leverage existing solutions to drive future products. Design and implement batch, real-time streaming, scalable, and fault-tolerant solutions for data ingestion, processing, and storage. Create insights into existing data to fuel the creation of new data products. Perform necessary data mapping, impact analysis for changes, root cause analysis, and data lineage activities, documenting information flows. Implement and champion an enterprise data governance model. Actively promote data protection, sharing, reuse, quality, and standards to ensure data integrity and confidentiality. Develop and maintain documentation for data engineering processes, standards, and best practices. Ensure knowledge transfer and ease of system maintenance. Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures. Provide production support by addressing production issues as per SLAs. Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure. Work within an agile product team. Deliver code frequently using Test-Driven Development (TDD), continuous integration, and continuous deployment (CI/CD). Continuously enhance your domain knowledge. 
Stay current on the latest data engineering practices. Contribute to the company's technical direction while maintaining a customer-centric approach. Qualifications GCP-certified Professional Data Engineer. Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions. 5+ years of complex SQL development experience. 2+ years of experience with programming languages such as Python, Java, or Apache Beam. Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications through to production-scale solutions. In-depth understanding of GCP’s underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, DataProc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build and App Engine, alongside storage (Cloud Storage) and DevOps tools such as Tekton, GitHub, Terraform, and Docker. Expert in designing, optimizing, and troubleshooting complex data pipelines. Experience developing and deploying microservices architectures leveraging container orchestration frameworks. Experience in designing pipelines and architectures for data processing. Passion and self-motivation to develop/experiment/implement state-of-the-art data engineering methods/techniques. Self-directed, works independently with minimal supervision, and adapts to ambiguous environments. Evidence of a proactive problem-solving mindset and willingness to take the initiative. Strong prioritization, collaboration & coordination skills, and ability to simplify and communicate complex ideas with cross-functional teams and all levels of management. Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity. Master’s degree in Computer Science, Software Engineering, Information Systems, Data Engineering, or a related field. Data engineering or development experience gained in a regulated financial environment. Experience in coaching and mentoring Data Engineers. Project management tools like Atlassian JIRA. Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment. Experience with data security, governance, and compliance best practices in the cloud. Experience using data science concepts on production datasets to generate insights
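As one illustrative example of the BigQuery work named above, here is a minimal google-cloud-bigquery sketch; the project, dataset, and table names are hypothetical and not Ford Credit's actual schema.

```python
# Illustrative sketch of running a query against BigQuery with the
# google-cloud-bigquery client (project, dataset, and table are hypothetical).
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")  # hypothetical project

sql = """
    SELECT account_id, SUM(balance) AS total_balance
    FROM `my-analytics-project.receivables.daily_balances`
    GROUP BY account_id
"""

rows = client.query(sql).result()   # runs the job and waits for completion
for row in rows:
    print(row["account_id"], row["total_balance"])
```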
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Evernorth Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don’t, won’t or can’t. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there. Position Overview Excited to grow your career? This position’s primary responsibility will be to translate software requirements into functions using Mainframe , ETL , Data Engineering with expertise in Databricks and Database technologies. This position offers the opportunity to work on modernizing legacy systems, contribute to cloud infrastructure automation, and support production systems in a fast-paced, agile environment. You will work across multiple teams and technologies to ensure reliable, high-performance data solutions that align with business goals. As a Mainframe & ETL Engineer, you will be responsible for the end-to-end development and support of data processing solutions using tools such as Talend, Ab Initio, AWS Glue, and PySpark, with significant work on Databricks and modern cloud data platforms. You will support infrastructure provisioning using Terraform, assist in modernizing legacy systems including mainframe migration, and contribute to performance tuning of complex SQL queries across multiple database platforms including Teradata, Oracle, Postgres, and DB2. You will also be involved in CI/CD practices Responsibilities Support, maintain and participate in the development of software utilizing technologies such as COBOL, DB2, CICS and JCL. Support, maintain and participate in the ETL development of software utilizing technologies such as Talend, Ab-Initio, Python, PySpark using Databricks. Work with Databricks to design and manage scalable data processing solutions. Implement and support data integration workflows across cloud (AWS) and on-premises environments. Support cloud infrastructure deployment and management using Terraform. Participate in the modernization of legacy systems, including mainframe migration. Perform complex SQL queries and performance tuning on large datasets. Contribute to CI/CD pipelines, version control, and infrastructure automation. Provide expertise, tools, and assistance to operations, development, and support teams for critical production issues and maintenance Troubleshoot production issues, diagnose the problem, and implement a solution - First line of defense in finding the root cause Work cross-functionally with the support team, development team and business team to efficiently address customer issues. Active member of high-performance software development and support team in an agile environment Engaged in fostering and improving organizational culture. Qualifications Required Skills: Strong analytical and technical skills. Proficiency in Databricks – including notebook development, Delta Lake, and Spark-based process. Experience with mainframe modernization or migrating legacy systems to modern data platforms. Strong programming skills, particularly in PySpark for data processing. Familiarity with data warehousing concepts and cloud-native architecture. 
Solid understanding of Terraform for managing infrastructure as code on AWS. Familiarity with CI/CD practices and tools (e.g., Git, Jenkins). Strong SQL knowledge on OLAP DB platforms (Teradata, Snowflake) and OLTP DB platforms (Oracle, DB2, Postgres, SingleStore). Strong experience with Teradata SQL and utilities. Strong experience with Oracle, Postgres and DB2 SQL and utilities. Develop high-quality database solutions. Ability to do extensive analysis on complex SQL processes and design skills. Ability to analyze existing SQL queries for performance improvements. Experience in software development phases including design, configuration, testing, debugging, implementation, and support of large-scale, business-centric and process-based applications. Proven experience working with diverse teams of technical architects, business users and IT areas on all phases of the software development life cycle. Exceptional analytical and problem-solving skills. Structured, methodical approach to systems development and troubleshooting. Ability to ramp up fast on a system architecture. Experience in designing and developing process-based solutions or BPM (business process management). Strong written and verbal communication skills with the ability to interact with all levels of the organization. Strong interpersonal/relationship management skills. Strong time and project management skills. Familiarity with agile methodology including SCRUM team leadership. Familiarity with modern delivery practices such as continuous integration, behavior/test driven development, and specification by example. Desire to work in the application support space. Passion for learning and desire to explore all areas of IT. Required Experience & Education Minimum of 8-12 years of experience in an application development role. Bachelor’s degree equivalent in Information Technology, Business Information Systems, Technology Management, or related field of study. Location & Hours of Work: Hyderabad and Hybrid (1:00 PM IST to 10:00 PM IST) Equal Opportunity Statement Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations. About Evernorth Health Services Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
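To illustrate the Databricks/Delta Lake side of this role, here is a minimal PySpark sketch of an upsert into a Delta table; the paths, table, and key column are hypothetical, not the actual claims pipeline.

```python
# Illustrative Databricks/Delta Lake sketch (path, table, and column names are
# hypothetical): upsert a batch of claim records into a Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/landing/claims/")     # hypothetical landing path
target = DeltaTable.forName(spark, "curated.claims")     # hypothetical Delta table

(
    target.alias("t")
    .merge(updates.alias("u"), "t.claim_id = u.claim_id")
    .whenMatchedUpdateAll()        # refresh existing claims
    .whenNotMatchedInsertAll()     # insert new claims
    .execute()
)
```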
Posted 1 week ago
12.0 - 20.0 years
16 - 30 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Role & responsibilities Role: Analytics Director, Pharma Analytics Role overview: Lead analytics engagements from discovery to delivery, managing key stakeholders across functions, and ensuring alignment with business goals. The position demands a strategic mindset to challenge conventional thinking, identify actionable insights, and deliver innovative, data-driven solutions. The individual will also ensure adherence to established policies in programming, documentation, and system management. Key Responsibilities: Lead end-to-end analytics engagements in the healthcare/pharma domain, managing cross-functional stakeholders across client and internal teams. Challenge conventional analytical approaches and proactively recommend innovative strategies and methodologies. Drive discovery and requirements gathering to ensure business context is accurately captured and translated into analytical solutions. Generate quantitatively-driven insights to solve complex commercial problems such as targeting, segmentation, campaign analytics, omnichannel analytics and performance optimization. Identify next-best actions and strategic recommendations to enhance brand performance, sales force effectiveness, launch analytics, test & control analysis, campaign effectiveness measurement etc. Ensure adherence to programming standards, project documentation protocols, and system management policies across analytics workstreams. Apply deep domain knowledge of pharma commercialization to design solutions aligned with industry-specific compliance, market access, and competitive dynamics. Preferred Qualifications: Bachelors or Masters degree in a quantitative discipline (e.g., Statistics, Economics, Mathematics, Engineering, Computer Science) or related field 12 years of Experience in commercial analytics in the pharmaceutical or healthcare domain Strong understanding of pharma commercial processes such as sales force effectiveness, brand performance tracking, patient analytics, and omnichannel marketing etc. Proficiency in analytical tools such as SQL, Python, Pyspark, Databricks for data manipulation and modelling. Familiarity with healthcare data sources like IQVIA (Xponent, DDD, LAAD), Symphony, APLD, EHR/EMR, and claims data. Proven ability to manage stakeholders and translate business needs into actionable analytics solutions. Strong communication and storytelling skills to present complex insights to both technical and non-technical audiences.
Posted 1 week ago
5.0 - 8.0 years
15 - 22 Lacs
Gurugram
Work from Office
Experience: 6-8 years overall, with at least 2-3 years of deep hands-on experience in each key area below. What you’ll do Own and evolve our end-to-end data platform, ensuring robust pipelines, data lakes, and warehouses with 100% uptime. Build and maintain real-time and batch pipelines using Debezium, Kafka, Spark, Apache Iceberg, Trino, and ClickHouse. Manage and optimize our databases (PostgreSQL, DocumentDB, MySQL RDS) for performance and reliability. Drive data quality management: understand, enrich, and maintain context for trustworthy insights. Develop and maintain reporting services for data exports, file deliveries, and embedded dashboards via Apache Superset. Use orchestration tools like Maestro (or similar DAGs) for reliable, observable workflows. Leverage LLMs and other AI models to generate insights and automate agentic tasks that enhance analytics and reporting. Build domain expertise to solve complex data problems and deliver actionable business value. Collaborate with analysts, data scientists, and engineers to maximize the impact of our data assets. Write robust, production-grade Python code for pipelines, automation, and tooling. What you’ll bring Experience with our open-source data pipeline, data lake, and warehouse stack. Strong Python skills for data workflows and automation. Hands-on orchestration experience with Maestro, Airflow, or similar. Practical experience using LLMs or other AI models for data tasks. Solid grasp of data quality, enrichment, and business context. Experience with dashboards and BI using Apache Superset (or similar tools). Strong communication and problem-solving skills.
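As an illustration of the real-time pipeline work described above, here is a minimal Spark Structured Streaming sketch that reads change events from Kafka and lands them as Parquet; the broker, topic, and paths are hypothetical, and a real setup might write to Iceberg or ClickHouse instead.

```python
# Illustrative Spark Structured Streaming sketch (broker, topic, and paths
# are hypothetical): read change events from Kafka and land them as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdc-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "orders.cdc")                  # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/lake/orders_cdc/")                 # hypothetical sink
    .option("checkpointLocation", "/data/checkpoints/orders_cdc/")
    .start()
)
query.awaitTermination()
```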
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
Experience in building PySpark processes. Proficient in understanding distributed computing principles. Experience in managing a Hadoop cluster with all services. Experience with NoSQL databases and messaging systems like Kafka. Designing, building, installing, configuring, and supporting Hadoop. Perform analysis of vast data stores. Good understanding of cloud technology. Must have strong technical experience in design, mapping specifications, HLD, and LLD. Must have the ability to relate to both business and technical members of the team and possess excellent communication skills. Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with our internal Archival Service Platform for efficient data purging. Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management. Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs. Develop and maintain data platforms using PySpark. Work with AWS and Big Data, design and implement data pipelines, and ensure data quality and integrity. Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. Implement and manage agents for monitoring, logging, and automation within AWS environments. Handling migration from PySpark to AWS.
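The following is a minimal, illustrative PySpark-on-AWS sketch of the kind of pipeline this posting describes: raw JSON read from S3 and written back as partitioned Parquet. The bucket names and columns are hypothetical.

```python
# Illustrative PySpark-on-AWS sketch (bucket names and columns are
# hypothetical): read raw JSON from S3 and write partitioned Parquet back.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-ingest").getOrCreate()

raw = spark.read.json("s3://raw-bucket/events/")          # hypothetical bucket

cleaned = (
    raw.filter(F.col("event_type").isNotNull())           # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))   # derive partition key
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://curated-bucket/events/")               # hypothetical bucket
)
```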
Posted 1 week ago
5.0 - 7.0 years
5 - 14 Lacs
Pune, Gurugram, Bengaluru
Work from Office
• Hands-on experience in object-oriented programming using Python, PySpark, APIs, SQL, BigQuery, and GCP • Building data pipelines for huge volumes of data • Dataflow, Dataproc, and BigQuery • Deep understanding of ETL concepts
Posted 1 week ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Skills desired: Strong SQL (multi-pyramid SQL joins). Python skills (FastAPI or Flask framework). PySpark. Commitment to work in overlapping hours. GCP knowledge (BigQuery, Dataproc and Dataflow). Amex experience is preferred (not mandatory). Power BI preferred (not mandatory). Flask, PySpark, Python, SQL
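For the API-framework skills mentioned above, here is a minimal, illustrative FastAPI sketch; the endpoint and fields are hypothetical and unrelated to any Amex system.

```python
# Illustrative FastAPI sketch (endpoint and fields are hypothetical): a small
# service exposing the kind of API endpoints this role mentions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Txn(BaseModel):
    account_id: str
    amount: float

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/transactions")
def add_transaction(txn: Txn) -> dict:
    # In a real service this would insert into a database or push to a queue.
    return {"received": txn.account_id, "amount": txn.amount}
```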
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Software Engineer, PySpark This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority It’s a chance to hone your existing technical skills and advance your career We're offering this role at associate vice president level What you'll do In your new role, you’ll engineer and maintain innovative, customer centric, high performance, secure and robust solutions. You’ll be working within a feature team and using your extensive experience to engineer software, scripts and tools that are often complex, as well as liaising with other engineers, architects and business analysts across the platform. You’ll Also Be Producing complex and critical software rapidly and of high quality which adds value to the business Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning Collaborating to optimise our software engineering capability Designing, producing, testing and implementing our working code Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations The skills you'll need You’ll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need at least nine years of experience in PySpark, SQL and AWS. You’ll Also Need Experience of working with development and testing tools, bug tracking tools and wikis Experience in multiple programming languages or low code toolsets Experience of DevOps, Testing and Agile methodology and associated toolsets A background in solving highly complex, analytical and numerical problems Experience of implementing programming best practice, especially around scalability, automation, virtualisation, optimisation, availability and performance
Posted 1 week ago
5.0 - 8.0 years
7 - 10 Lacs
Chennai, Bengaluru
Work from Office
Availability: Immediate preferred Key Responsibilities: Design and implement advanced data science workflows using Azure Databricks. Collaborate with cross-functional teams to scale data pipelines. Optimize and fine-tune PySpark jobs for performance and efficiency. Support real-time analytics and big data use cases in a remote-first agile environment. Required Skills: Proven experience in Databricks, PySpark, and big data architecture. Ability to work with data scientists to operationalize models. Strong understanding of data governance, security, and performance. Location: Delhi NCR,Bangalore,Chennai,Pune,Kolkata,Ahmedabad,Mumbai,Hyderabad
Posted 1 week ago
175.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? This role will be part of the Treasury Applications Platform team. We are currently modernizing our platform, migrating it to GCP. You will contribute towards making the platform more resilient and secure for future regulatory requirements and ensuring compliance and adherence to Federal Regulations. Preferably a BS or MS degree in Computer Science, Computer Engineering, or another technical discipline. 10+ years of software development experience. Ability to effectively interpret technical and business objectives and challenges and articulate solutions. Willingness to learn new technologies and exploit them to their optimal potential. Strong experience in Finance, Controllership, and Treasury applications. Strong background with Java, Python, PySpark, SQL, concurrency/parallelism, Oracle, big data, and in-memory computing platforms. Cloud experience with GCP would be a preference. Conduct IT requirements gathering. Define problems and provide solution alternatives. Solution architecture and system design. Create detailed system design documentation. Implement deployment plans. Understand business requirements with the objective of providing high-quality IT solutions. Support the team in different phases of the project including problem definition, effort estimation, diagnosis, solution generation, design and deployment. Under supervision, participate in unit-level and organizational initiatives with the objective of providing high-quality and value-adding consulting solutions. Troubleshoot issues, diagnose problems, and conduct root-cause analysis. Perform secondary research as instructed by supervisor to assist in strategy and business planning. Minimum Qualifications: Strong experience with cloud architecture. Deep understanding of SDLC, OOAD, CI/CD, containerization, Agile, Java, PL/SQL. Preferred Qualifications: GCP, Big Data processing systems, Finance Treasury Cash Management, Kotlin experience, Kafka, Open Telemetry, Network. We back you with benefits that support your holistic well-being so you can be and deliver your best. 
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial-well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary JD AWS PySpark Data Engineer Experience: 6-9 years as a Data Engineer with a strong focus on PySpark and large-scale data processing. PySpark Expertise: Decent to proficient in writing optimized PySpark code, including working with DataFrames, Spark SQL, and performing complex transformations. AWS Cloud Proficiency: Fair experience with core AWS services, such as S3, Glue, EMR, Lambda, and Redshift, with the ability to manage and optimize data workflows on AWS. Performance Optimization: Proven ability to optimize PySpark jobs for performance, including experience with partitioning, caching, and handling skewed data. Problem-Solving Skills: Strong analytical and problem-solving skills, with a focus on troubleshooting data issues and optimizing performance in distributed environments. Communication and Collaboration: Excellent communication skills to work effectively with cross-functional teams and clearly document technical processes. Added advantage: AWS Glue ETL: Hands-on experience with AWS Glue ETL jobs, including creating and managing workflows, handling job bookmarks, and implementing transformations. Database: Good working knowledge of data warehouses like Redshift.
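As an illustration of the performance-optimization points above, here is a small PySpark sketch of two common patterns: caching a reused DataFrame and salting a skewed join key. The tables, columns, and salt factor are hypothetical.

```python
# Illustrative sketch of two tuning patterns the posting mentions: caching a
# reused DataFrame and salting a skewed join key. Tables, columns, and the
# salt factor are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("skew-demo").getOrCreate()

facts = spark.table("analytics.page_views")   # hypothetical fact table, skewed on user_id
dims = spark.table("analytics.users")         # hypothetical dimension table

facts.cache()                                 # reuse across several actions
SALT = 8                                      # hypothetical salt factor

# Spread hot keys across SALT buckets, and replicate the dimension to match.
salted_facts = facts.withColumn("salt", (F.rand() * SALT).cast("int"))
salted_dims = dims.crossJoin(
    spark.range(SALT).withColumnRenamed("id", "salt")
)

joined = salted_facts.join(salted_dims, ["user_id", "salt"])
print(joined.count())
```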
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description About the Company - We’re Salesforce, the Customer Company, inspiring the future of business with AI+ Data +CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good – you’ve come to the right place. About The Role We’re looking for an experienced Data Scientist who will help us build marketing attribution, causal inference, and uplift models to improve the effectiveness and efficiency of our marketing efforts. This person will also design experiments and help us drive consistent approach to experimentation and campaign measurement to support a range of marketing, customer engagement, and digital use cases. This Lead Data Scientist brings significant experience in designing, developing, and delivering statistical models and AI/ML algorithms for marketing and digital optimization use cases on large-scale data sets in a cloud environment. They show rigor in how they prototype, test, and evaluate algorithm performance both in the testing phase of algorithm development and in managing production algorithms. They demonstrate advanced knowledge of statistical and machine learning techniques along with ensuring the ethical use of data in the algorithm design process. At Salesforce, Trust is our number one value and we expect all applications of statistical and machine learning models to adhere to our values and policies to ensure we balance business needs with responsible uses of technology. Responsibilities As part of the Marketing Effectiveness Data Science team within the Salesforce Marketing Data Science organization, develop statistical and machine learning models to improve marketing effectiveness - e.g., marketing attribution models, causal inference models, uplift models, etc. Develop optimization and simulation algorithms to provide marketing investment and allocation recommendations to improve ROI by optimizing spend across marketing channels. Own the full lifecycle of model development from ideation and data exploration, algorithm design and testing, algorithm development and deployment, to algorithm monitoring and tuning in production. Design experiments to support marketing, customer experience, and digital campaigns and develop statistically sound models to measure impact. Collaborate with other data scientists to develop and operationalize consistent approaches to experimentation and campaign measurement. Be a master in cross-functional collaboration by developing deep relationships with key partners across the company and coordinating with working teams. Constantly learn, have a clear pulse on innovation across the enterprise SaaS, AdTech, paid media, data science, customer data, and analytics communities. Required Skills 8+ years of experience designing models for marketing optimization such as multi-channel attribution models, customer lifetime value models, propensity models, uplift models, etc. using statistical and machine learning techniques. 8+ years of experience using advanced statistical techniques for experiment design (A/B and multi-cell testing) and causal inference methods for understanding business impact. 
Must have multiple, robust examples of using these techniques to measure effectiveness of marketing efforts and to solve business problems on large-scale data sets. 8+ years of experience with one or more programming languages such as Python, R, PySpark, Java. Expert-level knowledge of SQL with strong data exploration and manipulation skills. Experience using cloud platforms such as GCP and AWS for model development and operationalization is preferred. Must have superb quantitative reasoning and interpretation skills with strong ability to provide analysis-driven business insight and recommendations. Excellent written and verbal communication skills; ability to work well with peers and leaders across data science, marketing, and engineering organizations. Creative problem-solver who simplifies problems to their core elements. B2B customer data experience a big plus. Advanced Salesforce product knowledge is also a plus.
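Purely as an illustration of the experiment-measurement work described above, here is a minimal two-proportion z-test sketch for an A/B test; the conversion counts are made-up example inputs, not real campaign data.

```python
# Illustrative two-proportion z-test of the kind used to measure campaign
# lift in an A/B test. The conversion counts below are made-up example inputs.
import math
from scipy.stats import norm

control_conv, control_n = 420, 10_000     # hypothetical control cell
treat_conv, treat_n = 505, 10_000         # hypothetical treatment cell

p_c, p_t = control_conv / control_n, treat_conv / treat_n
p_pool = (control_conv + treat_conv) / (control_n + treat_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))

z = (p_t - p_c) / se
p_value = 2 * (1 - norm.cdf(abs(z)))      # two-sided test

print(f"lift = {p_t - p_c:.4f}, z = {z:.2f}, p-value = {p_value:.4f}")
```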
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY’s GDS Tax Technology team’s mission is to develop, implement and integrate technology solutions that better serve our clients and engagement teams. As a member of EY’s core Tax practice, you’ll develop a deep tax technical knowledge and outstanding database, data analytics and programming skills. Ever-increasing regulations require tax departments to gather, organize and analyse more data than ever before. Often the data necessary to satisfy these ever-increasing and complex regulations must be collected from a variety of systems and departments throughout an organization. Effectively and efficiently handling the variety and volume of data is often extremely challenging and time consuming for a company. EY's GDS Tax Technology team members work side-by-side with the firm's partners, clients and tax technical subject matter experts to develop and incorporate technology solutions that enhance value-add, improve efficiencies and enable our clients with disruptive and market leading tools supporting Tax. GDS Tax Technology works closely with clients and professionals in the following areas: Federal Business Tax Services, Partnership Compliance, Corporate Compliance, Indirect Tax Services, Human Capital, and Internal Tax Services. GDS Tax Technology provides solution architecture, application development, testing and maintenance support to the global TAX service line both on a pro-active basis and in response to specific requests. EY is currently seeking a Data Engineer - Staff to join our Tax Technology practice in India. Key Responsibilities Must have experience Azure Databricks. Must have strong knowledge of Python and PySpark programing. Must have strong Azure SQL Database and Azure SQL Datawarehouse concepts. Develops, maintains, and optimizes all data layer components for new and existing systems, including databases, stored procedures, ETL packages, and SQL queries Experience on Azure data platform offerings Ability to effectively communicate with other team members and stakeholders Qualification & Experience Required Candidates should have between 1.5 and 3 years of experience in Azure Data Platform (Azure Databricks) with strong knowledge of Python and PySpark is required Strong verbal and written communications skills Ability to work as an individual contributor. Experience on Azure Data Factory or SSIS or any other ETL tools. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Remote
Role & responsibilities Key Responsibilities: At least 5 years of experience in data engineering with a strong background on Azure Databricks and Scala/Python. Databricks with knowledge in Pyspark Database: Oracle or any other database Programming: Python with awareness of Streamlit
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come. Role Overview The Senior Tech Lead - AWS Data Engineering leads the design, development and optimization of data solutions on the AWS platform. The jobholder has a strong background in data engineering, cloud architecture, and team leadership, with a proven ability to deliver scalable and secure data systems. Responsibilities Lead the design and implementation of AWS-based data architectures and pipelines. Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and ensure alignment with business goals. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in AWS data environments. Stay updated on the latest AWS technologies and industry trends. Key Technical Skills & Responsibilities Overall 10+ years of experience in IT. Minimum 5-7 years in the design and development of cloud data platforms using AWS services. Must have experience in the design and development of data lake / data warehouse / data analytics solutions using AWS services like S3, Lake Formation, Glue, Athena, EMR, Lambda, and Redshift. Must be aware of the AWS access control and data security features like VPC, IAM, Security Groups, KMS, etc. Must be good with Python and PySpark for data pipeline building. Must have data modeling experience, including S3 data organization. Must have an understanding of Hadoop components, NoSQL databases, graph databases and time series databases, and the AWS services available for those technologies. Must have experience working with structured, semi-structured and unstructured data. Must have experience with streaming data collection and processing. Kafka experience is preferred. Experience migrating data warehouse / big data applications to AWS is preferred. Must be able to use Gen AI services (like Amazon Q) for productivity gains. Eligibility Criteria Bachelor’s degree in Computer Science, Data Engineering, or a related field. Extensive experience with AWS data services and tools. AWS certification (e.g., AWS Certified Data Analytics - Specialty). Experience with machine learning and AI integration in AWS environments. Strong understanding of data modeling, ETL/ELT processes, and cloud integration. Proven leadership experience in managing technical teams. Excellent problem-solving and communication skills. Our Offering Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion sharing events. Attractive Salary and Company Initiative Benefits Courses and conferences Hybrid work culture Let’s grow together.
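To make the Athena/S3 side of this stack concrete, here is a minimal, illustrative boto3 sketch that submits an Athena query and polls for completion; the region, database, table, and result bucket are hypothetical.

```python
# Illustrative boto3/Athena sketch (region, database, table, and bucket are
# hypothetical): submit a query and poll until it finishes.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")   # hypothetical region

resp = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS n FROM events GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics"},          # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)
query_id = resp["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"
    ]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print("Query finished with state:", state)
```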
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come. Roles and Responsibilities The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions. The jobholder has extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects. Responsibilities Lead the design and implementation of Databricks-based data solutions. Architect and optimize data pipelines for batch and streaming data. Provide technical leadership and mentorship to a team of data engineers. Collaborate with stakeholders to define project requirements and deliverables. Ensure best practices in data security, governance, and compliance. Troubleshoot and resolve complex technical issues in Databricks environments. Stay updated on the latest Databricks features and industry trends. Key Technical Skills & Responsibilities Experience in data engineering using Databricks or Apache Spark-based platforms. Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion. Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse. Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation. Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing. Familiarity with Delta Lake, Delta Live Tables, and medallion architecture for data lakehouse implementations. Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation. Design and implement Azure Key Vault and scoped credentials. Knowledge of Git for source control and CI/CD integration for Databricks workflows; cost optimization and performance tuning. Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups. Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus. Ability to define best practices, support multiple projects, and sometimes mentor junior engineers is a plus. Must have experience working with streaming data sources and Kafka (preferred). Eligibility Criteria Bachelor’s degree in Computer Science, Data Engineering, or a related field Extensive experience with Databricks, Delta Lake, PySpark, and SQL Databricks certification (e.g., Certified Data Engineer Professional) Experience with machine learning and AI integration in Databricks Strong understanding of cloud platforms (AWS, Azure, or GCP) Proven leadership experience in managing technical teams Excellent problem-solving and communication skills Our Offering Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion sharing events. 
Attractive Salary and Company Initiative Benefits Courses and conferences Attractive Salary Hybrid work culture Let’s grow together.
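As an illustration of the medallion-architecture work this posting names, here is a minimal PySpark/Delta sketch promoting raw "bronze" events to a cleaned "silver" table; the paths and schema are hypothetical.

```python
# Illustrative medallion-style sketch (paths and schema are hypothetical):
# promote raw "bronze" events to a cleaned "silver" Delta table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")  # hypothetical path

silver = (
    bronze.filter(F.col("event_id").isNotNull())       # drop rows without a key
    .dropDuplicates(["event_id"])                      # de-duplicate on the key
    .withColumn("ingest_date", F.to_date("ingest_ts")) # derive partition column
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .save("/mnt/lake/silver/events")                   # hypothetical path
)
```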
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
A.P. Moller - Maersk A.P. Moller – Maersk is the global leader in container shipping services. The business operates in 130 countries and employs 80,000 staff. An integrated container logistics company, Maersk aims to connect and simplify its customers’ supply chains. Today, we have more than 180 nationalities represented in our workforce across 131 countries, and this means we have an elevated level of responsibility to continue to build an inclusive workforce that is truly representative of our customers, their customers, and our vendor partners too. We are responsible for moving 20% of global trade and are on a mission to become the Global Integrator of Container Logistics. To achieve this, we are transforming into an industrial digital giant by combining our assets across air, land, ocean, and ports with our growing portfolio of digital assets to connect and simplify our customers’ supply chains through global end-to-end solutions, all the while rethinking the way we engage with customers and partners. Key Responsibilities: Partner with business, product, and engineering teams to define problem statements, evaluate feasibility, and design AI/ML-driven solutions that deliver measurable business value. Lead and execute end-to-end AI/ML projects — from data exploration and model development to validation, deployment, and monitoring in production. Drive solution architecture using techniques in data engineering, programming, machine learning, NLP, and Generative AI. Champion the scalability, reproducibility, and sustainability of AI solutions by establishing best practices in model development, CI/CD, and performance tracking. Guide junior and associate AI/ML engineers through technical mentoring, code reviews, and solution reviews. Identify and evangelize the adoption of emerging tools, technologies, and methodologies across teams. Translate technical outputs into actionable insights for business stakeholders through storytelling, data visualizations, and stakeholder engagement. We are looking for: A seasoned AI/ML engineer with 7+ years of hands-on experience delivering enterprise-grade AI/ML solutions. Advanced proficiency in Python, SQL, PySpark, and experience working with cloud platforms (Azure preferred) and tools such as Databricks, Synapse, ADF, and Web Apps. Strong expertise in applied text analytics, NLP, and Generative AI, with real-world deployment exposure. Solid understanding of model evaluation, optimization, bias mitigation, and monitoring in production. A problem solver with scientific rigor, strong business acumen, and the ability to bridge the gap between data and decisions. Prior experience in leading cross-functional AI initiatives or collaborating with engineering teams to deploy ML pipelines. Bachelor's or master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field. A PhD is a plus. Prior understanding of the business domain of shipping and logistics is an advantage. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. 
We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
Posted 1 week ago
5.0 - 10.0 years
15 - 22 Lacs
Pune, Chennai
Hybrid
We are looking for Data Engineers with 5-10 years of experience; we need candidates who can join us within 15 days. Exp: 5-10 years. Location: Chennai/Pune. Mode: Hybrid - 3 days a week. What we need: Candidates with good exposure to the below skills: Azure Databricks, Azure Data Factory, Azure Data Lake, DevOps, Python, PySpark, CDC
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Key responsibilities: Collaborate with business, platform and technology stakeholders to understand the scope of projects. Perform comprehensive exploratory data analysis at various levels of granularity of data to derive inferences for further solutioning/experimentation/evaluation. Design, develop and deploy robust enterprise AI solutions using Generative AI, NLP, machine learning, etc. Continuously focus on providing business value while ensuring technical sustainability. Promote and drive adoption of cutting-edge data science and AI practices within the team. Continuously stay up to date on relevant technologies and use this knowledge to push the team forward. We are looking for: A team player having 4-7 years of experience in the field of data science and AI. Proficiency with programming/querying languages like Python, SQL, and PySpark, along with Azure cloud platform tools like Databricks, ADF, Synapse, Web Apps, etc. An individual with strong work experience in areas of text analytics, NLP and Generative AI. A person with a scientific and analytical thinking mindset comfortable with brainstorming and ideation. A doer with deep interest in driving business outcomes through AI/ML. A candidate with a bachelor’s or master’s degree in engineering or computer science, with/without a specialization within the field of AI/ML. A candidate with strong business acumen and desire to collaborate with business teams and help them by solving business problems. Prior understanding of the business domain of shipping and logistics is an advantage. Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or to perform a job, please contact us by emailing accommodationrequests@maersk.com.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Senior Data Science Lead
Primary Skills: Hypothesis Testing, T-Test, Z-Test, Regression (Linear, Logistic), Python/PySpark, SAS/SPSS, Statistical analysis and computing, Probabilistic Graphical Models, Great Expectations, Evidently AI, Forecasting (Exponential Smoothing, ARIMA, ARIMAX), Tools (Kubeflow, BentoML), Classification (Decision Trees, SVM), ML Frameworks (TensorFlow, PyTorch, scikit-learn, CNTK, Keras, MXNet), Distance Metrics (Hamming, Euclidean, Manhattan), R/RStudio
Job requirements:
The Agentic AI Lead is a pivotal role responsible for driving the research, development, and deployment of semi-autonomous AI agents to solve complex enterprise challenges. This role involves hands-on experience with LangGraph, leading initiatives to build multi-agent AI systems that operate with greater autonomy, adaptability, and decision-making capability. The ideal candidate will have deep expertise in LLM orchestration, knowledge graphs, reinforcement learning (RLHF/RLAIF), and real-world AI applications. As a leader in this space, they will be responsible for designing, scaling, and optimizing agentic AI workflows, ensuring alignment with business objectives while pushing the boundaries of next-generation AI automation.
Key Responsibilities
1. Architecting & Scaling Agentic AI Solutions
Design and develop multi-agent AI systems using LangGraph for workflow automation, complex decision-making, and autonomous problem-solving.
Build memory-augmented, context-aware AI agents capable of planning, reasoning, and executing tasks across multiple domains.
Define and implement scalable architectures for LLM-powered agents that integrate seamlessly with enterprise applications.
2. Hands-On Development & Optimization
Develop and optimize agent orchestration workflows using LangGraph, ensuring high performance, modularity, and scalability.
Implement knowledge graphs, vector databases (Pinecone, Weaviate, FAISS), and retrieval-augmented generation (RAG) techniques for enhanced agent reasoning (see the retrieval sketch after this posting).
Apply reinforcement learning (RLHF/RLAIF) methodologies to fine-tune AI agents for improved decision-making.
3. Driving AI Innovation & Research
Lead cutting-edge AI research in agentic AI, LangGraph, LLM orchestration, and self-improving AI agents.
Stay ahead of advancements in multi-agent systems, AI planning, and goal-directed behavior, applying best practices to enterprise AI solutions.
Prototype and experiment with self-learning AI agents, enabling autonomous adaptation based on real-time feedback loops.
4. AI Strategy & Business Impact
Translate agentic AI capabilities into enterprise solutions, driving automation, operational efficiency, and cost savings.
Lead agentic AI proof-of-concept (PoC) projects that demonstrate tangible business impact, and scale successful prototypes into production.
5. Mentorship & Capability Building
Lead and mentor a team of AI engineers and data scientists, fostering deep technical expertise in LangGraph and multi-agent architectures.
Establish best practices for model evaluation, responsible AI, and real-world deployment of autonomous AI agents.
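(Not part of the posting.) As referenced above, a minimal sketch of the RAG-style vector lookup an agent might perform, using FAISS; the embedding dimension and random vectors are placeholders standing in for real document and query embeddings:

# Illustrative RAG retrieval sketch (random vectors stand in for real embeddings).
import numpy as np
import faiss

dim = 384                                                    # assumed embedding dimension
doc_vectors = np.random.rand(1000, dim).astype("float32")    # placeholder document embeddings

index = faiss.IndexFlatL2(dim)                               # exact nearest-neighbour index
index.add(doc_vectors)

query = np.random.rand(1, dim).astype("float32")             # placeholder embedded user query
distances, ids = index.search(query, 5)                      # ids of the top-5 passages to feed the LLM prompt
print(ids[0])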
Posted 1 week ago