
2873 Airflow Jobs - Page 49

JobPe aggregates these listings for easy access; you apply directly on the original job portal.

4.0 - 9.0 years

15 - 25 Lacs

Hyderabad, Chennai

Work from Office

Source: Naukri

Interested candidates can also apply via sanjeevan.natarajan@careernet.in

Role & responsibilities
- Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
- End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
- Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
- Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
- Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations (see the sketch after this posting).
- Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
- Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
- Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
- Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
- Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.

Preferred candidate profile
- Python, SQL, PySpark, Databricks, AWS (mandatory)
- Leadership experience in Data Engineering/Architecture
- Added advantage: experience in Life Sciences / Pharma
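
As a rough illustration of the orchestration work this role describes, a minimal Airflow DAG sketch is shown below. The task names, schedule, and plain-Python callables are assumptions for illustration; a real pipeline would typically trigger Databricks jobs through the Databricks provider operators or the Jobs API rather than local functions.

```python
# Illustrative only: a minimal ingest -> transform -> validate DAG.
# DAG id, schedule, and callables are assumptions, not the employer's actual pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw_data(**context):
    # Pull source files into the landing zone (placeholder logic).
    print("ingesting raw data")


def transform_data(**context):
    # Apply PySpark/Databricks transformations (placeholder logic).
    print("transforming data")


def run_quality_checks(**context):
    # Enforce validation, lineage, and quality rules (placeholder logic).
    print("running data quality checks")


with DAG(
    dag_id="example_ingest_transform_validate",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_raw_data)
    transform = PythonOperator(task_id="transform", python_callable=transform_data)
    validate = PythonOperator(task_id="validate", python_callable=run_quality_checks)

    ingest >> transform >> validate
```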

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description

Role Title: Team Lead and Lead Developer – Backend and Database (Node)
Role Type: Full time
Role Reports to: Chief Technology Officer
Category: Regular / Fixed Term
Job location: 8th floor, E Block, IITM Research Park, Taramani

Job Overview
We're seeking an experienced Senior Backend and Database Developer and Team Lead for our backend team. The ideal candidate will combine technical expertise in full-stack development with extensive backend experience, strong process optimization skills, and innovative thinking to drive team efficiency and product quality.

Job Specifications
Educational Qualifications: Any UG/PG graduates
Experience: 5+ years

Key Job Responsibilities

Software architecture design
- Architect and oversee development of the backend in Node
- Familiarity with MVC and design patterns, and a strong grasp of data structures
- Basic database theory: ACID vs eventually consistent, OLTP vs OLAP
- Different types of databases: relational stores, K/V stores, text stores, graph DBs, vector DBs, time series DBs

Database design & structures
- Experience with data modeling concepts including normalization, normal forms, star schema (management and evolution), and dimensional modeling
- Expertise in SQL DBs (MySQL, PostgreSQL) and NoSQL DBs (MongoDB, Redis)
- Data pipeline design based on operational principles: dealing with failures, restarts, reruns, pipeline changes, and various file storage formats

Backend & API frameworks & other services
- Develop and maintain RESTful, JSON-RPC, and other APIs for various applications
- Understanding of backend JS frameworks such as Express.js and NestJS, and documentation tools like Postman and Swagger
- Experience with webhooks, callbacks, and other event-driven systems, and third-party solution integrations (Firebase, Google Maps, Amplify and others)

QA and testing
- Automation testing and tooling knowledge for application functionality validation and QA
- Experience with testing routines and fixes using various testing tools (JMeter, Artillery or others)

Load balancers, caching and serving
- Experience with event serving (Apache Kafka and others), caching and processing (Redis, Apache Spark or other frameworks), and scaling (Kubernetes and other systems)
- Experience with orchestrators like Airflow for huge data workloads; scripting and automation for various purposes including scheduling and logging

Production, deployment & monitoring
- Experience with CI/CD pipelines using tools like Jenkins/CircleCI, and Docker for containerization
- Experience in deployment and monitoring of apps on cloud platforms (e.g., AWS, Azure) and bare-metal configurations

Documentation, version control and ticketing
- Version control with Git, and ticketing bugs and features with tools like Jira or Confluence
- Backend documentation and referencing with tools like Swagger and Postman
- Experience in creating ERDs for various data types and models, and documentation of evolving models

Behavioral competencies
- Attention to detail: Ability to maintain accuracy and precision in financial records, reports, and analysis, ensuring compliance with accounting standards and regulations.
- Integrity and Ethics: Commitment to upholding ethical standards, confidentiality, and honesty in financial practices and interactions with stakeholders.
- Time management: Effective prioritization of tasks, efficient allocation of resources, and timely completion of assignments to meet sprint deadlines and achieve goals.
- Adaptability and Flexibility: Capacity to adapt to changing business environments, new technologies, and evolving accounting standards, while remaining flexible in response to unexpected challenges.
- Communication & collaboration: Experience presenting to stakeholders and executive teams; ability to bridge technical and non-technical communication; excellence in written documentation and process guidelines to work with other frontend teams.

Leadership competencies
- Team leadership and team building: Lead and mentor a backend and database development team, including junior developers, and ensure good coding standards. Follow Agile methodology and conduct Scrum meetings for sync-ups.
- Strategic Thinking: Ability to develop and implement long-term goals and strategies aligned with the organization’s vision; ability to adopt new tech and handle tech debt to bring the team up to speed with client requirements.
- Decision-Making: Capable of making informed and effective decisions, considering both short-term and long-term impacts; insight into resource allocation and sprint building for various projects.
- Team Building: Ability to foster a collaborative and inclusive team environment, promoting trust and cooperation among team members.
- Code reviews: Troubleshooting, weekly code reviews, feature documentation and versioning, and standards improvement.
- Improving team efficiency: Research and integrate AI-powered development tools (GitHub Copilot, Amazon CodeWhisperer).

Added advantage points
- AI/ML applications: Experience in AI/ML application backend workflows (e.g., MLflow) and serving the models
- Data processing & maintenance: Familiarity with at least one data processing platform (e.g., Spark, Flink, Beam/Google Dataflow, AWS Batch); experience with Elasticsearch and other client-side data processing frameworks; understanding of data management and analytics with metadata catalogs (e.g., AWS Glue) and data warehousing (e.g., AWS Redshift)
- Data governance: Quality control, policies around data duplication, definitions, and company-wide processes around security and privacy

Interested candidates can share their updated resumes to the below-mentioned ID.
Contact Person: Janani Santhosh, Senior HR Executive
Email Id: careers@plenome.com

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We’re hiring a Senior ML Engineer (MLOps) — 3-5 yrs
Location: Chennai

What you’ll do
- Tame data → pull, clean, and shape structured & unstructured data.
- Orchestrate pipelines → Airflow / Step Functions / ADF… your call.
- Ship models → build, tune, and push to prod on SageMaker, Azure ML, or Vertex AI.
- Scale → Spark / Databricks for the heavy lifting.
- Automate everything → Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow (see the sketch after this posting).
- Pair up → work with engineers, architects, and business folks to solve real problems, fast.

What you bring
- 3+ yrs hands-on MLOps (4-5 yrs total software experience).
- Proven chops on one hyperscaler (AWS, Azure, or GCP).
- Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn.
- You debug Kubernetes in your sleep and treat Dockerfiles like breathing.
- You prototype with open-source first, choose the right tool, then make it scale.
- Sharp mind, low ego, bias for action.

Nice-to-haves
- SageMaker, Azure ML, or Vertex AI in production.
- Love for clean code, clear docs, and crisp PRs.

Why Datadivr?
- Domain focus: we live and breathe F&B — your work ships to plants, not just slides.
- Small team, big autonomy: no endless layers; you own what you build.

📬 How to apply
Shoot your CV + a short note on a project you shipped to careers@datadivr.com or DM me here. We reply to every serious applicant. Know someone perfect? Please share — good people know good people.
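
The "Automate everything" line above mentions MLflow; below is a minimal, illustrative experiment-tracking sketch. The dataset, model, and metric are arbitrary assumptions, and in practice a tracking server URI and registry would also be configured.

```python
# Illustrative only: log params, a metric, and a model artifact with MLflow.
# The RandomForest baseline and synthetic dataset are assumptions for the example.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                 # hyperparameters
    mlflow.log_metric("accuracy", accuracy)   # evaluation metric
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```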

Posted 2 weeks ago

Apply

0 years

0 - 0 Lacs

Panaji

On-site

- Education: Bachelor’s or master’s in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
- Hands-On ML/AI Experience: Proven record of deploying, fine-tuning, or integrating large-scale NLP models or other advanced ML solutions.
- Programming & Frameworks: Strong proficiency in Python (PyTorch or TensorFlow) and familiarity with MLOps tools (e.g., Airflow, MLflow, Docker).
- Security & Compliance: Understanding of data privacy frameworks, encryption, and secure data handling practices, especially for sensitive internal documents.
- DevOps Knowledge: Comfortable setting up continuous integration/continuous delivery (CI/CD) pipelines, container orchestration (Kubernetes), and version control (Git).
- Collaborative Mindset: Experience working cross-functionally with technical and non-technical teams; ability to clearly communicate complex AI concepts.

Role Overview
Collaborate with cross-functional teams to build AI-driven applications for improved productivity and reporting. Lead integrations with hosted AI solutions (ChatGPT, Claude, Grok) for immediate functionality without transmitting sensitive data, while laying the groundwork for a robust in-house AI infrastructure. Develop and maintain on-premises large language model (LLM) solutions (e.g. Llama) to ensure data privacy and secure intellectual property.

Key Responsibilities
- LLM Pipeline Ownership: Set up, fine-tune, and deploy on-prem LLMs; manage data ingestion, cleaning, and maintenance for domain-specific knowledge bases.
- Data Governance & Security: Assist our IT department to implement role-based access controls, encryption protocols, and best practices to protect sensitive engineering data.
- Infrastructure & Tooling: Oversee hardware/server configurations (or cloud alternatives) for AI workloads; evaluate resource usage and optimize model performance.
- Software Development: Build and maintain internal AI-driven applications and services (e.g., automated report generation, advanced analytics, RAG interfaces, as well as custom desktop applications); see the sketch after this posting.
- Integration & Automation: Collaborate with project managers and domain experts to automate routine deliverables (reports, proposals, calculations) and speed up existing workflows.
- Best Practices & Documentation: Define coding standards, maintain technical documentation, and champion CI/CD and DevOps practices for AI software.
- Team Support & Training: Provide guidance to data analysts and junior developers on AI tool usage, ensuring alignment with internal policies and limiting model “hallucinations.”
- Performance Monitoring: Track AI system metrics (speed, accuracy, utilization) and implement updates or retraining as necessary.

Job Types: Full-time, Permanent
Pay: ₹30,000.00 - ₹40,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Yearly bonus
Work Location: In person
Application Deadline: 30/06/2025
Expected Start Date: 10/06/2025
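
As a rough, assumption-laden sketch of the retrieval step behind a RAG interface, the toy example below uses TF-IDF in place of a real embedding model and vector store. The documents, query, and prompt template are invented for illustration; in production the retrieved context would be passed to an on-prem LLM.

```python
# Illustrative only: toy retrieval for a RAG-style prompt, no real vector DB or LLM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Monthly progress report template for structural engineering projects.",
    "Procedure for secure handling of confidential client drawings.",
    "Guidelines for preparing load calculation summaries.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]


context = "\n".join(retrieve("how do I format a progress report?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```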

Posted 2 weeks ago

Apply

7.0 years

4 - 7 Lacs

Thiruvananthapuram

On-site

Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.

What you’ll do
- Demonstrate a deep understanding of cloud-native, distributed, microservice-based architectures
- Deliver solutions for complex business problems through the standard software SDLC
- Build strong relationships with both internal and external stakeholders including product, business and sales partners
- Demonstrate excellent communication skills with the ability to both simplify complex problems and also dive deeper if needed
- Build and manage strong technical teams that deliver complex software solutions that scale
- Manage teams with cross-functional skills that include software, quality, reliability engineers, project managers and scrum masters
- Provide deep troubleshooting skills with the ability to lead and solve production and customer issues under pressure
- Leverage strong experience in full-stack software development and public cloud like GCP and AWS
- Mentor, coach and develop junior and senior software, quality and reliability engineers
- Lead with a data/metrics-driven mindset with a maniacal focus towards optimizing and creating efficient solutions
- Ensure compliance with EFX secure software development guidelines and best practices, and be responsible for meeting and maintaining QE, DevSec, and FinOps KPIs
- Define, maintain and report SLAs, SLOs, and SLIs meeting EFX engineering standards, in partnership with the product, engineering and architecture teams
- Collaborate with architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices
- Drive up-to-date technical documentation including support, end-user documentation and runbooks
- Lead Sprint planning, Sprint Retrospectives, and other team activities
- Be responsible for implementation architecture decision making associated with Product features/stories, refactoring work, and EOSL decisions
- Create and deliver technical presentations to internal and external technical and non-technical stakeholders, communicating with clarity and precision, and present complex information in a concise format that is audience appropriate

What experience you need
- Bachelor's degree or equivalent experience
- 7+ years of software engineering experience
- 7+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS
- 7+ years experience with Cloud technology: GCP, AWS, or Azure
- 7+ years experience designing and developing cloud-native solutions
- 7+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes
- 7+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs

What could set you apart
- Self-starter that identifies/responds to priority shifts with minimal supervision
- Strong communication and presentation skills
- Strong leadership qualities
- Demonstrated problem solving skills and the ability to resolve conflicts
- Experience creating and maintaining product and software roadmaps
- Experience overseeing yearly as well as product/project budgets
- Working in a highly regulated environment
- Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others
- UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
- Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and Microservices
- Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle
- Agile environments (e.g. Scrum, XP)
- Relational databases (e.g. SQL Server, MySQL)
- Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
- Developing with modern JDK (v1.7+)
- Automated Testing: JUnit, Selenium, LoadRunner, SoapUI

Posted 2 weeks ago

Apply

12.0 years

1 - 5 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India
Category: Engineering | Hire Type: Employee | Job ID 4746 | Date posted 02/24/2025

Synopsys’ Generative AI Center of Excellence defines the technology strategy to advance applications of Generative AI across the company. The Gen AI COE pioneers the core technologies – platforms, processes, data, and foundation models – to enable generative AI solutions, and partners with business groups and corporate functions to advance AI-focused roadmaps. We are looking for an experienced, passionate, and self-driven individual who possesses both a broad technical strategy and the ability to tackle architectural and modernization challenges. The ideal candidate will help build our enterprise Machine Learning platform, working with a team of enthusiastic and dynamic ML engineers and Data Scientists to help Synopsys R&D teams experiment, train models, and build Gen AI & ML products.

You will be responsible for:
- Building the AI Platform for Synopsys to orchestrate enterprise-wide data pipelines, ML training, and inferencing servers.
- Developing an "AI App Store" ecosystem to enable R&D teams to host Gen AI applications in the cloud.
- Developing capabilities to ship cloud-native (containerized) AI applications/AI systems to on-premises customers.
- Orchestrating GPU scheduling from within the Kubernetes ecosystem (e.g. Nvidia GPU Operator, MIG, and so on).
- Creating reliable and cost-effective hybrid cloud architecture using cutting-edge technologies (e.g. Kubernetes Cluster Federation, Azure Arc, and so on).

Required Qualifications
- BS/MS/PhD in Computer Science/Software Engineering or an equivalent degree
- 12+ years of total experience building systems software, enterprise software applications, and microservices
- Expertise and/or experience in the following programming languages: Go and Python
- Experience building highly scalable REST APIs
- Experience with event-driven software architecture and message brokers (NATS / Kafka); see the sketch after this posting
- Design of complex distributed systems (high-level and low-level systems design)
- Knowing the CAP theorem in depth and applying it in building real-world distributed systems
- In-depth Kubernetes knowledge: able to deploy Kubernetes on-prem, with working experience with managed Kubernetes services (AKS/EKS/GKE) and Kubernetes APIs
- Strong systems knowledge in the Linux kernel, cgroups, namespaces, and Docker
- Experience with at least one cloud provider (AWS/GCP/Azure)
- Ability to solve complex problems using efficient algorithms
- Experience using an RDBMS (PostgreSQL preferred) for storing and querying large sets of data

Nice to have:
- Experience with service meshes (Istio)
- Experience with Kubernetes cluster federation
- Prior experience with AI/ML workflows and tools (PyTorch, MLflow, Airflow, …)
- Experience prototyping, experimenting, and testing with large datasets and analytic data flows in production
- Strong fundamentals in Statistics, Machine Learning, and/or Deep Learning

At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.
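
As a small illustration of the event-driven pattern this role calls for, the sketch below publishes a hypothetical platform event using the kafka-python client. The broker address, topic name, and payload fields are assumptions; a NATS client would follow the same publish-and-consume shape.

```python
# Illustrative only: publish one JSON event to a Kafka topic.
# Broker, topic, and payload are made-up placeholders for this example.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda event: json.dumps(event).encode("utf-8"),
)

# A "training job requested" event for downstream services to consume.
producer.send(
    "ml-platform.training-requests",
    {"job_id": "example-123", "model": "llama-base", "gpus": 4},
)
producer.flush()
```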

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Hyderābād

On-site

Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Associate Director. In this role, you will:
- Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophecy (see the sketch after this posting).
- Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
- Develop and optimize complex SQL queries and Python-based data transformation logic.
- Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
- Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
- Ensure data quality, security, and compliance with best practices.
- Monitor and troubleshoot performance issues in data pipelines.
- Collaborate with cross-functional teams to define data requirements and strategies.

Requirements
To be successful in this role, you should meet the following requirements:
- 12+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
- Hands-on experience with Prophecy for data pipeline development.
- Proficiency in Python for data processing and transformation.
- Experience with Apache Airflow for workflow orchestration.
- Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
- Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
- Solid understanding of data modelling, warehousing, and performance optimization.
- Ability to work in an agile environment and manage multiple priorities effectively.
- Excellent problem-solving skills and attention to detail.
- Experience with Delta Lake and Lakehouse architecture.
- Hands-on experience with Terraform or Infrastructure as Code (IaC).
- Understanding of machine learning workflows in a data engineering context.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI
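
As a rough sketch of the PySpark transformation work inside such a pipeline, the example below reads a raw file, types and aggregates it, and writes a curated output. The paths, column names, and aggregation are assumptions; on Databricks the write would typically target a Delta table rather than plain Parquet.

```python
# Illustrative only: a small batch transformation, not the bank's actual pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-transform").getOrCreate()

# Raw landing-zone file (path and schema are assumptions).
raw = spark.read.option("header", True).csv("/landing/transactions.csv")

daily_totals = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("txn_date", F.to_date("txn_timestamp"))
       .groupBy("txn_date", "account_id")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("txn_count"),
       )
)

# Curated output for downstream SQL / reporting layers.
daily_totals.write.mode("overwrite").parquet("/curated/daily_totals")
```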

Posted 2 weeks ago

Apply

5.0 years

3 - 8 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India Category: Data Science Hire Type: Employee Job ID 8753 Date posted 02/24/2025 We Are: At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation. You Are: As a Data Science Staff member located in Hyderabad, you are a visionary with a passion for data engineering and analytics. You thrive in dynamic environments and are motivated by the challenge of building robust data infrastructure. Your expertise in data modeling, algorithm development, and data pipeline construction is complemented by your ability to derive actionable insights from complex datasets. You possess a deep understanding of modern data stack tools and have hands-on experience with cloud data warehouses, transformation tools, and data ingestion technologies. Your technical acumen is matched by your ability to collaborate effectively with cross-functional teams, providing support and guidance to business users. You stay ahead of the curve by continuously exploring advancements in AI, Generative AI, and machine learning, seeking opportunities to integrate these innovations into your work. Your commitment to best practices in data management and your proficiency in various scripting languages and visualization tools make you an invaluable asset to our team. What You’ll Be Doing: Building the data engineering and analytics infrastructure for our new Enterprise Data Platform using Snowflake and Fivetran. Leading the development of data models, algorithms, data pipelines, and insights to enable data-driven decision-making. Collaborating with team members to shape the design and direction of the data platform. Working end-to-end on data products, from problem understanding to developing data pipelines, dimensional data models, and visualizations. Providing support and advice to business users, including data preparation for predictive and prescriptive modeling. Ensuring consistency of processes and championing best practices in data management. Evaluating and recommending new data tools or processes. Designing, developing, and deploying scalable AI/Generative AI and machine learning models as needed. Providing day-to-day production support to internal business unit customers, implementing enhancements and resolving defects. Maintaining awareness of emerging trends in AI, Generative AI, and machine learning to enhance existing systems and develop innovative solutions. The Impact You Will Have: Driving the development of a cutting-edge data platform that supports enterprise-wide data initiatives. Enabling data-driven decision-making across the organization through robust data models and insights. Enhancing the efficiency and effectiveness of data management processes. Supporting business users in leveraging data for predictive and prescriptive analytics. Innovating and integrating advanced AI and machine learning solutions to solve complex business challenges. Contributing to the overall success of Synopsys by ensuring high-quality data infrastructure and analytics capabilities. What You’ll Need: BS with 5+ years of relevant experience or MS with 3+ years of relevant experience in Computer Sciences, Mathematics, Engineering, or MIS. 
5 years of experience in DW/BI development, reporting, and analytics roles, working with business and key stakeholders. Advanced knowledge of Data Warehousing, SQL, ETL/ELT, dimensional modeling, and databases (e.g., mySQL, Postgres, HANA). Hands-on experience with modern data stack tools, including cloud data warehouses (Snowflake), transformation tools (dbt), and cloud providers (Azure, AWS). Experience with data ingestion tools (e.g., Fivetran, HVR, Airbyte), CI/CD (GitLab, Kubernetes, Airflow), and data catalog tools (e.g., Datahub, Atlan) is a plus. Proficiency in scripting languages like Python, Unix, SQL, Scala, and Java for data extraction and exploration. Experience with visualization tools like Tableau and PowerBI is a plus. Knowledge of machine learning frameworks and libraries (e.g., Pandas, NumPy, TensorFlow, PyTorch) and LLM models is a plus. Understanding of data governance, data integrity, and data quality best practices. Experience with agile development methodologies and change control processes. Who You Are: You are a collaborative and innovative problem-solver with a strong technical background. Your ability to communicate effectively with diverse teams and stakeholders is complemented by your analytical mindset and attention to detail. You are proactive, continuously seeking opportunities to leverage new technologies and methodologies to drive improvements. You thrive in a fast-paced environment and are committed to delivering high-quality solutions that meet business needs. The Team You’ll Be A Part Of: You will join the Business Applications team, a dynamic group focused on building and maintaining the data infrastructure that powers our enterprise-wide analytics and decision-making capabilities. The team is dedicated to innovation, collaboration, and excellence, working together to drive the success of Synopsys through cutting-edge data solutions. Rewards and Benefits: We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process. At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 9 Lacs

Hyderābād

On-site

About the Role:
Grade Level (for internal use): 09

The Team: We are looking for a highly motivated Engineer to join our team supporting the Marketplace Platform. The S&P Global Marketplace technology team consists of geographically diversified software engineers responsible for developing scalable solutions by working directly with the product development team. Our team culture is oriented towards equality in the realm of software engineering irrespective of hierarchy, promoting innovation. One should feel empowered to iterate over ideas and experimentation without having fear of failure.

Impact: You will enable the S&P business to showcase our proprietary S&P Global data, combine it with “curated” alternative data, further enrich it with value-add services from Kensho and others, and deliver it via the clients’ channel of choice to help them make better investment and business decisions, with confidence.

What you can expect: An unmatched experience in handling huge volumes of data, analytics, visualization, and services over cloud technologies, along with an appreciation of the product development life cycle to convert an idea into a revenue-generating stream.

Responsibilities:
We are looking for a self-motivated, enthusiastic and passionate software engineer to develop technology solutions for the S&P Global Marketplace product. The ideal candidate thrives in a highly technical role and will design and develop software using cutting-edge technologies consisting of web applications, data pipelines, big data, machine learning and multi-cloud. The development is already underway, so the candidate would be expected to get up to speed very quickly & start contributing.
- Experience implementing: Web Services (with WCF, RESTful JSON, SOAP, TCP), Windows Services, and Unit Tests.
- Past experience working with AWS, Azure DevOps, Jenkins, Docker, Kubernetes/EKS, Ansible and Prometheus or related cloud technologies.
- Good understanding of single, hybrid and multi-cloud architecture, preferably with hands-on experience.
- Active participation in all scrum ceremonies; follow AGILE best practices effectively.
- Play a key role in the development team to build high-quality, high-performance, scalable code.
- Produce technical design documents and conduct technical walkthroughs.
- Document and demonstrate solutions using technical design docs, diagrams and stubbed code.
- Collaborate effectively with technical and non-technical stakeholders.
- Respond to and resolve production issues.

What we are looking for:
- Minimum of 5-8 years of significant experience in application development.
- Proficient with software development lifecycle (SDLC) methodologies like Agile and Test-driven development.
- Experience working with high-volume data and computationally intensive systems.
- Garbage-collection-friendly programming experience - tuning Java garbage collection & performance is a must.
- Proficiency in the development environment, including IDE, web & application server, Git, Continuous Integration, unit-testing tools and defect management tools.
- Domain knowledge in the Financial Industry and Capital Markets is a plus.
- Excellent communication skills are essential, with strong verbal and writing proficiencies.
- Mentor teams, innovate and experiment, give shape to business ideas and present to key stakeholders.

Required technical skills:
- Build data pipelines.
- Utilize platforms like Snowflake, Talend, Databricks, etc.
- Utilize cloud managed services like AWS Step Functions, AWS Lambda, AWS DynamoDB.
- Develop custom solutions using Apache NiFi, Airflow, Spark, Kafka, Hive, and/or Spring Cloud Data Flow.
- Develop federated data services to provide scalable and performant data APIs: REST, GraphQL, OData.
- Write infrastructure as code to develop sandbox environments.
- Provide analytical capabilities using BI tools like Tableau, Power BI, etc.
- Feed data at scale to clients that are geographically distributed.
- Experience building sophisticated and highly automated infrastructure.
- Experience with automation tools such as Terraform, cloud technologies, CloudFormation, Ansible, etc.
- Demonstrated ability to adapt to new technologies and learn quickly.

Desirable technical skills: Java, Spring Boot, React, HTML/CSS, API development, micro-services pattern, cloud technologies and managed services (preferably AWS), Big Data and Analytics, relational databases (preferably PostgreSQL), NoSQL databases.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What’s In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 311642 Posted On: 2025-06-02 Location: Hyderabad, Telangana, India

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:
- Design, develop, and optimize data pipelines using Azure Databricks, PySpark, and Prophecy.
- Implement and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Apache Airflow for orchestration.
- Develop and optimize complex SQL queries and Python-based data transformation logic.
- Work with version control systems (GitHub, Azure DevOps) to manage code and deployment processes.
- Automate deployment of data pipelines using CI/CD practices in Azure DevOps.
- Ensure data quality, security, and compliance with best practices.
- Monitor and troubleshoot performance issues in data pipelines.
- Collaborate with cross-functional teams to define data requirements and strategies.

Requirements
To be successful in this role, you should meet the following requirements:
- 5+ years of experience in data engineering, working with Azure Databricks, PySpark, and SQL.
- Hands-on experience with Prophecy for data pipeline development.
- Proficiency in Python for data processing and transformation.
- Experience with Apache Airflow for workflow orchestration.
- Strong expertise in Azure Data Factory (ADF) for building and managing ETL processes.
- Familiarity with GitHub and Azure DevOps for version control and CI/CD automation.
- Solid understanding of data modelling, warehousing, and performance optimization.
- Ability to work in an agile environment and manage multiple priorities effectively.
- Excellent problem-solving skills and attention to detail.
- Experience with Delta Lake and Lakehouse architecture.
- Hands-on experience with Terraform or Infrastructure as Code (IaC).
- Understanding of machine learning workflows in a data engineering context.

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSDI

Posted 2 weeks ago

Apply

2.0 years

7 - 10 Lacs

Hyderābād

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design, develop, and support the ETL migration project from DataStage to Databricks or a cloud platform
- Assist in developing and implementing strategies for migrating ETL processes to cloud platforms like Azure
- Participate in assessing the current infrastructure and creating a detailed migration roadmap
- Utilize Unix shell scripting to automate data processing tasks and manage ETL workflows
- Ensure data integrity and performance during the monthly production run
- Provide technical support during the monthly ETL process, resolve any issues, and maintain data quality/integrity on a monthly basis
- Implement and maintain scripts for data extraction, transformation, and loading
- Maintain comprehensive documentation of the ETL and migration processes, including data mappings, ETL workflows, and system configurations
- Assist in training team members and end-users on new cloud-based ETL processes and tools
- Conduct testing and validation to ensure the accuracy and performance of migrated data (see the sketch after this posting)
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- 2+ years of experience working on Microsoft Azure Databricks
- 2+ years of experience in development/coding on Spark/Scala, Python, or PySpark
- 2+ years of relevant DataStage development experience or experience with other ETL tools
- Relevant experience with databases like Teradata and Snowflake
- Hands-on development experience in UNIX scripting
- Experience working on data warehousing projects with Agile methodologies
- Sound knowledge of SQL programming and SQL query skills
- Exposure to job schedulers like Airflow and the ability to create and modify DAGs
- Exposure to DevOps methodology and creating CI/CD deployment pipelines
- Proficiency in learning and adopting new technologies and using them to execute use cases for business problem solving
- Proven, solid communication skills (written and verbal) and excellent analytical skills
- Demonstrated ability to understand the existing application codebase, perform impact analysis, and update the code when required based on the business logic or for optimization
- Demonstrated ability to apply knowledge of principles and techniques to solve technical problems and write code based on technical design

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone.
We believe everyone–of every race, gender, sexuality, age, location and income–deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
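
As a simple, illustrative take on the migration validation mentioned above, the sketch below reconciles row counts and a column total between a legacy extract and the migrated output. The file names and column are assumptions; at scale this comparison would run in Spark against the actual tables rather than with the csv module.

```python
# Illustrative only: reconcile two extracts after an ETL migration.
# File names and the amount column are placeholders, not real project artifacts.
import csv


def summarize(path: str, amount_column: str) -> tuple[int, float]:
    """Return (row_count, column_total) for one extract."""
    with open(path, newline="") as handle:
        rows = list(csv.DictReader(handle))
    total = sum(float(row[amount_column]) for row in rows)
    return len(rows), round(total, 2)


legacy = summarize("legacy_extract.csv", "claim_amount")
migrated = summarize("databricks_extract.csv", "claim_amount")

if legacy == migrated:
    print(f"OK: {legacy[0]} rows, total {legacy[1]} matches")
else:
    print(f"MISMATCH: legacy={legacy} migrated={migrated}")
```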

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Company Description
NielsenIQ’s Custom Engineering Team is looking for a Senior BI/Data Engineer who brings both hands-on expertise and architectural thinking to the table. If you’re passionate about turning data into insight at scale—and thrive on building impactful, modern BI solutions—we want to work with you. Our team transforms complex shopper data into clear, actionable insights for the world’s leading retailers. We build innovative, cloud-based, AI-driven reporting solutions that help clients understand and influence consumer behavior. As a Senior Engineer, you’ll not only develop advanced Power BI solutions, but also help shape the technical direction, mentor team members, and drive best practices across our growing BI and data engineering team.

Job Description

Who you are…
- You bring 7+ years of hands-on experience building BI and data solutions, with deep expertise in Power BI (data modeling, DAX, Power Query, and RLS)
- You have strong SQL skills and a solid grasp of data structures and performance optimization
- You are confident developing enterprise-grade/client-facing dashboards and data models and have a strong eye for usability and design
- You’re passionate about clean, scalable architecture and can navigate the trade-offs in performance, complexity, and maintainability
- You’re eager to mentor others while remaining hands-on in development work
- You thrive in a collaborative environment and are experienced working with cross-functional teams (data engineers, product owners, UX, testers, DevOps, architects)
- You are proactive, self-driven, and excited about using modern technologies such as Azure, Python, GitHub Actions, and MS Fabric
- You stay current on industry trends and tools and actively contribute to team knowledge and innovation

Why we need you…
- Develop and own end-to-end custom reporting solutions — from data ingestion and modeling to dashboard design and deployment
- Design and implement performant data models that power interactive, large-scale retail analytics solutions
- Collaborate closely with product owners, architects, and data engineers to align on requirements and solution design
- Contribute to system integration, including data flows into and out of Power BI, and integration with web portals
- Apply best practices in code development, testing, documentation, and CI/CD using tools like GitHub and Airflow
- Research and implement new Power BI features and cloud capabilities to enhance solution performance and user experience
- Lead peer reviews and provide technical mentorship to junior and mid-level developers
- Work in an agile environment, actively participating in planning, estimation, demos, and retrospectives
- Help define standards and contribute to the team’s long-term BI strategy and toolset selection

Qualifications
Technical skills: Power BI, SQL, ETL & data modelling, data visualization, coding & scripting (e.g. Python, nice to have)
Platform & tools:
- Microsoft ecosystem (Power Query, MS Fabric, Azure Cloud services such as Data Lake)
- Version control and DevOps tools such as GitHub and GitHub Actions, or equivalent tools for source control and automation workflows
- Exposure to tools like Airflow is a plus for managing data pipelines

Soft & Interpersonal Skills
- Problem-Solving Mindset: Proactive, analytical approach to technical challenges and business needs
- Team Collaboration: Comfortable working in agile teams, participating in standups, code reviews, and design sessions
- Communication: Ability to clearly explain technical concepts to both technical and non-technical stakeholders
- Continuous Learning: Eagerness to stay up-to-date with evolving tech, tools, and BI trends

Additional Information
- Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms
- Recharge and revitalize with the help of wellness plans made for you and your family
- Plan your future with financial wellness tools
- Stay relevant and upskill yourself with career development opportunities

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee-Assistance-Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 2 weeks ago

Apply

3.0 years

5 - 8 Lacs

Gurgaon

On-site

At Yum! we’re looking for a Software Engineer to add to our dynamic and rapidly scaling team. We’re making this investment to help us optimize our digital channels and technology innovations with the end goal of creating competitive advantages for our restaurants around the globe. We’re looking for a solid lead engineer who brings fresh ideas from past experiences and is eager to tackle new challenges in our company. We’re in search of a candidate who is knowledgeable about and loves working with modern data integration frameworks, big data, and cloud technologies. Candidates must also be proficient with data programming languages (e.g., Python and SQL). The Yum! data engineer will build a variety of data pipelines and models to support advanced AI/ML analytics projects - with the intent of elevating the customer experience and driving revenue and profit growth in our restaurants globally. The candidate will work in our office in Gurgaon, India.

As a Software Engineer, you will:
- Partner with KFC, Pizza Hut, Taco Bell & Habit Burger to build data pipelines to enable best-in-class restaurant technology solutions.
- Play a key role in our AIDA team - developing data solutions responsible for driving Yum! growth.
- Develop & maintain high-performance & scalable data solutions.
- Design and develop data pipelines – streaming and batch – to move data from point-of-sale, back of house, operational platforms and more to our Global Data Hub.
- Contribute to standardizing and developing a framework to extend these pipelines across brands and markets.
- Develop on the Yum! data platform by building applications using a mix of open-source frameworks (PySpark, Kubernetes, Airflow, etc.) and best-in-breed SaaS tools (Informatica Cloud, Snowflake, Domo, etc.).
- Implement and manage production support processes around data lifecycle, data quality, coding utilities, storage, reporting and other data integration points.
- Develop scalable REST APIs in Python.
- Develop and maintain backend services using Python (e.g., FastAPI, Flask, Django); see the sketch after this posting.

Minimum Requirements:
- Vast background in all things data related (3+ years of experience)
- AWS platform development experience (EKS, S3, API Gateway, Lambda, etc.)
- Experience with modern ETL tools such as Informatica, Matillion, or DBT; Informatica CDI is a plus
- High level of proficiency with SQL (Snowflake a big plus)
- Proficiency with Python for transforming data and automating tasks
- Experience with Kafka, Pulsar, or other streaming technologies
- Experience orchestrating complex task flows across a variety of technologies
- Bachelor’s degree from an accredited institution or relevant experience
- Experience with at least one NoSQL database (MongoDB, Elasticsearch, etc.)

The Yum! Brands story is simple. We have four distinctive, relevant and easy global brands – KFC, Pizza Hut, Taco Bell and The Habit Burger Grill - born from the hopes and dreams, ambitions and grit of passionate entrepreneurs. And we want more of this to create our future! As the world’s largest restaurant company we have a clear and compelling mission: to build the world’s most loved, trusted and fastest-growing restaurant brands. The key and not-so-secret ingredient in our recipe for growth is our unrivaled talent and culture, which fuels our results. We’re looking for talented, motivated, visionary and team-oriented leaders to join us as we elevate and personalize the customer experience across our 48,000 restaurants, operating in 145 countries and territories around the world!
Employees may work for a single brand and potentially grow to support all company-owned brands depending on their role. Regardless of where they work, as a company opening an average of 8 restaurants a day worldwide, the growth opportunities are endless. Taco Bell has been named one of the 10 Most Innovative Companies in the World by Fast Company; Pizza Hut delivers more pizzas than any other pizza company in the world; and KFC still uses its 75-year-old finger lickin’ good recipe, including secret herbs and spices, to hand-bread its chicken every day. Yum! and its brands have offices in Chicago, IL, Louisville, KY, Irvine, CA, Plano, TX and other markets around the world. We don’t just say we are a great place to work – our commitments to the world and our employees show it. Yum! has been named to the Dow Jones Sustainability North America Index and ranked among the top 100 Best Corporate Citizens by Corporate Responsibility Magazine, in addition to being named to the Bloomberg Gender-Equality Index. Our employees work in an environment where the value of “believe in all people” is lived every day, enjoying benefits including but not limited to: 4 weeks’ vacation PLUS holidays, sick leave and 2 paid days to volunteer at the cause of their choice and a dollar-for-dollar matching gift program; generous parental leave; competitive benefits including medical, dental, vision and life insurance as well as a 6% 401k match – all encompassed in Yum!’s world-famous recognition culture.
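
As a minimal, illustrative FastAPI sketch of the backend services mentioned in the requirements above: the endpoint, response model, and in-memory lookup are assumptions standing in for a real warehouse query.

```python
# Illustrative only: a tiny read-only REST endpoint, not Yum!'s actual service.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="store-metrics-api")

# Stand-in for a real warehouse/lakehouse query (e.g. Snowflake).
FAKE_METRICS = {"KFC-0001": {"daily_sales": 4820.50, "orders": 311}}


class StoreMetrics(BaseModel):
    store_id: str
    daily_sales: float
    orders: int


@app.get("/stores/{store_id}/metrics", response_model=StoreMetrics)
def get_store_metrics(store_id: str) -> StoreMetrics:
    metrics = FAKE_METRICS.get(store_id)
    if metrics is None:
        raise HTTPException(status_code=404, detail="store not found")
    return StoreMetrics(store_id=store_id, **metrics)
```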

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Full-time Company Description NielsenIQ’s Custom Engineering Team is looking for a Senior BI/Data Engineer who brings both hands-on expertise and architectural thinking to the table. If you’re passionate about turning data into insight at scale and thrive on building impactful, modern BI solutions, we want to work with you. Our team transforms complex shopper data into clear, actionable insights for the world’s leading retailers. We build innovative, cloud-based, AI-driven reporting solutions that help clients understand and influence consumer behavior. As a Senior Engineer, you’ll not only develop advanced Power BI solutions, but also help shape the technical direction, mentor team members, and drive best practices across our growing BI and data engineering team.
Job Description
Who you are…
You bring 7+ years of hands-on experience building BI and data solutions, with deep expertise in Power BI (data modeling, DAX, Power Query, and RLS). You have strong SQL skills and a solid grasp of data structures and performance optimization. You are confident developing enterprise-grade, client-facing dashboards and data models and have a strong eye for usability and design. You’re passionate about clean, scalable architecture and can navigate the trade-offs in performance, complexity, and maintainability. You’re eager to mentor others while remaining hands-on in development work. You thrive in a collaborative environment and are experienced working with cross-functional teams (data engineers, product owners, UX, testers, DevOps, architects). You are proactive, self-driven, and excited about using modern technologies such as Azure, Python, GitHub Actions, and MS Fabric. You stay current on industry trends and tools and actively contribute to team knowledge and innovation.
Why we need you…
Develop and own end-to-end custom reporting solutions, from data ingestion and modeling to dashboard design and deployment.
Design and implement performant data models that power interactive, large-scale retail analytics solutions.
Collaborate closely with product owners, architects, and data engineers to align on requirements and solution design.
Contribute to system integration, including data flows into and out of Power BI, and integration with web portals.
Apply best practices in code development, testing, documentation, and CI/CD using tools like GitHub and Airflow.
Research and implement new Power BI features and cloud capabilities to enhance solution performance and user experience.
Lead peer reviews and provide technical mentorship to junior and mid-level developers.
Work in an agile environment, actively participating in planning, estimation, demos, and retrospectives.
Help define standards and contribute to the team’s long-term BI strategy and toolset selection.
Qualifications
Technical skills: Power BI, SQL, ETL & data modelling, data visualization, coding & scripting (e.g., Python, nice to have)
Platform & tools: Microsoft ecosystem (Power Query, MS Fabric, Azure cloud services such as Data Lake); version control and DevOps tools such as GitHub and GitHub Actions, or equivalent tools for source control and automation workflows; exposure to tools like Airflow is a plus for managing data pipelines
Soft & Interpersonal Skills
Problem-Solving Mindset: Proactive, analytical approach to technical challenges and business needs.
Team Collaboration: Comfortable working in agile teams, participating in standups, code reviews, and design sessions.
Communication: Ability to clearly explain technical concepts to both technical and non-technical stakeholders.
Continuous Learning: Eagerness to stay up-to-date with evolving tech, tools, and BI trends.
Additional Information
Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities.
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)
About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com. Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook
Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
Experience training, building and deploying ML and DL models
Experience with Hugging Face, Chainlit, React
Ability to understand technical, functional, non-functional and security aspects of business requirements and deliver them end-to-end
Ability to adapt quickly to open-source products & tools to integrate with ML platforms
Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.)
Developing and deploying in on-prem & cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI
Experience with LLMs such as PaLM, GPT-4 and Mistral (open-source models)
Work through the complete lifecycle of Gen AI model development, from training and testing to deployment and performance monitoring
Developing and maintaining AI pipelines with multiple modalities such as text, image and audio
Have implemented real-world chatbots or conversational agents at scale, handling different data sources
Experience in developing image generation/translation tools using latent diffusion models such as Stable Diffusion and InstructPix2Pix
Expertise in handling large-scale structured and unstructured data
Efficiently handled large-scale generative AI datasets and outputs
Familiarity with Docker tooling and pipenv/conda/poetry environments
Comfort with Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
High familiarity with DL theory and practice in NLP applications
Comfortable coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy and pandas
Comfortable using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack and others
Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy or unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
Have implemented real-world fine-tuned BERT or other transformer models (sequence classification, NER or QA), from data preparation and model creation through inference and deployment
Use of GCP services such as BigQuery, Cloud Functions, Cloud Run, Cloud Build and Vertex AI
Good working knowledge of other open-source packages to benchmark and derive summaries
Experience using GPU/CPU on cloud and on-prem infrastructure
Skillset to leverage cloud platforms for data engineering, big data and ML needs
Use of Docker (experience with experimental Docker features, docker-compose, etc.)
Familiarity with orchestration tools such as Airflow and Kubeflow
Experience with CI/CD and infrastructure-as-code tools such as Terraform
Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
Ability to develop APIs with compliant, ethical, secure and safe AI tools
Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
A deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus
Responsibilities
Design NLP/LLM/GenAI applications/products by following robust coding practices
Explore SoTA models/techniques so that they can be applied to automotive industry use cases
Conduct ML experiments to train/infer models; if need be, build models that abide by memory & latency restrictions
Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
Showcase NLP/LLM/GenAI applications in the best way possible to users through web frameworks (Dash, Plotly, Streamlit, etc.)
Converge multiple bots into super apps using LLMs with multimodalities
Develop agentic workflows using AutoGen, Agent Builder and LangGraph
Build modular AI/ML products that can be consumed at scale
Qualifications
Education: Bachelor’s or Master’s Degree in Computer Science, Engineering, Maths or Science
Completion of modern NLP/LLM courses or participation in open competitions is also welcomed
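As a rough illustration of the "deploy a REST API for an NLP application" responsibility above, a minimal sketch might look like the following. The route name and the default sentiment model are assumptions for illustration, not part of the posting.

```python
# Illustrative sketch: serving a Hugging Face NLP model behind a FastAPI endpoint.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
classifier = pipeline("sentiment-analysis")  # loads a default fine-tuned model

class TextIn(BaseModel):
    text: str

@app.post("/classify")
def classify(payload: TextIn):
    # Run inference and return the top label with its confidence score
    result = classifier(payload.text)[0]
    return {"label": result["label"], "score": float(result["score"])}

# Run locally with `uvicorn app:app --reload`, then containerize with Docker
# and deploy to Kubernetes or Cloud Run as the role outlines.
```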

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Gurgaon

On-site

Who We Are BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest-value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world’s 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients – no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines – from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law.
What You'll Do
BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components as well as single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally. This will include:
Serving as a leader within BCG X, and specifically the KEY Impact Management by BCG X Tribe (Transformation and Post-Merger-Integration related software and data products), overseeing the delivery of high-quality software: driving the technical roadmap and architectural decisions and mentoring engineers
Influencing and serving as a key decision maker in BCG X technology selection & strategy
An active, hands-on role: building intelligent analytical products to solve problems, writing elegant code, and iterating quickly
Overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe
Responsibility for the technology roadmap of existing and new components delivered
Architecting and implementing backend and frontend solutions primarily using .NET, C#, MS SQL Server, Angular, and other technologies best suited to the goals, including open source (e.g., Node, Django, Flask, Python) where needed
What You'll Bring
10+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally an agile environment) with exposure to a variety of technologies and solutions, with at least 5 years' experience in an Architect role.
Experience with a wide range of application and data architectures, platforms and tools including: service-oriented architecture, clean architecture, software as a service, web services, object-oriented languages (like C# or Java), SQL databases (like Oracle or SQL Server), relational and non-relational databases, hands-on experience with analytics and reporting tools, data science experience, etc.
Thoroughly up to date in technology:
Modern cloud architectures including AWS, Azure, GCP, Kubernetes
Very strong particularly in .NET, C#, MS SQL Server and Angular technologies
Open-source stacks including NodeJs, React, Angular and Flask are good to have
CI/CD / DevSecOps / GitOps toolchains and development approaches
Knowledge of machine learning & AI frameworks
Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow
At least a Bachelor's degree; Master’s degree and/or MBA preferred
Team player with excellent work habits and interpersonal skills
Care deeply about product quality, reliability, and scalability
Passion for the people and culture side of engineering teams
Outstanding written and oral communication skills
The ability to travel, depending on project requirements.
Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.

Posted 2 weeks ago

Apply

5.0 years

0 - 0 Lacs

India

On-site

Company Introduction:
A dynamic company headquartered in Australia. Multi-award winner, recognized for excellence in the telecommunications industry. Financial Times Fastest-growing Company APAC 2023. AFR (Australian Financial Review) Fast 100 Company 2022. Great promotion opportunities that acknowledge and reward your hard work. Young, energetic and innovative team; caring and supportive work environment.
About You:
We are seeking an experienced and highly skilled Data Warehouse Engineer with an energetic 'can do' attitude to join our data and analytics team. The ideal candidate will have over 5 years of hands-on experience in designing, building, and maintaining scalable data pipelines and reporting infrastructure. You will be responsible for managing our data warehouse, automating ETL workflows, building dashboards, and enabling data-driven decision-making across the organization.
Your responsibilities will include but are not limited to:
Design, implement, and maintain robust, scalable data pipelines using Apache NiFi, Airflow, or similar ETL tools.
Develop and manage efficient data ingestion and transformation workflows, including web data crawling using Python.
Create, optimize, and maintain complex SQL queries to support business reporting needs.
Build and manage interactive dashboards and visualizations using Apache Superset (preferred), Power BI, or Tableau.
Collaborate with business stakeholders and analysts to gather requirements, define KPIs, and deliver meaningful data insights.
Ensure data accuracy, completeness, and consistency through rigorous quality assurance processes.
Maintain and optimize the performance of the data warehouse, supporting high availability and fast query response times.
Document technical processes and data workflows for maintainability and scalability.
To be successful in this role you will ideally possess:
5+ years of experience in data engineering, business intelligence, or a similar role.
Strong proficiency in Python, particularly for data crawling, parsing, and automation tasks.
Expertise in SQL (including complex joins, CTEs, window functions) for reporting and analytics.
Hands-on experience with Apache Superset (preferred), or equivalent BI tools like Power BI or Tableau.
Proficiency with ETL tools such as Apache NiFi, Airflow, or similar data pipeline frameworks.
Experience working with cloud-based data warehouse platforms (e.g., Amazon Redshift, Snowflake, BigQuery, or PostgreSQL).
Strong understanding of data modeling, warehousing concepts, and performance optimization.
Ability to work independently and collaboratively in a fast-paced environment.
Preferred Qualifications:
Experience with version control (e.g., Git) and CI/CD processes for data workflows.
Familiarity with REST APIs and web scraping best practices.
Knowledge of data governance, privacy, and security best practices.
Background in the telecommunications or ISP industry is a plus.
Job Types: Full-time, Permanent
Pay: ₹40,000.00 - ₹70,000.00 per month
Benefits: Leave encashment, Paid sick time, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay, Yearly bonus
Work Location: In person
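The crawl-parse-load workflow this posting describes could, at its simplest, look something like the sketch below. The URL, connection string and table names are placeholders rather than real systems.

```python
# A minimal crawl-parse-load sketch. Endpoint, DSN and table names are assumptions.
import requests
import pandas as pd
from sqlalchemy import create_engine

# Crawl: pull a page of records from a (hypothetical) source API
resp = requests.get("https://example.com/api/plans", timeout=30)
resp.raise_for_status()
records = resp.json()  # assume a JSON list of plan records

# Parse: normalize into a tidy frame and stamp lineage metadata for QA
df = pd.DataFrame(records)
df["retrieved_at"] = pd.Timestamp.now(tz="UTC")

# Load: append into a staging table in the warehouse (PostgreSQL here)
engine = create_engine("postgresql+psycopg2://user:password@host:5432/warehouse")
df.to_sql("stg_plans", engine, schema="staging", if_exists="append", index=False)
```

In practice a pipeline like this would run under NiFi or Airflow on a schedule, with the quality checks the posting mentions applied before the data reaches reporting tables.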

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Equifax is seeking creative, high-energy and driven software engineers with hands-on development skills to work on a variety of meaningful projects. Our software engineering positions provide you the opportunity to join a team of talented engineers working with leading-edge technology. You are ideal for this position if you are a forward-thinking, committed, and enthusiastic software engineer who is passionate about technology.
What You’ll Do
Demonstrate a deep understanding of cloud-native, distributed microservice-based architectures
Deliver solutions for complex business problems through the standard software SDLC
Build strong relationships with both internal and external stakeholders including product, business and sales partners
Demonstrate excellent communication skills with the ability to both simplify complex problems and also dive deeper if needed
Build and manage strong technical teams that deliver complex software solutions that scale
Manage teams with cross-functional skills that include software, quality and reliability engineers, project managers and scrum masters
Provide deep troubleshooting skills with the ability to lead and solve production and customer issues under pressure
Leverage strong experience in full-stack software development and public cloud platforms like GCP and AWS
Mentor, coach and develop junior and senior software, quality and reliability engineers
Lead with a data/metrics-driven mindset with a maniacal focus on optimizing and creating efficient solutions
Ensure compliance with EFX secure software development guidelines and best practices, and take responsibility for meeting and maintaining QE, DevSec, and FinOps KPIs
Define, maintain and report SLAs, SLOs and SLIs meeting EFX engineering standards in partnership with the product, engineering and architecture teams
Collaborate with architects, SRE leads and other technical leadership on strategic technical direction, guidelines, and best practices
Drive up-to-date technical documentation including support, end user documentation and run books
Lead sprint planning, sprint retrospectives, and other team activities
Be responsible for implementation architecture decision making associated with product features/stories, refactoring work, and EOSL decisions
Create and deliver technical presentations to internal and external technical and non-technical stakeholders, communicating with clarity and precision, and present complex information in a concise format that is audience appropriate
What Experience You Need
Bachelor's degree or equivalent experience
7+ years of software engineering experience
7+ years of experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS
7+ years of experience with cloud technology: GCP, AWS, or Azure
7+ years of experience designing and developing cloud-native solutions
7+ years of experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes
7+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, understanding infrastructure-as-code concepts, Helm Charts, and Terraform constructs
What could set you apart
A self-starter who identifies and responds to priority shifts with minimal supervision.
Strong communication and presentation skills
Strong leadership qualities
Demonstrated problem-solving skills and the ability to resolve conflicts
Experience creating and maintaining product and software roadmaps
Experience overseeing yearly as well as product/project budgets
Working in a highly regulated environment
Experience designing and developing big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, PubSub, GCS, Composer/Airflow, and others
UI development (e.g. HTML, JavaScript, Angular and Bootstrap)
Experience with backend technologies such as JAVA/J2EE, SpringBoot, SOA and microservices
Source code control management systems (e.g. SVN/Git, GitHub) and build tools like Maven & Gradle
Agile environments (e.g. Scrum, XP)
Relational databases (e.g. SQL Server, MySQL)
Atlassian tooling (e.g. JIRA, Confluence, and GitHub)
Developing with modern JDK (v1.7+)
Automated testing: JUnit, Selenium, LoadRunner, SoapUI

Posted 2 weeks ago

Apply

3.0 years

0 - 0 Lacs

Bhubaneswar, Odisha, India

Remote


Experience : 3.00 + years Salary : USD 18000-30000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Indefinite Contract (40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' clients - Steer Health)
What do you need for this opportunity? Must have skills required: Airflow, Kubeflow, LangChain, RAGFlow, TensorFlow, Dialogflow, FastAPI, LLMs, PyTorch, Python
Steer Health is Looking for:
About The Role
Steer Health is seeking a talented Backend Engineer with expertise in AI/ML and healthcare technologies to design and implement AgenticAI workflows that redefine clinical and operational processes. You’ll build scalable backend systems that integrate FHIR-compliant APIs, LLM-driven automation, and conversational AI to solve real-world healthcare challenges. If you’re passionate about Python, AI workflows, and making a tangible impact in healthcare, this role is for you.
Key Responsibilities
Build APIs with FastAPI to enable seamless data exchange across EHRs, patient portals, and AI agents.
Architect AI-driven workflows using tools like RAGFlow or similar platforms to automate tasks such as clinical documentation, prior authorization, and patient triage.
Develop and fine-tune LLM-based solutions (e.g., GPT, Claude) with PyTorch, focusing on healthcare-specific use cases like diagnosis support or patient communication.
Integrate Dialogflow for conversational AI agents that power chatbots, voice assistants, and virtual health aides.
Collaborate on prompt engineering to optimize LLM outputs for accuracy, compliance, and clinical relevance.
Optimize backend systems for performance, scalability, and security in HIPAA-compliant environments.
Partner with cross-functional teams (data scientists, product managers, clinicians) to translate healthcare needs into technical solutions.
Qualifications
3+ years of backend engineering experience, with expertise in Python and frameworks like FastAPI or Flask.
Hands-on experience with PyTorch/TensorFlow and deploying ML models in production.
Familiarity with AI workflow tools (e.g., RAGFlow, Airflow, Kubeflow) and orchestration of LLM pipelines.
Experience integrating Dialogflow or similar platforms for conversational AI.
Strong understanding of LLMs (training, fine-tuning, and deployment) and prompt engineering best practices.
Knowledge of cloud platforms (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
Passion for healthcare innovation and improving patient/provider experiences.
Preferred Qualifications
Experience in healthcare tech (EHR integrations, HIPAA compliance, HL7/FHIR).
Contributions to open-source AI/healthcare projects.
Familiarity with LangChain, LlamaIndex, or agentic workflow frameworks.
Why Join Steer Health?
Impact: Your work will directly enhance healthcare delivery for millions of patients.
Innovation: Build with the latest AI/ML tools in a fast-paced, forward-thinking environment.
Growth: Lead projects at the intersection of AI and healthcare, with opportunities for advancement.
Culture: Collaborative, mission-driven team with flexible work policies.
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast.
Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
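To make the conversational-AI integration in this role more concrete, here is a hedged sketch of a FastAPI webhook for a Dialogflow agent. The field names follow the Dialogflow ES webhook format as commonly documented; the route and the triage logic are placeholders, not Steer Health systems.

```python
# Hedged sketch: a FastAPI webhook backing a Dialogflow conversational agent.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/dialogflow/webhook")
async def webhook(request: Request):
    body = await request.json()
    # Dialogflow ES sends the matched intent data under "queryResult"
    query = body.get("queryResult", {}).get("queryText", "")

    # Placeholder triage logic; a real agent would call an LLM or a rules engine here
    if "appointment" in query.lower():
        reply = "A care coordinator will follow up shortly to schedule your appointment."
    else:
        reply = "Could you tell me a little more about your request?"

    # Dialogflow expects the response text in "fulfillmentText"
    return {"fulfillmentText": reply}
```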

Posted 2 weeks ago

Apply

3.0 years

0 - 0 Lacs

Cuttack, Odisha, India

Remote


Experience : 3.00 + years Salary : USD 18000-30000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Indefinite Contract (40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' clients - Steer Health)
What do you need for this opportunity? Must have skills required: Airflow, Kubeflow, LangChain, RAGFlow, TensorFlow, Dialogflow, FastAPI, LLMs, PyTorch, Python
Steer Health is Looking for:
About The Role
Steer Health is seeking a talented Backend Engineer with expertise in AI/ML and healthcare technologies to design and implement AgenticAI workflows that redefine clinical and operational processes. You’ll build scalable backend systems that integrate FHIR-compliant APIs, LLM-driven automation, and conversational AI to solve real-world healthcare challenges. If you’re passionate about Python, AI workflows, and making a tangible impact in healthcare, this role is for you.
Key Responsibilities
Build APIs with FastAPI to enable seamless data exchange across EHRs, patient portals, and AI agents.
Architect AI-driven workflows using tools like RAGFlow or similar platforms to automate tasks such as clinical documentation, prior authorization, and patient triage.
Develop and fine-tune LLM-based solutions (e.g., GPT, Claude) with PyTorch, focusing on healthcare-specific use cases like diagnosis support or patient communication.
Integrate Dialogflow for conversational AI agents that power chatbots, voice assistants, and virtual health aides.
Collaborate on prompt engineering to optimize LLM outputs for accuracy, compliance, and clinical relevance.
Optimize backend systems for performance, scalability, and security in HIPAA-compliant environments.
Partner with cross-functional teams (data scientists, product managers, clinicians) to translate healthcare needs into technical solutions.
Qualifications
3+ years of backend engineering experience, with expertise in Python and frameworks like FastAPI or Flask.
Hands-on experience with PyTorch/TensorFlow and deploying ML models in production.
Familiarity with AI workflow tools (e.g., RAGFlow, Airflow, Kubeflow) and orchestration of LLM pipelines.
Experience integrating Dialogflow or similar platforms for conversational AI.
Strong understanding of LLMs (training, fine-tuning, and deployment) and prompt engineering best practices.
Knowledge of cloud platforms (AWS/GCP/Azure) and containerization (Docker, Kubernetes).
Passion for healthcare innovation and improving patient/provider experiences.
Preferred Qualifications
Experience in healthcare tech (EHR integrations, HIPAA compliance, HL7/FHIR).
Contributions to open-source AI/healthcare projects.
Familiarity with LangChain, LlamaIndex, or agentic workflow frameworks.
Why Join Steer Health?
Impact: Your work will directly enhance healthcare delivery for millions of patients.
Innovation: Build with the latest AI/ML tools in a fast-paced, forward-thinking environment.
Growth: Lead projects at the intersection of AI and healthcare, with opportunities for advancement.
Culture: Collaborative, mission-driven team with flexible work policies.
How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal.
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast.
Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 2 weeks ago

Apply

8.0 years

5 - 10 Lacs

Chennai

On-site

Country/Region: IN Requisition ID: 25968 Work Model: Position Type: Salary Range: Location: INDIA - CHENNAI - BIRLASOFT OFFICE Title: Technical Specialist-Data Engg
Description: Area(s) of responsibility
Responsibilities:
8+ years of hands-on experience in the IT industry, including all phases such as design, development, testing and implementation.
5+ years of hands-on Python development experience and 3+ years of SQL/PL-SQL experience.
Extensive hands-on experience required with Dash, building interactive web applications, data visualization dashboards, etc.
Design, build, and scale data platforms and products using modern cloud technologies, leveraging full-stack Python development, cloud data platforms, and knowledge of modern data modeling and storage formats.
Knowledge of DBT for handling large data transformations, DataFrames, version control systems, and documenting the models.
Knowledge of Airflow for handling complex workflow management, scheduling and orchestration of data pipelines or workflows.
Deploy high-performance Python-based data pipelines for structured, semi-structured, and unstructured data.
Develop and implement data processing solutions using batch, real-time and event-driven frameworks, SQL and PL-SQL.
In-depth knowledge of Python frameworks and libraries, such as pandas, Django or Flask.
Ensure data integrity, lineage, and observability by writing resilient and maintainable code.
Implement RESTful APIs and web services for data access and consumption.
Knowledge of handling big data and financial domain data, and strong analytical and problem-solving skills.
Collaborate with cross-functional teams, including front-end developers, to design and implement software features.
Experience with continuous integration/continuous deployment (CI/CD) pipelines and tools.
Familiarity with version control systems (e.g., Git).
Expertise in database systems and the ability to design and optimize SQL.
Keeping up to date with the latest Python developments and best practices.
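Since the role centres on Dash, a minimal example of the kind of interactive dashboard it describes is sketched below. The dataset is synthetic and purely illustrative; a real application would query the warehouse instead.

```python
# Minimal Dash (2.x) app sketch: one dropdown filter driving one chart.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

# Synthetic data standing in for a warehouse query
df = pd.DataFrame({
    "region": ["APAC", "APAC", "EMEA", "EMEA"],
    "month": ["2024-01", "2024-02", "2024-01", "2024-02"],
    "revenue": [120, 135, 98, 110],
})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Dropdown(sorted(df["region"].unique()), "APAC", id="region"),
    dcc.Graph(id="trend"),
])

@app.callback(Output("trend", "figure"), Input("region", "value"))
def update(region):
    # Re-render the line chart whenever the region selection changes
    return px.line(df[df["region"] == region], x="month", y="revenue")

if __name__ == "__main__":
    app.run(debug=True)
```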

Posted 2 weeks ago

Apply

7.0 years

0 - 0 Lacs

Coimbatore

Remote

Sr. Python Developer | 7+ years | Work Timings: 1 PM to 10 PM | Remote
Job Description:
Core Skill: Hands-on experience with Python development
Key Responsibilities (including, but not limited to):
This developer should be proficient in Python programming and possess a strong understanding of data structures, algorithms, and database concepts. They are adept at using relevant Python libraries and frameworks and are comfortable working in a data-driven environment. They are responsible for designing, developing, and implementing robust and scalable data parsers, data pipeline solutions and web applications for data visualization. Their core responsibilities include:
Data platform components: Building and maintaining efficient and reliable data pipeline components using Python and related technologies (e.g., Lambda, Airflow). This involves extracting data from various sources, transforming it into usable formats, loading it into target persistence layers, and serving it via APIs.
Data Visualization (Dash apps): Developing interactive and user-friendly data visualization applications using Plotly Dash. This includes designing dashboards that effectively communicate complex data insights, enabling stakeholders to make data-driven decisions.
Data Parsing and Transformation: Implementing data parsing and transformation logic using Python libraries to clean, normalize, and restructure data from diverse formats (e.g., JSON, CSV, XML) into formats suitable for analysis and modeling.
Collaboration: Working closely with product leadership and professional services teams to understand product and project requirements, define data solutions, and ensure quality and timely delivery.
Software Development Best Practices: Adhering to software development best practices, including version control (Git), testing (unit, integration), and documentation, to ensure maintainable and reliable code.
Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹70,000.00 - ₹80,000.00 per month
Benefits: Work from home
Schedule: Monday to Friday, Morning shift, UK shift, US shift
Education: Bachelor's (Preferred)
Experience: Python: 7 years (Preferred)
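The data parsing and transformation responsibility above could, at its simplest, look like the sketch below: normalizing heterogeneous JSON/CSV inputs into one tidy frame. The landing directory and field conventions are placeholders, not the client's actual layout.

```python
# Illustrative parser sketch: unify JSON and CSV landing files into one DataFrame.
import json
from pathlib import Path
import pandas as pd

def parse_file(path: Path) -> pd.DataFrame:
    if path.suffix == ".json":
        with path.open() as f:
            frame = pd.json_normalize(json.load(f))
    elif path.suffix == ".csv":
        frame = pd.read_csv(path)
    else:
        raise ValueError(f"Unsupported format: {path.suffix}")
    # Normalize column names so downstream pipeline steps see one schema
    frame.columns = [c.strip().lower().replace(" ", "_") for c in frame.columns]
    frame["source_file"] = path.name  # lineage column for traceability
    return frame

frames = [parse_file(p) for p in Path("landing/").glob("*.*")]
combined = pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()
```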

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai

On-site

Join us as an AWS/PySpark Engineer at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences. As part of a team of developers, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions.
To be successful as an AWS/PySpark Engineer you should have experience with: AWS, Glue, Athena, Airflow, ETL, Hadoop, PySpark, SQL, Unix scheduling, data pipelines, debugging skills.
Some other highly valued skills may include: Ab Initio, Unix.
You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of Chennai.
Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.
Accountabilities
Building and maintaining data architecture pipelines that enable the transfer and processing of durable, complete and consistent data.
Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
Development of processing and analysis algorithms fit for the intended data complexity and volumes.
Collaboration with data scientists to build and deploy machine learning models.
Analyst Expectations
To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement.
Requires in-depth technical knowledge and experience in their assigned area of expertise.
Thorough understanding of the underlying principles and concepts within the area of expertise.
They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate.
Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for the end results of a team’s operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulation and codes of conduct. Maintain and continually build an understanding of how own sub-function integrates with the function, alongside knowledge of the organisation's products, services and processes within the function.
Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex / sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
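A minimal sketch of the kind of Airflow-orchestrated batch pipeline named in the skills above might look like the following. The DAG id, schedule and task bodies are placeholders rather than Barclays systems, and the imports assume an Airflow 2.x environment.

```python
# Illustrative Airflow 2.x DAG: a simple extract -> transform -> load batch pipeline.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull source extracts into the landing zone")

def transform():
    print("run the PySpark/Glue transformation step")

def load():
    print("publish curated tables to the warehouse / data lake")

with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```

In a real pipeline the Python callables would typically hand off to Glue jobs or Spark steps rather than printing, with failures surfaced through Airflow's retry and alerting mechanisms.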

Posted 2 weeks ago

Apply

4.0 years

2 - 10 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
About UHG
UnitedHealth Group is a leading health care company serving more than 85 million people worldwide. The organization is ranked 5th among Fortune 500 companies. UHG serves its customers through two different platforms – UnitedHealthcare (UHC) and Optum. UHC is responsible for providing healthcare coverage and benefits services, while Optum provides information and technology-enabled health services. India operations of UHG are aligned to Optum. The Optum Global Analytics Team, part of Optum, is involved in developing broad-based and targeted analytics solutions across different verticals for all lines of business.
Primary Responsibilities:
Work under the supervision of Senior Data Engineers to gather requirements and create data models for Data Science & Business Intelligence projects
Engage in client communications for all important functions including data understanding/exploration, strategizing solutions, etc.
Document the metadata information about the data sources used in the project & present that information to team members during team meetings
Develop data marts, de-normalized views & data models for projects
Develop data quality control processes around the data sets used for analysis
Should be able to create/analyze/optimize complex SQL queries
Lead and drive knowledge-sharing sessions within the team
Work with senior team members to develop new capabilities for the team
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
Bachelor's or 4-year university degree
3+ years of experience
Good understanding of the Python programming language
Understanding of Big Data, Hadoop, PySpark, distributed or parallel processing, MapReduce
Good knowledge of Databricks and Snowflake
Knowledge of or experience with cloud technologies - Azure, AWS or GCP
Understanding of relational database models and entity-relationship diagrams
Good knowledge of relational databases - SQL Server, Oracle, Teradata
Knowledge of orchestration tools - Airflow, Data Factory, Databricks Workflows or Jobs
Configuration management - GitHub
Preferred Qualifications:
Relevant Databricks certifications
Knowledge of or experience with messaging queues - Kafka, ActiveMQ or RabbitMQ
Knowledge of or experience with CI/CD tools - GitHub Actions
Knowledge of or experience with Unix shell scripting for automation and scheduling batch jobs
Knowledge of or experience using Microsoft Excel and PowerPoint
Knowledge of Agile or Scrum
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes – an enterprise priority reflected in our mission.
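The "data quality control processes" responsibility in this listing could be as simple as the PySpark sketch below: basic row-count, null and duplicate checks on a curated table. The table and column names are hypothetical placeholders.

```python
# Hedged sketch of simple data-quality checks on a curated table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.table("curated.member_claims")  # assumes a registered table

checks = {
    "row_count": df.count(),
    "null_member_ids": df.filter(F.col("member_id").isNull()).count(),
    "duplicate_claims": df.groupBy("claim_id").count().filter("count > 1").count(),
}

# Fail the pipeline run if any check (other than the informational row count) trips
failed = {name: value for name, value in checks.items()
          if name != "row_count" and value > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print(checks)
```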

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


CACI India, RMZ Nexity, Tower 30 4th Floor Survey No.83/1, Knowledge City Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India Req #1097 02 May 2025
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.
About Data Platform
The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focuses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance of data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.
What does a Data Infrastructure Engineer do?
A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-usable solutions to reflect the business needs.
Responsibilities Will Include
Collaborating across CACI departments to develop and maintain the data platform
Building infrastructure and data architectures in CloudFormation and SAM
Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow
Monitoring and reporting on the data platform performance, usage and security
Designing and applying security and access control architectures to secure sensitive data
You Will Have
3+ years of experience in a Data Engineering role.
Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift.
Experience administering databases and data platforms
Good coding discipline in terms of style, structure, versioning, documentation and unit tests
Strong proficiency in CloudFormation, Python and SQL
Knowledge and experience of relational databases such as Postgres and Redshift
Experience using Git for code versioning and lifecycle management
Experience operating to Agile principles and ceremonies
Hands-on experience with CI/CD tools such as GitLab
Strong problem-solving skills and ability to work independently or in a team environment
Excellent communication and collaboration skills
A keen eye for detail, and a passion for accuracy and correctness in numbers
Whilst not essential, the following skills would also be useful:
Experience using Jira, or other agile project management and issue tracking software
Experience with Snowflake
Experience with Spatial Data Processing
More About The Opportunity
The Data Engineer is an excellent opportunity, and CACI Services India reward their staff well with a competitive salary and impressive benefits package which includes:
Learning: Budget for conferences, training courses and other materials
Health Benefits: Family plan with 4 children and parents covered
Future You: Matched pension and health care package
We understand the importance of getting to know your colleagues. Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events.
CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group and we always welcome new people with fresh perspectives from any background to join the group. An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing and we are supportive of Veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society.
Other details: Pay Type - Salary
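As a small illustration of the "pipelines as code" responsibilities listed for this role, the sketch below shows an AWS Lambda handler in Python that starts a Step Functions execution when a new object lands in S3. The environment variable and state machine are assumed placeholders, not CACI infrastructure.

```python
# Illustrative Lambda handler: kick off a Step Functions pipeline on S3 arrival.
import json
import os
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # S3 event notifications carry the bucket and object key in this structure
    record = event["Records"][0]["s3"]
    payload = {
        "bucket": record["bucket"]["name"],
        "key": record["object"]["key"],
    }
    # STATE_MACHINE_ARN is an assumed environment variable set via CloudFormation/SAM
    response = sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],
        input=json.dumps(payload),
    )
    return {"executionArn": response["executionArn"]}
```

In a CloudFormation or SAM template, the bucket notification, IAM permissions and state machine would be declared alongside this function so the whole pipeline is versioned as code.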

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
