0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Role description
Job Title: Digital Technologist (DevOps)
Department: Information Technology
Location: Lower Parel, Mumbai

Who we are
Axis Asset Management Company Ltd (Axis AMC), founded in 2009, is one of India’s largest and fastest-growing mutual funds. We proudly serve over 1.3 crore customers across 100+ cities with utmost humility. Our success is built on three founding principles:
• Long Term Wealth Creation
• Customer-Centric Approach
• Sustainable Relationships
Our investment philosophy emphasizes risk management and encourages partners and investors to move from transactional investing to fulfilling critical life goals. We offer a diverse range of investment solutions to help customers achieve financial independence and a happier tomorrow.

What will you do?
As a DevOps Lead, you will play a pivotal role in driving the automation, scalability, and reliability of our development and deployment processes.

Key Responsibilities:
1. CI/CD Pipeline Development: Design, implement, and maintain robust CI/CD workflows using Jenkins, Azure Repos, Docker, and PySpark. Ensure seamless integration with AWS services such as Airflow and EKS.
2. Cloud & Infrastructure Management: Architect and manage scalable, fault-tolerant, and cost-effective cloud solutions using AWS services including EC2, RDS, EKS, DynamoDB, Secrets Manager, Control Tower, Transit Gateway, and VPC.
3. Security & Compliance: Implement security best practices across the DevOps lifecycle. Utilize tools like SonarQube, Checkmarx, Trivy, and AWS Inspector to ensure secure application deployments. Manage IAM roles, policies, and service control policies (SCPs).
4. Containerization & Orchestration: Lead container lifecycle management using Docker, Amazon ECS, EKS, and AWS Fargate. Implement orchestration strategies including blue-green deployments, Ingress controllers, and ArgoCD.
5. Frontend & Backend CI/CD: Build and manage CI/CD pipelines for frontend applications (Node.js, Angular, React) and backend microservices (Spring Boot) using tools like Maven and Nexus/Azure Artifacts.
6. Infrastructure as Code (IaC): Develop and maintain infrastructure using Terraform or AWS CloudFormation to support repeatable and scalable deployments.
7. Scripting & Automation: Write and maintain automation scripts in Python, Groovy, and Shell/Bash for deployment, monitoring, and system management tasks.
8. Version Control & Artifact Management: Manage source code and artifacts using Git, Azure Repos, Nexus, and Azure Artifacts.
9. Disaster Recovery & High Availability: Design and implement disaster recovery strategies, multi-AZ, and multi-region architectures to ensure business continuity.
10. Collaboration & Leadership: Work closely with development, QA, and operations teams to streamline workflows and mentor junior team members in DevOps best practices.
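As a minimal, hedged sketch of the blue-green deployment strategy mentioned above: traffic can be cut over by re-pointing a Kubernetes Service selector with the official Python client. The service, namespace, and label names here are hypothetical, not the team's actual setup.

```python
# Minimal blue-green cutover sketch using the official Kubernetes Python client.
# Assumes a Service ("orders-svc") that routes by a "color" label and two
# Deployments ("orders-blue", "orders-green") already running in the cluster.
from kubernetes import client, config

def switch_traffic(service: str, namespace: str, new_color: str) -> None:
    """Point the Service selector at the newly promoted color (blue/green)."""
    config.load_kube_config()                 # or config.load_incluster_config()
    core = client.CoreV1Api()
    patch = {"spec": {"selector": {"app": "orders", "color": new_color}}}
    core.patch_namespaced_service(name=service, namespace=namespace, body=patch)
    print(f"Service {service} now routes to {new_color} pods")

if __name__ == "__main__":
    # Promote the green stack after its health checks pass.
    switch_traffic(service="orders-svc", namespace="payments", new_color="green")
```

In practice the same cutover is often delegated to ArgoCD Rollouts, but the selector patch illustrates the underlying mechanism.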
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About Holcim
Holcim is the leading partner for sustainable construction, creating value across the built environment from infrastructure and industry to buildings. We offer high-value end-to-end Building Materials and Building Solutions - from foundations and flooring to roofing and walling - powered by premium brands including ECOPlanet, ECOPact and ECOCycle®. More than 45,000 talented Holcim employees in 45 attractive markets - across Europe, Latin America and Asia, Middle East & Africa - are driven by our purpose to build progress for people and the planet, with sustainability and innovation at the core of everything we do.

About The Role
The Data Engineer will play an important role in enabling the business for data-driven operations and decision making in an Agile and product-centric IT environment.

Education / Qualification
BE / B.Tech from IIT or Tier I / II colleges
Certification in Cloud Platforms: AWS or GCP

Experience
Total experience of 4-8 years
Hands-on experience in Python coding is a must
Experience in data engineering which includes laudatory account
Hands-on experience in Big Data cloud platforms like AWS (Redshift, Glue, Lambda), Data Lakes, and Data Warehouses, data integration, and data pipelines
Experience in SQL and writing code in the Spark engine using Python/PySpark
Experience in data pipeline and workflow management tools (such as Azkaban, Luigi, Airflow, etc.)

Key Personal Attributes
Business focused, customer & service minded
Strong consultative and management skills
Good communication and interpersonal skills
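For orientation on the workflow-management tools listed above, here is a minimal Airflow DAG sketch for a daily extract, transform, load pipeline. The DAG id and task bodies are placeholders; in an AWS setup like the one described they might call Glue jobs or a Redshift COPY instead of printing.

```python
# Minimal Airflow DAG sketch: daily extract -> transform -> load.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw files from the source bucket")

def transform(**_):
    print("clean and enrich the raw data with PySpark or pandas")

def load(**_):
    print("load curated data into the warehouse (e.g. Redshift)")

with DAG(
    dag_id="daily_sales_pipeline",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load    # explicit task ordering
```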
Posted 1 week ago
9.0 - 15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Snowflake Data Architect
Experience: 9 to 15 Years
Location: Gurugram

Job Summary:
We are seeking a highly experienced and motivated Snowflake Data Architect & ETL Specialist to join our growing Data & Analytics team. The ideal candidate will be responsible for designing scalable Snowflake-based data architectures, developing robust ETL/ELT pipelines, and ensuring data quality, performance, and security across multiple data environments. You will work closely with business stakeholders, data engineers, and analysts to drive actionable insights and ensure data-driven decision-making.

Key Responsibilities:
Design, develop, and implement scalable Snowflake-based data architectures.
Build and maintain ETL/ELT pipelines using tools such as Informatica, Talend, Apache NiFi, Matillion, or custom Python/SQL scripts.
Optimize Snowflake performance through clustering, partitioning, and caching strategies.
Collaborate with cross-functional teams to gather data requirements and deliver business-ready solutions.
Ensure data quality, governance, integrity, and security across all platforms.
Migrate legacy data warehouses (e.g., Teradata, Oracle, SQL Server) to Snowflake.
Automate data workflows and support CI/CD deployment practices.
Implement data modeling techniques including dimensional modeling, star/snowflake schema, and normalization/denormalization.
Support and promote metadata management and data governance best practices.

Technical Skills (Hard Skills):
Expertise in Snowflake: architecture design, performance tuning, cost optimization.
Strong proficiency in SQL, Python, and scripting for data engineering tasks.
Hands-on experience with ETL tools: Informatica, Talend, Apache NiFi, Matillion, or similar.
Proficient in data modeling (dimensional, relational, star/snowflake schema).
Good knowledge of cloud platforms: AWS, Azure, or GCP.
Familiar with orchestration and workflow tools such as Apache Airflow, dbt, or DataOps frameworks.
Experience with CI/CD tools and version control systems (e.g., Git).
Knowledge of BI tools such as Tableau, Power BI, or Looker.

Certifications (Preferred/Required):
✅ Snowflake SnowPro Core Certification – Required or Highly Preferred
✅ SnowPro Advanced Architect Certification – Preferred
✅ Cloud Certifications (e.g., AWS Certified Data Analytics – Specialty, Azure Data Engineer Associate) – Preferred
✅ ETL Tool Certifications (e.g., Talend, Matillion) – Optional but a plus

Soft Skills:
Strong analytical and problem-solving capabilities.
Excellent communication and collaboration skills.
Ability to translate technical concepts into business-friendly language.
Proactive, detail-oriented, and highly organized.
Capable of multitasking in a fast-paced, dynamic environment.
Passionate about continuous learning and adopting new technologies.

Why Join Us?
Work on cutting-edge data platforms and cloud technologies
Collaborate with industry leaders in analytics and digital transformation
Be part of a data-first organization focused on innovation and impact
Enjoy a flexible, inclusive, and collaborative work culture
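As a hedged illustration of the Snowflake clustering work mentioned above, the sketch below applies a clustering key and inspects clustering depth from Python. The account, credentials, and the ORDERS table are placeholders, not a real environment.

```python
# Sketch: define and inspect a Snowflake clustering key via the Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",   # hypothetical account locator
    user="ETL_SVC",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="SALES",
)

try:
    cur = conn.cursor()
    # Cluster a large fact table so date-bounded queries prune micro-partitions.
    cur.execute("ALTER TABLE ORDERS CLUSTER BY (ORDER_DATE, REGION)")
    # Report how well the table is clustered on that key.
    cur.execute(
        "SELECT SYSTEM$CLUSTERING_INFORMATION('ORDERS', '(ORDER_DATE, REGION)')"
    )
    print(cur.fetchone()[0])
finally:
    conn.close()
```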
Posted 1 week ago
0.0 - 15.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Description
Lead the design, development, and implementation of scalable data pipelines and ELT processes using Databricks, DLT, dbt, Airflow, and other tools.
Collaborate with stakeholders to understand data requirements and deliver high-quality data solutions.
Optimize and maintain existing data pipelines to ensure data quality, reliability, and performance.
Develop and enforce data engineering best practices, including coding standards, testing, and documentation.
Mentor junior data engineers, providing technical leadership and fostering a culture of continuous learning and improvement.
Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal disruption to business operations.
Stay up to date with the latest industry trends and technologies, and proactively recommend improvements to our data engineering practices.

Qualifications
Systems (MIS), Data Science or related field.
15 years of experience in data engineering and/or architecture, with a focus on big data technologies.
Extensive production experience with Databricks, Apache Spark, and other related technologies.
Familiarity with orchestration and ELT tools like Airflow, dbt, etc.
Expert SQL knowledge.
Proficiency in programming languages such as Python, Scala, or Java.
Strong understanding of data warehousing concepts.
Experience with cloud platforms such as Azure, AWS, Google Cloud.
Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment.
Strong communication and leadership skills, with the ability to effectively mentor and guide.
Experience with machine learning and data science workflows.
Knowledge of data governance and security best practices.
Certification in Databricks, Azure, Google Cloud or related technologies.

Job: Engineering
Primary Location: India-Karnataka-Bengaluru
Schedule: Full-time
Travel: No
Req ID: 252684
Job Hire Type: Experienced Not Applicable #BMI N/A
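To ground the Databricks/Spark pipeline work described above, here is a minimal PySpark sketch of a raw-to-curated transformation. Paths, column names, and the Delta format are assumptions (Delta presumes a Databricks or Delta Lake environment); it is illustrative, not the team's actual pipeline.

```python
# Minimal PySpark sketch: bronze -> silver transformation with basic quality rules.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_silver").getOrCreate()

bronze = spark.read.json("/mnt/raw/orders/")          # raw ingested events

silver = (
    bronze
    .dropDuplicates(["order_id"])                     # basic de-duplication
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)                      # simple data-quality rule
)

(
    silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")                        # partition for downstream pruning
    .save("/mnt/silver/orders/")
)
```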
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Location: Bengaluru, Karnataka, India
Job ID: R-232528
Date posted: 28/07/2025
Job Title: Analyst – Data Engineer

Introduction to role:
Are you ready to make a difference in the world of data science and advanced analytics? As a Data Engineer within the Commercial Strategic Data Management team, you'll play a pivotal role in transforming data science solutions for the Rare Disease Unit. Your mission will be to craft, develop, and deploy data science solutions that have a real impact on patients' lives. By leveraging cutting-edge tools and technology, you'll enhance delivery performance and data engineering capabilities, creating a seamless platform for the Data Science team and driving business growth. Collaborate closely with the Data Science and Advanced Analytics team, US Commercial leadership, Sales Field Team, and Field Operations to build data science capabilities that meet commercial needs. Are you ready to take on this exciting challenge?

Accountabilities:
Collaborate with the Commercial Multi-functional team to find opportunities for using internal and external data to enhance business solutions.
Work closely with business and advanced data science teams on cross-functional projects, delivering complex data science solutions that contribute to the Commercial Organization.
Manage platforms and processes for complex projects using a wide range of data engineering techniques in advanced analytics.
Prioritize business and information needs with management; translate business logic into technical requirements, such as creating queries, stored procedures, and scripts.
Interpret data, process it, analyze results, present findings, and provide ongoing reports.
Develop and implement databases, data collection systems, data analytics, and strategies that optimize data efficiency and quality.
Acquire data from primary or secondary sources and maintain databases/data systems.
Identify and define new process improvement opportunities.
Manage and support data solutions in BAU scenarios, including data profiling, designing data flow, creating business alerts for fields, and query optimization for ML models.
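As a hedged sketch of the field-level profiling and business alerts mentioned in the accountabilities above, the snippet below profiles a dataset with pandas and flags fields that breach a null-rate threshold. The file name and threshold are invented for illustration.

```python
# Lightweight field profiling plus a simple business alert.
import pandas as pd

df = pd.read_parquet("rare_disease_claims.parquet")   # hypothetical extract

# Per-field profile: null rate and distinct count.
profile = pd.DataFrame({
    "null_rate": df.isna().mean(),
    "distinct": df.nunique(),
})
print(profile.sort_values("null_rate", ascending=False).head(10))

# Business alert: flag fields whose null rate exceeds an agreed threshold.
THRESHOLD = 0.05
breaches = profile[profile["null_rate"] > THRESHOLD].index.tolist()
if breaches:
    print(f"ALERT: null-rate threshold exceeded for fields: {breaches}")
```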
Essential Skills/Experience:
BS/MS in a quantitative field (Computer Science, Data Science, Engineering, Information Systems, Economics)
5+ years of work experience with DB skills like Python, SQL, Snowflake, Amazon Redshift, MongoDB, Apache Spark, Apache Airflow, AWS cloud and Amazon S3 experience, Oracle, Teradata
Good experience in Apache Spark or Talend Administration Center or AWS Lambda, MongoDB, Informatica, SQL Server Integration Services
Experience in building ETL pipelines and data integration
Build efficient data management (extract, consolidate, and store large datasets with improved data quality and consistency)
Streamlined data transformation: convert raw data into usable formats at scale, automate tasks, and apply business rules
Good written and verbal skills to communicate complex methods and results to diverse audiences; willing to work in a cross-cultural environment
Analytical mind with problem-solving inclination; proficiency in data manipulation, cleansing, and interpretation
Experience in support and maintenance projects, including ticket handling and process improvement
Setting up workflow orchestration (schedule and manage data pipelines for smooth flow and automation)
Importance of scalability and performance (handling large data volumes with optimized processing capabilities)
Experience with Git

Desirable Skills/Experience:
Knowledge of distributed computing and Big Data technologies like Hive, Spark, Scala, HDFS; use these technologies along with statistical tools like Python/R
Experience working with HTTP requests/responses and API REST services
Familiarity with data visualization tools like Tableau, Qlik, Power BI, Excel charts/reports
Working knowledge of Salesforce/Veeva CRM, data governance, and data mining algorithms
Hands-on experience with EHR, administrative claims, and laboratory data (e.g., Prognos, IQVIA, Komodo, Symphony claims data)
Good experience in consulting, healthcare, or biopharmaceuticals

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca's Alexion division, you'll find an environment where your work truly matters. Embrace the opportunity to grow and innovate within a rapidly expanding portfolio. Experience the entrepreneurial spirit of a leading biotech combined with the resources of a global pharma. You'll be part of an energizing culture where connections are built to explore new ideas. As a member of our commercial team, you'll meet the needs of under-served patients worldwide. With tailored development programs designed for skill enhancement and fostering empathy for patients' journeys, you'll align your growth with our mission. Supported by exceptional leaders and peers across marketing and compliance, you'll drive change with integrity in a culture celebrating diversity and innovation. Ready to make an impact? Apply now to join our team!

Date Posted: 29-Jul-2025
Closing Date: 04-Aug-2025

Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness.
The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.
Posted 1 week ago
10.0 - 14.0 years
35 - 45 Lacs
Hyderabad
Work from Office
About the Team
At DAZN, the Analytics Engineering team is at the heart of turning hundreds of data points into meaningful insights that power strategic decisions across the business. From content strategy to product engagement, marketing optimization to revenue intelligence, we enable scalable, accurate, and accessible data for every team.

The Role
We're looking for a Lead Analytics Engineer to take ownership of our analytics data pipeline and play a pivotal role in designing and scaling our modern data stack. This is a hands-on technical leadership role where you'll shape the data models in dbt/Snowflake, orchestrate pipelines using Airflow, and enable high-quality, trusted data for reporting.

Key Responsibilities
Lead the development and governance of DAZN's semantic data models to support consistent, reusable reporting metrics.
Architect efficient, scalable data transformations on Snowflake using SQL/dbt and best practices in data warehousing.
Manage and enhance pipeline orchestration with Airflow, ensuring timely and reliable data delivery.
Collaborate with stakeholders across Product, Finance, Marketing, and Technology to translate requirements into robust data models.
Define and drive best practices in version control, testing, and CI/CD for analytics workflows.
Mentor and support junior engineers, fostering a culture of technical excellence and continuous improvement.
Champion data quality, documentation, and observability across the analytics layer.

You'll Need to Have
10+ years of experience in data/analytics engineering, with 2+ years leading or mentoring engineers.
Deep expertise in SQL and cloud data warehouses (preferably Snowflake) and cloud services (AWS/GCP/Azure).
Proven experience with dbt for data modeling and transformation.
Hands-on experience with Airflow (or similar orchestrators like Prefect, Luigi).
Strong understanding of dimensional modeling, ELT best practices, and data governance principles.
Ability to balance hands-on development with leadership and stakeholder management.
Clear communication skills: you can explain technical concepts to both technical and non-technical teams.

Nice to Have
Experience in the media, OTT, or sports tech domain.
Familiarity with BI tools like Looker or Power BI.
Exposure to testing frameworks like dbt tests or Great Expectations.
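As a hedged sketch of the Airflow-orchestrated dbt/Snowflake workflow described above, the DAG below runs dbt models and then their tests. The project path and target name are assumptions, not DAZN's actual configuration.

```python
# Sketch: orchestrate dbt build and test steps from Airflow.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/airflow/dbt/analytics"   # hypothetical dbt project location

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"cd {DBT_DIR} && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"cd {DBT_DIR} && dbt test --target prod",
    )
    dbt_run >> dbt_test   # only test models that built successfully
```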
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Overview of 66degrees
66degrees is a leading consulting and professional services company specializing in developing AI-focused, data-led solutions leveraging the latest advancements in cloud technology. With our unmatched engineering capabilities and vast industry experience, we help the world's leading brands transform their business challenges into opportunities and shape the future of work. At 66degrees, we believe in embracing the challenge and winning together. These values not only guide us in achieving our goals as a company but also for our people. We are dedicated to creating a significant impact for our employees by fostering a culture that sparks innovation and supports professional and personal growth along the way.

Overview of Role
As a Data Engineer specializing in AI/ML, you'll be instrumental in designing, building, and maintaining the data infrastructure crucial for training, deploying, and serving our advanced AI and Machine Learning models. You'll work closely with Data Scientists, ML Engineers, and Cloud Architects to ensure data is accessible, reliable, and optimized for high-performance AI/ML workloads, primarily within the Google Cloud ecosystem.

Responsibilities
Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines to ingest, transform, and load data from various sources into data lakes and data warehouses, specifically optimized for AI/ML consumption.
AI/ML Data Infrastructure: Architect and implement the underlying data infrastructure required for machine learning model training, serving, and monitoring within GCP environments.
Google Cloud Ecosystem: Leverage a broad range of Google Cloud Platform (GCP) data services including BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, Vertex AI, Composer (Airflow), and Cloud SQL.
Data Quality & Governance: Implement best practices for data quality, data governance, data lineage, and data security to ensure the reliability and integrity of AI/ML datasets.
Performance Optimization: Optimize data pipelines and storage solutions for performance, cost-efficiency, and scalability, particularly for large-scale AI/ML data processing.
Collaboration with AI/ML Teams: Work closely with Data Scientists and ML Engineers to understand their data needs, prepare datasets for model training, and assist in deploying models into production.
Automation & MLOps Support: Contribute to the automation of data pipelines and support MLOps initiatives, ensuring seamless integration from data ingestion to model deployment and monitoring.
Troubleshooting & Support: Troubleshoot and resolve data-related issues within the AI/ML ecosystem, ensuring data availability and pipeline health.
Documentation: Create and maintain comprehensive documentation for data architectures, pipelines, and data models.

Qualifications
1-2+ years of experience in Data Engineering, with at least 2-3 years directly focused on building data pipelines for AI/ML workloads.
Deep, hands-on experience with core GCP data services such as BigQuery, Dataflow, Dataproc, Cloud Storage, Pub/Sub, and Composer/Airflow.
Strong proficiency in at least one relevant programming language for data engineering (Python is highly preferred).
SQL skills for complex data manipulation, querying, and optimization.
Solid understanding of data warehousing concepts, data modeling (dimensional, 3NF), and schema design for analytical and AI/ML purposes.
Proven experience designing, building, and optimizing large-scale ETL/ELT processes.
Familiarity with big data processing frameworks (e.g., Apache Spark, Hadoop) and concepts.
Exceptional analytical and problem-solving skills, with the ability to design solutions for complex data challenges.
Excellent verbal and written communication skills, capable of explaining complex technical concepts to both technical and non-technical stakeholders.

66degrees is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to actual or perceived race, color, religion, sex, gender, gender identity, national origin, age, weight, height, marital status, sexual orientation, veteran status, disability status or other legally protected class.
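As a hedged illustration of the Cloud Storage to BigQuery ingestion described in this posting, the sketch below loads a Parquet partition and runs a row-count check. The project, dataset, and bucket names are placeholders.

```python
# Load a GCS partition into BigQuery, then validate the load.
from google.cloud import bigquery

client = bigquery.Client(project="my-ml-project")          # hypothetical project

table_id = "my-ml-project.feature_store.user_events"
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Ingest a partition of training data landed in Cloud Storage.
load_job = client.load_table_from_uri(
    "gs://my-ml-landing/user_events/date=2024-01-01/*.parquet",
    table_id,
    job_config=job_config,
)
load_job.result()   # wait for completion

# Quick row-count check for the freshly loaded partition.
query = f"SELECT COUNT(*) AS n FROM `{table_id}` WHERE DATE(event_ts) = '2024-01-01'"
for row in client.query(query).result():
    print(f"rows loaded: {row.n}")
```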
Posted 1 week ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.

Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
Roku runs one of the largest data lakes in the world. We store over 70 PB of data, run 10+M queries per month, and scan over 100 PB of data per month. The Big Data team is responsible for building, running, and supporting the platform that makes this possible. We provide all the tools needed to acquire, generate, process, monitor, validate and access the data in the lake for both streaming data and batch. We are also responsible for generating the foundational data. The systems we provide include Scribe, Kafka, Hive, Presto, Spark, Flink, Pinot, and others. The team is actively involved in Open Source, and we are planning to increase our engagement over time.

About the Role
Roku is in the process of modernizing its Big Data Platform. We are working on defining the new architecture to improve user experience, minimize cost and increase efficiency. Are you interested in helping us build this state-of-the-art big data platform? Are you an expert with Big Data technologies? Have you looked under the hood of these systems? Are you interested in Open Source? If you answered “Yes” to these questions, this role is for you!

What you will be doing
You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority.
You will be making changes to the underlying systems and, if an opportunity arises, you can contribute your work back into open source.
You will also be responsible for supporting internal customers and on-call services for the systems we host. Making sure we provide a stable environment and a great user experience is another top priority for the team.

We are excited if you have
7+ years of production experience building big data platforms based upon Spark, Trino or equivalent
Strong programming expertise in Java, Scala, Kotlin or another JVM language
A robust grasp of distributed systems concepts, algorithms, and data structures
Strong familiarity with the Apache Hadoop ecosystem: Spark, Kafka, Hive/Iceberg/Delta Lake, Presto/Trino, Pinot, etc.
Experience working with at least 3 of the technologies/tools mentioned here: Big Data / Hadoop, Kafka, Spark, Trino, Flink, Airflow, Druid, Hive, Iceberg, Delta Lake, Pinot, Storm, etc.
Extensive hands-on experience with public cloud AWS or GCP
BS/MS degree in CS or equivalent
AI literacy / AI growth mindset

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources.
Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
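As a hedged sketch of the Kafka/Spark streaming work mentioned in the Roku posting above, the job below reads a Kafka topic with Spark Structured Streaming and lands the payloads in a data-lake path. Broker, topic, and path names are invented; the write format is shown as Parquet, though a lakehouse table format such as Iceberg would fit that stack.

```python
# Structured Streaming: Kafka topic -> bronze data-lake table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("playback_events_stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "playback-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the value and keep the ingestion timestamp.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingest_ts"),
)

query = (
    parsed.writeStream
    .format("parquet")                              # could be a table format like Iceberg
    .option("path", "s3a://datalake/bronze/playback_events/")
    .option("checkpointLocation", "s3a://datalake/checkpoints/playback_events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```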
Posted 1 week ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Overview
About this role
We are looking for an innovative, hands-on technology leader to run Global Data Operations for one of the largest global FinTechs. This is a new role that will transform how we manage and process high-quality data at scale and reflects our commitment to invest in an Enterprise Data Platform to unlock our data strategy for BlackRock and our Aladdin Client Community. A technology-first mindset, to manage and run a modern global data operations function with high levels of automation and engineering, is essential. This role requires a deep understanding of data, domains, and the associated controls.

Key Responsibilities
The ideal candidate will be a high-energy, technology- and data-driven individual who has a track record of leading and doing the day-to-day operations.
Ensure on-time, high-quality data delivery with a single pane of glass for data pipeline observability and support
Live and breathe best practices of data ops such as culture, processes and technology
Partner cross-functionally to enhance existing data sets, eliminating manual inputs and ensuring high quality, and onboarding new data sets
Lead change while ensuring daily operational excellence, quality, and control
Build and maintain deep alignment with key internal partners on ops tooling and engineering
Foster an agile, collaborative culture which is creative, open, supportive, and dynamic

Knowledge And Experience
8+ years' experience in hands-on data operations including data pipeline monitoring and engineering
Technical expert including experience with data processing, orchestration (Airflow), data ingestion, cloud-based databases/warehousing (Snowflake) and business intelligence tools
The ability to operate and monitor large data sets through the data lifecycle, including the tooling and observability required to ensure data quality and control at scale
Experience implementing, monitoring, and operating data pipelines that are fast, scalable, reliable, and accurate
Understanding of modern-day data highways, the associated challenges, and effective controls
Passionate about data platforms, data quality and everything data
Practical and detail-oriented operations leader
Inquisitive leader who will bring new ideas that challenge the status quo
Ability to navigate a large, highly matrixed organization
Strong presence with clients
Bachelor's Degree in Computer Science, Engineering, Mathematics or Statistics

Our Benefits
To help you stay energized, engaged and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents and Flexible Time Off (FTO) so you can relax, recharge and be there for the people you care about.

Our hybrid work model
BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person – aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
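As a small, hedged sketch of the pipeline-observability hooks this role describes, the DAG below attaches a failure callback so incidents surface to a single alerting channel. The alert function is a stand-in; a real setup might post to Slack, PagerDuty, or an internal dashboard rather than printing.

```python
# Airflow failure callback as a basic observability hook.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    ti = context["task_instance"]
    # Replace the print with a real alert (Slack, PagerDuty, etc.).
    print(f"ALERT: {ti.dag_id}.{ti.task_id} failed on {context['ds']}")

def load_positions(**_):
    print("load and validate today's positions feed")

with DAG(
    dag_id="positions_feed",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,
    },
) as dag:
    PythonOperator(task_id="load_positions", python_callable=load_positions)
```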
About BlackRock At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children’s educations, buying homes and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment – the one we make in our employees. It’s why we’re dedicated to creating an environment where our colleagues feel welcomed, valued and supported with networks, benefits and development opportunities to help them thrive. For additional information on BlackRock, please visit @blackrock | Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
Posted 1 week ago
6.0 - 7.0 years
15 - 17 Lacs
India
On-site
About The Opportunity
This role is within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms powering AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions with modern cloud and data technologies.

Role & Responsibilities
Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management.
Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.

Skills & Qualifications
Must-Have
6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
Expert proficiency in PySpark, Python, and advanced SQL with a record of performance tuning distributed jobs.
Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
Strong analytical, problem-solving, and mentoring skills with a DevOps mindset and commitment to code quality.
Preferred
Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
Relevant certifications (Databricks, AWS, Azure) or active contributions to open source projects.

Location: India | Employment Type: Full-time

Skills: agile methodologies, team leadership, performance tuning, SQL, ELT, Airflow, AWS, data modeling, Apache Spark, PySpark, data, Hadoop, Databricks, Python, dbt, big data technologies, ETL, Azure
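As a hedged sketch of the Spark performance tuning this role calls for, the snippet below sizes shuffle partitions, broadcasts a small dimension to avoid a shuffle, and writes partitioned output. Paths and table sizes are illustrative only.

```python
# Common Spark tuning moves: shuffle sizing, AQE, broadcast join, partitioned write.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("fact_build")
    .config("spark.sql.shuffle.partitions", "400")     # sized to cluster core count
    .config("spark.sql.adaptive.enabled", "true")      # let AQE coalesce small partitions
    .getOrCreate()
)

fact = spark.read.parquet("s3://lake/raw/transactions/")
dim = spark.read.parquet("s3://lake/dim/stores/")       # small dimension table

# Broadcast the small side to avoid shuffling the large fact table.
enriched = fact.join(broadcast(dim), on="store_id", how="left")

(
    enriched
    .withColumn("txn_date", F.to_date("txn_ts"))
    .repartition("txn_date")                            # align files with partition column
    .write.mode("overwrite")
    .partitionBy("txn_date")
    .parquet("s3://lake/marts/transactions_enriched/")
)
```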
Posted 1 week ago
7.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.

About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI and digital products for Fortune 500 clients across finance, retail and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.

Role & Responsibilities
Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark and cloud object storage.
Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
Translate logical data models into physical schemas; own database design, partitioning and lifecycle management for cost-efficient performance.
Implement, automate and monitor ETL/ELT workflows, ensuring reliability, observability and robust error handling.
Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.

Skills & Qualifications
Must-Have
6–7 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
Expert proficiency in PySpark, Python and advanced SQL, with a track record of performance-tuning distributed jobs.
Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse / data-lake patterns.
Strong problem-solving skills, DevOps mindset and commitment to code quality; comfortable mentoring fellow engineers.
Preferred
Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering or a related field.
Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Exposure to ML feature stores, MLOps workflows and data-governance/compliance frameworks.
Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.

Benefits & Culture Highlights
Remote-first & flexible hours with 25+ PTO days and comprehensive health cover.
Annual training budget & certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.

Skills: data modeling, big data technologies, team leadership, AWS, data, SQL, agile methodologies, performance tuning, ELT, Airflow, Apache Spark, PySpark, Hadoop, Databricks, Python, dbt, ETL, Azure
Posted 1 week ago
6.0 - 11.0 years
12 - 22 Lacs
Pune, Chennai, Bengaluru
Work from Office
Payroll Company: Compunnel INC
Client: Infosys (after 6 months you will work directly with Infosys)
Experience Required: 6+ years
Mode of Work: 5 days of work from the office
Location: Bangalore, Hyderabad, Trivandrum, Chennai, Pune, Chandigarh, Jaipur, Mangalore
Job Title: Python Developer

Python is the primary skill; 6 years of experience.
Added and necessary: Airflow, Kubernetes, ELK, Flask / Django / FastAPI (almost very important).
Gen AI experience will be a big plus.
Experience with DB-to-DB migration, especially MS SQL to any open-source DB (Spark / ClickHouse).
Experience moving code from Java to Python, or within Python from one framework to another.

Please fill in all the essential details given below, attach your updated resume, and send it to ralish.sharma@compunnel.com
1. Total Experience:
2. Relevant Experience in Python Development:
3. Experience in Airflow:
4. Experience in Kubernetes:
5. Experience in ELK:
6. Experience in Flask/Django/Fast API:
7. Experience in Gen AI:
8. Current company:
9. Current Designation:
10. Highest Education:
11. Notice Period:
12. Current CTC:
13. Expected CTC:
14. Current Location:
15. Preferred Location:
16. Hometown:
17. Contact No:
18. If you have any offer from some other company, please mention the Offer amount and Offer Location:
19. Reason for looking for change:

If the job description is suitable for you, please get in touch with me at the number below: 9910044363.
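Since Flask/Django/FastAPI experience is called out above, here is a minimal FastAPI sketch; the endpoints, model, and in-memory state are invented for illustration and stand in for whatever service the project actually needs.

```python
# Minimal FastAPI service sketch. Run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="migration-status-api")

class MigrationStatus(BaseModel):
    table: str
    rows_copied: int
    complete: bool

# In-memory stand-in for state that would normally live in a database.
STATUS = {"orders": MigrationStatus(table="orders", rows_copied=125_000, complete=False)}

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.get("/migrations/{table}", response_model=MigrationStatus)
def get_migration(table: str) -> MigrationStatus:
    return STATUS[table]
```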
Posted 1 week ago
3.0 years
4 Lacs
Delhi
On-site
Job Description: Hadoop & ETL Developer
Location: Shastri Park, Delhi
Experience: 3+ years
Education: B.E./B.Tech/MCA/MSc (IT or CS)/MS
Salary: Up to 80k (the rest depends on the interview and experience)
Notice Period: Immediate joiners to those serving up to 20 days' notice
Only candidates from Delhi/NCR will be preferred

Job Summary:
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities
Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
Develop and manage workflow orchestration using Apache Airflow.
Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
Ensure data quality, governance, and consistency across the pipeline.
Collaborate with data engineering teams to build scalable and high-performance data solutions.
Monitor, debug, and enhance big data workflows to improve reliability and efficiency.

Required Skills & Experience:
3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
Strong expertise in ETL processes, data transformation, and data warehousing.
Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
Proficiency in SQL and handling structured and unstructured data.
Experience with NoSQL databases like MongoDB.
Strong programming skills in Python or Scala for scripting and automation.
Experience in optimizing Spark and MapReduce jobs for high-performance computing.
Good understanding of data lake architectures and big data best practices.

Preferred Qualifications
Experience in real-time data streaming and processing.
Familiarity with Docker/Kubernetes for deployment and orchestration.
Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.

If you have a passion for big data, ETL, and large-scale data processing, we'd love to hear from you!

Job Types: Full-time, Contractual / Temporary
Pay: From ₹400,000.00 per year
Work Location: In person
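As a hedged sketch of the Airflow-orchestrated Spark work described above, the DAG below submits a PySpark job with the Apache Spark provider's operator. The application path, connection id, resource settings, and arguments are assumptions for illustration.

```python
# Airflow DAG submitting a PySpark ETL step via the Spark provider.
from datetime import datetime
from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="hdfs_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="transform_raw_to_hive",
        application="/opt/jobs/transform_raw_to_hive.py",  # PySpark job on the edge node
        conn_id="spark_default",
        name="transform_raw_to_hive",
        conf={"spark.executor.memory": "4g", "spark.executor.cores": "2"},
        application_args=["--ds", "{{ ds }}"],             # Airflow-templated run date
    )
```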
Posted 1 week ago
3.0 years
3 - 6 Lacs
Chennai
On-site
ROLE SUMMARY
At Pfizer we make medicines and vaccines that change patients' lives, with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission. Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
Manufacturing throughput efficiency and increased manufacturing yield
Reduction of end-to-end cycle time and increase of percent release attainment
Increased quality control lab throughput and more timely closure of quality assurance investigations
Increased manufacturing yield of vaccines
More cost-effective network planning decisions and lowered inventory costs

In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you'll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!

ROLE RESPONSIBILITIES
The Senior Associate, Integration Engineer's responsibilities include, but are not limited to:
Maintain Database Service Catalogues
Build, maintain and optimize data pipelines
Support cross-functional teams with data related tasks
Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
Automate builds and deployments of database environments
Support development teams in database related troubleshooting and optimization
Document technical specifications, data flows, system architectures and installation instructions for the provided services
Collaborate with stakeholders to understand data requirements and translate them into technical solutions
Participate in relevant SAFe ceremonies and meetings

BASIC QUALIFICATIONS
Education: Bachelor's degree or Master's degree in Computer Science, Data Engineering, Data Science, or related discipline
Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
PostgreSQL (or similar SQL database(s))
Neo4j/Cypher
ETL (Extract, Transform, and Load) processes
Airflow or other data pipeline technology
Kafka (distributed event streaming platform)
Proficient or better in a scripting language, ideally Python
Experience tuning and optimizing database performance
Knowledge of modern data integration patterns
Strong verbal and written communication skills and ability to work in a collaborative team environment, spanning global time zones
Proactive approach and goal-oriented mindset
Self-driven approach to research and problem solving with proven analytical skills
Ability to manage tasks across multiple projects at the same time

PREFERRED QUALIFICATIONS
Pharmaceutical experience
Experience working with Agile delivery methodologies (e.g., Scrum)
Experience with graph databases
Experience with Snowflake
Familiarity with cloud platforms such as AWS
Experience with containerization technologies such as Docker and orchestration tools like Kubernetes

PHYSICAL/MENTAL REQUIREMENTS
None

NON-STANDARD WORK SCHEDULE, TRAVEL OR ENVIRONMENT REQUIREMENTS
Job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases as well as unplanned/on-call level 3 support. Travel requirements are project based. Estimated percentage of travel to support project and departmental activities is less than 10%.

Work Location Assignment: Hybrid

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech #LI-PFE
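As a hedged sketch of the Neo4j/Cypher skills listed above, the snippet below links manufacturing entities and queries them with the official Python driver. The URI, credentials, and the Batch/Line data model are illustrative, not Pfizer's actual schema.

```python
# Create and query simple relationships in Neo4j with the official Python driver.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "***"))

def link_batch_to_line(tx, batch_id: str, line_id: str):
    tx.run(
        "MERGE (b:Batch {id: $batch_id}) "
        "MERGE (l:Line {id: $line_id}) "
        "MERGE (b)-[:PRODUCED_ON]->(l)",
        batch_id=batch_id,
        line_id=line_id,
    )

def batches_for_line(tx, line_id: str):
    result = tx.run(
        "MATCH (b:Batch)-[:PRODUCED_ON]->(l:Line {id: $line_id}) RETURN b.id AS batch",
        line_id=line_id,
    )
    return [record["batch"] for record in result]

with driver.session() as session:
    session.execute_write(link_batch_to_line, "B-1001", "LINE-7")
    print(session.execute_read(batches_for_line, "LINE-7"))

driver.close()
```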
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
About SAIGroup
SAIGroup is a private investment firm that has committed $1 billion to incubate and scale revolutionary AI-powered enterprise software application companies. Our portfolio, a testament to our success, comprises rapidly growing AI companies that collectively cater to over 2,000+ major global customers, approaching $800 million in annual revenue, and employing a global workforce of over 4,000 individuals. SAIGroup invests in new ventures based on breakthrough AI-based products that have the potential to disrupt existing enterprise software markets. SAIGroup’s latest investment, JazzX AI, is a pioneering technology company on a mission to shape the future of work through an AGI platform purpose-built for the enterprise. JazzX AI is not just building another AI tool—it’s reimagining business processes from the ground up, enabling seamless collaboration between humans and intelligent systems. The result is a dramatic leap in productivity, efficiency, and decision velocity, empowering enterprises to become pacesetters who lead their industries and set new benchmarks for innovation and excellence.

Job Title: AGI Solutions Engineer (Junior) – GTM Solution Delivery (Full-time, remote-first with periodic travel to client sites & JazzX hubs)

Role Overview
As an Artificial General Intelligence Engineer you are the hands-on technical force that turns JazzX's AGI platform into working, measurable solutions for customers. You will:
Build and integrate LLM-driven features, vector search pipelines, and tool-calling agents into client environments.
Collaborate with solution architects, product, and customer-success teams from discovery through production rollout.
Contribute field learnings back to the core platform, accelerating time-to-value across all deployments.
You are as comfortable writing production-quality Python as you are debugging Helm charts, and you enjoy explaining your design decisions to both peers and client engineers.

Key Responsibilities
Solution Implementation: Develop and extend JazzX AGI services (LLM orchestration, retrieval-augmented generation, agents) within customer stacks. Integrate data sources, APIs, and auth controls; ensure solutions meet security and compliance requirements. Pair with Solution Architects on design reviews; own component-level decisions.
Delivery Lifecycle: Drive proofs-of-concept, pilots, and production rollouts with an agile, test-driven mindset. Create reusable deployment scripts (Terraform, Helm, CI/CD) and operational runbooks. Instrument services for observability (tracing, logging, metrics) and participate in on-call rotations.
Collaboration & Support: Work closely with product and research teams to validate new LLM techniques in real-world workloads. Troubleshoot customer issues, triage bugs, and deliver patches or performance optimisations. Share best practices through code reviews, internal demos, and technical workshops.
Innovation & Continuous Learning: Evaluate emerging frameworks (e.g., LlamaIndex, AutoGen, WASM inferencing) and pilot promising tools. Contribute to internal knowledge bases and GitHub templates that speed future projects.

Qualifications
Must-Have
2+ years of professional software engineering experience; 1+ years working with ML or data-intensive systems.
Proficiency in Python (or Java/Go) with strong software-engineering fundamentals (testing, code reviews, CI/CD).
Hands-on experience deploying containerised services on AWS, GCP, or Azure using Kubernetes & Helm.
Practical knowledge of LLM / Gen-AI frameworks (LangChain, LlamaIndex, PyTorch, or TensorFlow) and vector databases.
Familiarity integrating REST/GraphQL APIs, streaming platforms (Kafka), and SQL/NoSQL stores.
Clear written and verbal communication skills; ability to collaborate with distributed teams.
Willingness to travel 10–20% for key customer engagements.
Nice-to-Have
Experience delivering RAG or agent-based AI solutions in regulated domains (finance, healthcare, telecom).
Cloud or Kubernetes certifications (AWS SA-Assoc/Pro, CKA, CKAD).
Exposure to MLOps stacks (Kubeflow, MLflow, Vertex AI) or data-engineering tooling (Airflow, dbt).

Attributes
Empathy & Ownership: You listen carefully to user needs and take full ownership of delivering great experiences.
Startup Mentality: You move fast, learn quickly, and are comfortable wearing many hats.
Detail-Oriented Builder: You care about the little things.
Mission-Driven: You want to solve important, high-impact problems that matter to real people.
Team-Oriented: Low ego, collaborative, and excited to build alongside highly capable engineers, designers, and domain experts.

Travel
This position requires the ability to travel to client sites as needed for on-site deployments and collaboration. Travel is estimated at approximately 20–30% of the time (varying by project), and flexibility is expected to accommodate key client engagement activities.

Why Join Us
At JazzX AI, you have the opportunity to join the foundational team that is pushing the boundaries of what's possible to create an autonomous intelligence driven future. We encourage our team to pursue bold ideas, foster continuous learning, and embrace the challenges and rewards that come with building something truly innovative. Your work will directly contribute to pioneering solutions that have the potential to transform industries and redefine how we interact with technology. As an early member of our team, your voice will be pivotal in steering the direction of our projects and culture, offering an unparalleled chance to leave your mark on the future of AI. We offer a competitive salary, equity options, and an attractive benefits package, including health, dental, and vision insurance, flexible working arrangements, and more.
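To make the retrieval-augmented generation (RAG) work mentioned above concrete, here is a framework-neutral, hedged sketch: a placeholder embedding function and in-memory cosine search assemble a prompt from retrieved context. A production build would swap in a real embedding model, a vector database, and an LLM call; everything here is an assumption for illustration.

```python
# Minimal RAG sketch: retrieve relevant context, then assemble a prompt.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash characters into a small dense unit vector."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOCS = [
    "Claims over $10,000 require a second-level review.",
    "Password resets are handled by the identity team.",
    "Quarterly close must be completed within five business days.",
]
DOC_VECTORS = np.stack([embed(d) for d in DOCS])

def retrieve(query: str, k: int = 2) -> list:
    scores = DOC_VECTORS @ embed(query)          # cosine similarity (unit-norm vectors)
    return [DOCS[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # An LLM call would go here; prompt assembly is the part being illustrated.
    return f"PROMPT:\nContext:\n{context}\n\nQuestion: {query}"

print(answer("How long do we have for quarterly close?"))
```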
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Engineering team does just that. Our engineering organization is where high-quality professional engineering meets individual impact. Our team creates products that are built on a mature, cloud-native, event-driven microservices architecture hosted in AWS.

SailPoint is seeking a Backend Software Engineer to help build a new cloud-based SaaS identity analytics product. We are looking for well-rounded backend or full stack engineers who are passionate about building and delivering reliable, scalable microservices and infrastructure for SaaS products. As one of the first members on the team, you will be integral in building this product and will be part of an agile team that is in startup mode. This is a unique opportunity to build something from scratch but have the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities
Deliver efficient, maintainable data pipelines
Deliver robust, bug-free code for Java-based microservices
Build and maintain data analytics and machine learning features
Produce designs and rough estimates, and implement features based on product requirements
Collaborate with peers on designs, code reviews, and testing
Produce unit and end-to-end tests to improve code quality and maximize code coverage for new and existing features
Responsible for on-call production support

Requirements
4+ years of professional software development experience
Strong Python, SQL, and Java experience
Great communication skills
BS in Computer Science, or a related field
Comprehensive object-oriented analysis and design skills
Experience with workflow engines
Experience with continuous delivery and source control
Experience with observability platforms for performance metrics collection and monitoring

Preferred
Strong experience in Airflow, Snowflake, dbt
Experience with ML pipelines (SageMaker)
Experience with continuous delivery
Experience working on a Big Data/Machine Learning product

Compensation and benefits
Experience a small-company atmosphere with big-company benefits. Recharge your batteries with a flexible vacation policy and paid holidays. Grow with us with both technical and career growth opportunities. Enjoy a healthy work-life balance with flexible hours, family-friendly company events and charitable work.

SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
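As a hedged sketch of the unit-testing practice this role calls out, here is a small, pure transform function with pytest cases that pin its behaviour. The function and field names are invented for illustration, not SailPoint's codebase.

```python
# A tiny pipeline transform plus pytest cases that lock in its behaviour.
import pytest

def normalize_identity(record: dict) -> dict:
    """Lower-case the email and reject records without an id."""
    if not record.get("id"):
        raise ValueError("identity record missing id")
    return {**record, "email": record.get("email", "").strip().lower()}

def test_normalize_identity_lowercases_email():
    out = normalize_identity({"id": "u1", "email": "  Alice@Example.COM "})
    assert out["email"] == "alice@example.com"

def test_normalize_identity_rejects_missing_id():
    with pytest.raises(ValueError):
        normalize_identity({"email": "bob@example.com"})
```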
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
This job is with Pfizer, an inclusive employer and a member of myGwork – the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.
Role Summary
At Pfizer we make medicines and vaccines that change patients' lives, with a global reach of over 1.1 billion patients. Pfizer Digital is the organization charged with winning the digital race in the pharmaceutical industry. We apply our expertise in technology, innovation, and our business to support Pfizer in this mission.
About
Our team, the GSES Team, is passionate about using software and data to improve manufacturing processes. We partner with other Pfizer teams focused on:
Manufacturing throughput efficiency and increased manufacturing yield
Reduction of end-to-end cycle time and increase of percent release attainment
Increased quality control lab throughput and more timely closure of quality assurance investigations
Increased manufacturing yield of vaccines
More cost-effective network planning decisions and lowered inventory costs
In the Senior Associate, Integration Engineer role, you will help implement data capabilities within the team to enable advanced, innovative, and scalable database services and data platforms. You will utilize modern Data Engineering principles and techniques to help the team better deliver value in the form of AI, analytics, business intelligence, and operational insights. You will be on a team responsible for executing on technical strategies, designing architecture, and developing solutions to enable the Digital Manufacturing organization to deliver value to our partners across Pfizer. Most of all, you'll use your passion for data to help us deliver real value to our global network of manufacturing facilities, changing patient lives for the better!
Role Responsibilities
The Senior Associate, Integration Engineer's responsibilities include, but are not limited to:
Maintain database service catalogues
Build, maintain and optimize data pipelines
Support cross-functional teams with data-related tasks
Troubleshoot data-related issues, identify root causes, and implement solutions in a timely manner
Automate builds and deployments of database environments
Support development teams in database-related troubleshooting and optimization
Document technical specifications, data flows, system architectures and installation instructions for the provided services
Collaborate with stakeholders to understand data requirements and translate them into technical solutions
Participate in relevant SAFe ceremonies and meetings
Basic Qualifications
Education: Bachelor's degree or Master's degree in Computer Science, Data Engineering, Data Science, or related discipline
Minimum 3 years of experience in Data Engineering, Data Science, Data Analytics or similar fields
Broad understanding of data engineering techniques and technologies, including at least 3 of the following:
PostgreSQL (or similar SQL database(s))
Neo4J/Cypher
ETL (Extract, Transform, and Load) processes
Airflow or other data pipeline technology
Kafka (distributed event streaming platform)
Proficient or better in a scripting language, ideally Python
Experience tuning and optimizing database performance
Knowledge of modern data integration patterns
Strong verbal and written communication skills and ability to work in a collaborative team environment spanning global time zones
Proactive approach and goal-oriented mindset
Self-driven approach to research and problem solving with proven analytical skills
Ability to manage tasks across multiple projects at the same time
Preferred Qualifications
Pharmaceutical industry experience
Experience working with Agile delivery methodologies (e.g., Scrum)
Experience with graph databases
Experience with Snowflake
Familiarity with cloud platforms such as AWS
Experience with containerization technologies such as Docker and orchestration tools like Kubernetes
Physical/Mental Requirements
None
Non-standard Work Schedule, Travel or Environment Requirements
The job will require working with global teams and applications. A flexible working schedule will be needed on occasion to accommodate planned agile sprint planning and system releases as well as unplanned/on-call level 3 support. Travel requirements are project based; the estimated percentage of travel to support project and departmental activities is less than 10%.
Work Location Assignment: Hybrid
Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.
Information & Business Tech
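For illustration only (none of this appears in the posting), the sketch below shows a tiny ingestion step in the spirit of the stack listed above: consuming JSON events from Kafka with the kafka-python client and upserting them into PostgreSQL with psycopg2. The topic, table, and connection details are invented placeholders.

```python
import json

import psycopg2
from kafka import KafkaConsumer  # kafka-python

# Hypothetical connection details; real values would come from configuration or a secrets store.
consumer = KafkaConsumer(
    "batch-release-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)
conn = psycopg2.connect("dbname=manufacturing user=etl password=secret host=localhost")

with conn, conn.cursor() as cur:
    for message in consumer:  # note: consumes indefinitely; a real job would batch and commit
        event = message.value
        # Upsert each event into a staging table keyed by event_id.
        cur.execute(
            """
            INSERT INTO staging.release_events (event_id, payload)
            VALUES (%s, %s)
            ON CONFLICT (event_id) DO UPDATE SET payload = EXCLUDED.payload
            """,
            (event["event_id"], json.dumps(event)),
        )
```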
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
maharashtra
On-site
We are looking for a Lead Data Engineer with at least 7 years of experience who is proficient in Python, PySpark, Airflow (batch jobs), HPCC, and ECL. The role involves driving complex data solutions across various teams. Practical knowledge of data modeling and test-driven development, along with familiarity with Agile/Waterfall methodologies, is essential. Responsibilities include leading projects, working collaboratively with different teams, and transforming business requirements into scalable data solutions following industry best practices in managed services or staff augmentation environments.
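As a hedged illustration of the PySpark skills named in this posting (the file paths and column names are hypothetical, not part of the role description), a batch transformation of the kind such a role involves might look like:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_orders_rollup").getOrCreate()

# Hypothetical input: Parquet files of raw order records.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETE")
    .groupBy("order_date", "region")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Write the curated rollup back out for downstream consumers.
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_orders/")
```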
Posted 2 weeks ago
0.0 - 10.0 years
0 Lacs
Delhi
On-site
Job requisition ID: 84960
Date: Jul 27, 2025
Location: Delhi
Designation: Senior Consultant
Entity: Deloitte Touche Tohmatsu India LLP
Your potential, unleashed.
India’s impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.
The team
As a member of the Operation, Industry and Domain Solutions team you will embark on an exciting and fulfilling journey with a group of intelligent, innovative and globally aware individuals. We work in conjunction with various institutions solving key business problems across a broad spectrum of roles and functions, all set against the backdrop of constant industry change.
Your work profile: DevOps Engineer
Qualifications: B.E./B.Tech./MCA/M.E./M.Tech
Required Experience: 10 years or more
Desirable: Experience in Govt. IT Projects / Govt. Health IT Projects
Rich experience in analyzing enterprise application performance, determining root cause, and optimizing resources up and down the stack
Scaling application workloads in Linux / VMware
Demonstrates Technical Qualification
Administering and utilizing Jenkins / GitLab CI at scale for build management and continuous integration
Very strong in Kubernetes, Envoy, Consul, service mesh, and API gateways
Substantial knowledge of monitoring tools like Zipkin, Kibana, Grafana, Prometheus, SonarQube
Strong CI/CD experience
Relevant experience in any cloud platform
Creating Docker images and managing Docker containers
Scripting for configuration management
Experience with Airflow, ELK, and Dataflow for ETL
Good to have: infrastructure-as-code, secrets management, deployment strategies, cloud networking
Familiarity with primitives like Deployments and cron jobs
Scripting experience
Supporting highly available open-source production applications and tools
How you’ll grow
Connect for impact
Our exceptional team of professionals across the globe are solving some of the world’s most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead
You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all
At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.
Drive your career
At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.
Everyone’s welcome… entrust your happiness to us
Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here’s a glimpse of things that are in store for you.
Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research, know some background about the organisation and the business area you’re applying to. Check out recruiting tips from Deloitte professionals.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
Phonologies, a leading provider of speech technology and voice bots in India, is seeking individuals to join the team and revolutionize the delivery of conversational AI and voice bots over the phone. Our innovative solutions are seamlessly integrated with top contact center providers, telephony systems, and CRM systems. We are on the lookout for dynamic and skilled specialists to contribute to the development of our cutting-edge customer interaction solutions. As part of our team, you will be involved in developing and implementing machine learning models for real-world applications. Ideal candidates should possess a minimum of 5 years of experience and demonstrate proficiency in Python, scikit-learn, as well as familiarity with tools such as MLFlow, Airflow, and Docker. You will collaborate with engineering and product teams to create and manage ML pipelines, monitor model performance, and uphold ethical and explainable AI practices. Essential skills for this role include strong capabilities in feature engineering, ownership of the model lifecycle, and effective communication. Please note that recent graduates will not be considered for this position. In our welcoming and professional work environment, your role will offer both challenges and opportunities for growth. The position will be based in Pune. If you believe your qualifications align with our requirements, we encourage you to submit your resume (in .pdf format) to Careers@Phonologies.com. Additionally, kindly include a brief introduction about yourself in the email. We are excited to learn more about you and potentially welcome you to our team!,
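Purely as a sketch of the scikit-learn / MLFlow workflow this posting mentions (the dataset, run name, and parameters are made up for illustration), training a model with tracked parameters and metrics might look like:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="intent-classifier-baseline"):  # hypothetical run name
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)

    # Track the experiment so results are reproducible and comparable.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")
```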
Posted 2 weeks ago
3.0 - 8.0 years
0 Lacs
delhi
On-site
As a Snowflake Solution Architect, you will own and drive the development of Snowflake solutions and products as part of the COE. Your role will involve working with and guiding the team to build solutions using the latest innovations and features launched by Snowflake. Additionally, you will conduct sessions on the latest and upcoming launches of the Snowflake ecosystem and liaise with Snowflake Product and Engineering to stay ahead of new features, innovations, and updates. You will be expected to publish articles and architectures that solve real business problems, and to work on accelerators that demonstrate how Snowflake solutions and tools integrate with and compare to other platforms such as AWS, Azure Fabric, and Databricks.
In this role, you will lead the post-sales technical strategy and execution for high-priority Snowflake use cases across strategic customer accounts. You will also be responsible for triaging and resolving advanced, long-running customer issues while ensuring timely and clear communication. Developing and maintaining robust internal documentation, knowledge bases, and training materials to scale support efficiency will also be part of your responsibilities, as will supporting enterprise-scale RFPs focused on Snowflake.
To be successful in this role, you should have at least 8 years of industry experience, including a minimum of 3 years in a Snowflake consulting environment. You should have experience implementing and operating Snowflake-centric solutions and proficiency in implementing data security measures, access controls, and design specifically within the Snowflake platform. An understanding of the complete data analytics stack and workflow, from ETL to data platform design to BI and analytics tools, is essential. Strong skills in databases, data warehouses, and data processing are required, along with extensive hands-on expertise in SQL and SQL analytics. Familiarity with data science concepts and Python is a strong advantage.
Knowledge of Snowflake components such as Snowpipe, query parsing and optimization, Snowpark, Snowflake ML, authorization and access control management, metadata management, infrastructure management and auto-scaling, the Snowflake Marketplace for datasets and applications, and DevOps and orchestration tools like Airflow, dbt, and Jenkins is necessary. Snowflake certifications are a good-to-have qualification.
Strong communication and presentation skills are essential, as you will engage with both technical and executive audiences, and you should be skilled at working collaboratively across engineering, product, and customer success teams.
This position is open in all Xebia office locations, including Pune, Bangalore, Gurugram, Hyderabad, Chennai, Bhopal, and Jaipur. If you meet the above requirements and are excited about this opportunity, please share your details here: [Apply Now](https://forms.office.com/e/LNuc2P3RAf)
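As a small, hypothetical illustration of the Snowpark skills this posting lists (the connection parameters, table, and column names are all assumptions, and credentials would normally come from a secrets manager rather than source code):

```python
from snowflake.snowpark import Session
from snowflake.snowpark import functions as F

# Hypothetical connection parameters for the sketch only.
connection_parameters = {
    "account": "my_account",
    "user": "analyst",
    "password": "********",
    "warehouse": "ANALYTICS_WH",
    "database": "SALES",
    "schema": "PUBLIC",
}
session = Session.builder.configs(connection_parameters).create()

# Aggregate order amounts by region; the work is pushed down into Snowflake.
orders = session.table("ORDERS")
summary = (
    orders
    .filter(F.col("STATUS") == "COMPLETE")
    .group_by("REGION")
    .agg(F.sum("AMOUNT").alias("TOTAL_AMOUNT"))
)
summary.show()
```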
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
ahmedabad, gujarat
On-site
YipitData is a leading market research and analytics firm specializing in the disruptive economy, having recently secured a significant investment from The Carlyle Group valued at over $1B. Recognized for three consecutive years as one of Inc's Best Workplaces, we are a rapidly expanding technology company with offices across various locations globally, fostering a culture centered on mastery, ownership, and transparency.
As a potential candidate, you will have the opportunity to collaborate with strategic engineering leaders and report directly to the Director of Data Engineering. This role involves contributing to the establishment of our Data Engineering team presence in India and working within a global team framework, tackling challenging big data problems.
We are currently in search of a highly skilled Senior Data Engineer with 6-8 years of relevant experience to join our dynamic Data Engineering team. The ideal candidate should possess a solid grasp of Spark and SQL, along with experience in data pipeline development. Successful candidates will play a vital role in expanding our data engineering team, focusing on enhancing reliability, efficiency, and performance within our strategic pipelines.
The Data Engineering team at YipitData sets the standard for all other analyst teams, maintaining and developing the core pipelines and tools that drive our products. This team plays a crucial role in supporting the rapid growth of our business and presents a unique opportunity for the first hire to potentially lead and shape the team as responsibilities evolve. This hybrid role will be based in India, with training and onboarding requiring overlap with US working hours initially. Subsequently, standard IST working hours are permissible, with occasional meetings with the US team.
As a Senior Data Engineer at YipitData, you will work directly under the Senior Manager of Data Engineering, receiving hands-on training on cutting-edge data tools and techniques. Responsibilities include building and maintaining end-to-end data pipelines, establishing best practices for data modeling and pipeline construction, generating documentation and training materials, and proficiently resolving complex data pipeline issues using PySpark and SQL. Collaboration with stakeholders to integrate business logic into central pipelines and mastering tools like Databricks, Spark, and other ETL technologies are also key aspects of the role.
Successful candidates are likely to have a Bachelor's or Master's degree in Computer Science, STEM, or a related field, with at least 6 years of experience in Data Engineering or similar technical roles. An enthusiasm for problem-solving, continuous learning, and a strong understanding of data manipulation and pipeline development are essential. Proficiency in working with large datasets using PySpark, Delta, and Databricks, aligning data transformations with business needs, and a willingness to acquire new skills are crucial for success. Effective communication skills, a proactive approach, and the ability to work collaboratively with stakeholders are highly valued.
In addition to a competitive salary, YipitData offers a comprehensive compensation package that includes various benefits, perks, and opportunities for personal and professional growth. Employees are encouraged to focus on their impact, self-improvement, and skill mastery in an environment that promotes ownership, respect, and trust.
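For illustration of the PySpark/Delta work described in this posting (the table and column names are hypothetical, and a Databricks or Delta-enabled Spark session with an existing Delta table is assumed), an incremental upsert into a Delta table might look like:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("incremental_upsert").getOrCreate()

# Hypothetical daily increment of transaction records.
updates = spark.read.parquet("s3://example-bucket/raw/transactions/2024-01-01/")

# Merge the increment into the target Delta table, keyed on transaction_id.
target = DeltaTable.forName(spark, "analytics.transactions")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.transaction_id = u.transaction_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```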
Posted 2 weeks ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Payer Analytics Specialist
Position Summary
The Payer Analytics Specialist is responsible for driving insights and supporting decision-making by analyzing healthcare payer data, creating data pipelines, and managing complex analytics projects. This role involves collaborating with cross-functional teams (Operations, Product, IT, and external partners) to ensure robust data integration, reporting, and advanced analytics capabilities. The ideal candidate will have strong technical skills, payer domain expertise, and the ability to manage 3rd-party data sources effectively.
Key Responsibilities
Data Integration and ETL Pipelines: Develop, maintain, and optimize end-to-end data pipelines, including ingestion, transformation, and loading of internal and external data sources. Collaborate with IT and Data Engineering teams to design scalable, secure, and high-performing data workflows. Implement best practices in data governance, version control, data security, and documentation.
Analytics and Reporting:
Data Analysis: Analyze CPT-level data to identify trends, patterns, and insights relevant to healthcare services and payer rates.
Benchmarking: Compare and benchmark rates provided by different health insurance payers within designated zip codes to assess competitive positioning.
Build and maintain analytical models for cost, quality, and utilization metrics, leveraging tools such as Python, R, or SQL-based BI tools. Develop dashboards and reports to communicate findings to stakeholders across the organization.
3rd-Party Data Management: Ingest and preprocess 3rd-party data from multiple sources and transform it into unified structures for analytics and reporting. Ensure compliance with transparency requirements and enable downstream analytics. Design automated workflows to update and validate data, working closely with external vendors and technical teams. Establish best practices for data quality checks (e.g., encounter completeness, claim-level validations) and troubleshooting.
Project Management and Stakeholder Collaboration: Manage analytics project lifecycles: requirement gathering, project scoping, resource planning, timeline monitoring, and delivery. Partner with key stakeholders (Finance, Operations, Population Health) to define KPIs, data needs, and reporting frameworks. Communicate technical concepts and results to non-technical audiences, providing clear insights and recommendations.
Quality Assurance and Compliance: Ensure data quality by implementing validation checks, audits, and anomaly detection frameworks. Maintain compliance with HIPAA, HITECH, and other relevant healthcare regulations and data privacy requirements. Participate in internal and external audits of data processes.
Continuous Improvement and Thought Leadership: Stay current with industry trends, analytics tools, and regulatory changes affecting payer analytics. Identify opportunities to enhance existing data processes, adopt new technologies, and promote a data-driven culture within the organization. Mentor junior analysts and share best practices in data analytics, reporting, and pipeline development.
Required Qualifications
Education & Experience: Bachelor's degree in Health Informatics, Data Science, Computer Science, Statistics, or a related field (Master's degree a plus). 3-5+ years of experience in healthcare analytics, payer operations, or related fields.
Technical Skills
Data Integration & ETL: Proficiency in building data pipelines using tools like SQL, Python, R, or ETL platforms (e.g., Talend, Airflow, or Data Factory).
Databases & Cloud: Experience working with relational databases (SQL Server, PostgreSQL) and cloud environments (AWS, Azure, GCP).
BI & Visualization: Familiarity with BI tools (Tableau, Power BI, Looker) for dashboard creation and data storytelling.
MRF, All Claims, & Definitive Healthcare Data: Hands-on experience (or strong familiarity) with healthcare transparency data sets, claims data ingestion strategies, and provider/facility-level data from 3rd-party sources like Definitive Healthcare.
Healthcare Domain Expertise
Strong understanding of claims data structures (UB-04, CMS-1500), coding systems (ICD, CPT, HCPCS), and payer processes.
Knowledge of healthcare regulations (HIPAA, HITECH, transparency rules) and how they impact data sharing and management.
Analytical & Problem-Solving Skills
Proven ability to synthesize large datasets, pinpoint issues, and recommend data-driven solutions.
Comfort with statistical analysis and predictive modeling using Python or R.
Soft Skills
Excellent communication and presentation skills, with the ability to convey technical concepts to non-technical stakeholders.
Strong project management and organizational skills, with the ability to handle multiple tasks and meet deadlines.
Collaborative mindset and willingness to work cross-functionally to achieve shared objectives.
Preferred/Additional Qualifications
Advanced degree (MBA, MPH, MS in Analytics, or similar).
Experience with healthcare cost transparency regulations and handling MRF data specifically for compliance.
Familiarity with DataOps or DevOps practices to automate and streamline data pipelines.
Certification in BI or data engineering (e.g., Microsoft Certified: Azure Data Engineer, AWS Data Analytics Specialty).
Experience establishing data stewardship programs and leading data governance initiatives.
Why Join Us
Impactful Work - Play a key role in leveraging payer data to reduce costs, improve quality, and shape population health strategies.
Innovation - Collaborate on advanced analytics projects using state-of-the-art tools and platforms.
Growth Opportunity - Be part of an expanding analytics team where you can lead initiatives, mentor others, and deepen your healthcare data expertise.
Supportive Culture - Work in an environment that values open communication, knowledge sharing, and continuous learning.
(ref:hirist.tech)
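As a toy, hypothetical example of the rate-benchmarking analysis this role describes (the DataFrame columns and values are invented and do not represent real payer data), comparing each payer's negotiated rate against the median for its CPT code and zip code might look like:

```python
import pandas as pd

# Hypothetical negotiated-rate records by payer, CPT code, and zip code.
rates = pd.DataFrame(
    {
        "payer": ["A", "B", "C", "A", "B"],
        "cpt_code": ["99213", "99213", "99213", "70450", "70450"],
        "zip_code": ["10001", "10001", "10001", "10001", "10001"],
        "negotiated_rate": [85.0, 92.5, 78.0, 210.0, 198.0],
    }
)

# Benchmark: each payer's rate as a percentage of the median for that CPT code and zip code.
median_rate = rates.groupby(["cpt_code", "zip_code"])["negotiated_rate"].transform("median")
rates["pct_of_median"] = (rates["negotiated_rate"] / median_rate * 100).round(1)

print(rates.sort_values(["cpt_code", "pct_of_median"]))
```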
Posted 2 weeks ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
GCP Senior Data Engineer
Chennai, India
A skilled data engineering professional with 5 years of experience in GCP BigQuery and Oracle PL/SQL, specializing in designing and implementing end-to-end batch data processes in the Google Cloud ecosystem. Strong hands-on expertise with:
Core Skills & Tools:
Mandatory: GCP, BigQuery
Additional Tools: GCS, DataFlow, Cloud Composer, Pub/Sub, GCP Storage, Google Analytics Hub
Nice to Have: Apache Airflow, GCP DataProc, GCP DMS, Python
Technical Proficiency:
Expert in BigQuery, BQL, and DBMS
Well-versed in Linux and Python scripting
Skilled in Terraform for GCP infrastructure automation
Proficient in CI/CD tools such as GitHub, Jenkins, and Nexus
Experience with GCP orchestration tools: Cloud Composer, DataFlow, and Pub/Sub
Additional Strengths:
Strong communication and collaboration skills
Capable of building scalable, automated cloud-based solutions
Able to work across both data engineering and DevOps environments
This profile is well suited for roles involving cloud-based data architecture, automation, and pipeline orchestration within the GCP environment.
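As a minimal sketch of the BigQuery work described above (the project, dataset, and table names are assumptions, and authentication via application-default credentials is assumed), running a parameterized aggregation with the official Python client might look like:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT region, COUNT(*) AS order_count, SUM(amount) AS total_amount
    FROM `example-project.sales.orders`   -- hypothetical table
    WHERE order_date = @order_date
    GROUP BY region
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("order_date", "DATE", "2024-01-01")]
)

# Submit the query job and iterate over the result rows.
for row in client.query(query, job_config=job_config).result():
    print(row["region"], row["order_count"], row["total_amount"])
```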
Posted 2 weeks ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title – Data Platform Operations Lead
Preferred Location - Bangalore/Hyderabad, India
Full time/Part Time - Full Time
Build a career with confidence
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do.
Role Responsibilities
Platform Development & Enablement
Build and maintain scalable, modular services and frameworks for ELT pipelines, data lakehouse processing, integration orchestration, and infrastructure provisioning.
Enable self-service capabilities for data engineers, platform operators, and integration teams through tools, documentation, and reusable patterns.
Lead the platform architecture and development of core components such as data pipelines, observability tooling, infrastructure as code (IaC), and DevOps automation.
Technical Leadership
Champion platform-first thinking: identifying common needs and abstracting solutions into shared services that reduce duplication and accelerate delivery.
Own the technical roadmap for platform capabilities across domains such as Apache Iceberg on S3, AWS Glue, Airflow/MWAA, Kinesis, CDK, and Kubernetes-based services.
Promote design patterns that support real-time and batch processing, schema evolution, data quality, and integration at scale.
Collaboration & Governance
Collaborate with Data Engineering, Platform Operations, and Application Integration leaders to ensure consistency, reliability, and scalability across the platform.
Contribute to FinOps and data governance initiatives by embedding controls and observability into the platform itself.
Work with Architecture and Security to align with cloud, data, and compliance standards.
Role Purpose
12+ years of experience in software or data platform engineering, with 2+ years in a team leadership or management role.
Strong hands-on expertise with AWS cloud services (e.g., Glue, Kinesis, S3), data lakehouse architectures (Iceberg), and orchestration tools (Airflow, Step Functions).
Experience developing infrastructure as code using AWS CDK, Terraform, or CloudFormation.
Proven ability to design and deliver internal platform tools, services, or libraries that enable cross-functional engineering teams.
Demonstrated expertise in Python for building internal tools, automation scripts, and platform services that support ELT, orchestration, and infrastructure provisioning workflows.
Proven experience leading DevOps teams and implementing CI/CD pipelines using tools such as GitHub Actions, CircleCI, or AWS CodePipeline to support rapid, secure, and automated delivery of platform capabilities.
Minimum Requirements
Experience with Nexla, Kafka, Spark, or Snowflake.
Familiarity with data mesh or product-based data architecture principles.
Track record of promoting DevOps, automation, and CI/CD best practices across engineering teams.
AWS certifications or equivalent experience preferred.
Benefits
We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary.
Have peace of mind and body with our health insurance
Make yourself a priority with flexible schedules and leave policy
Drive forward your career through professional development opportunities
Achieve your personal goals with our Employee Assistance Program
Our commitment to you
Our greatest assets are the expertise, creativity and passion of our employees. We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way.
Join us and make a difference. Apply Now!
Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.
Job Applicant's Privacy Notice
Click on this link to read the Job Applicant's Privacy Notice.
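Purely as a hedged sketch of the infrastructure-as-code work described in this role (the stack and bucket names are hypothetical, and aws-cdk-lib v2 for Python is assumed), a minimal AWS CDK app provisioning lakehouse storage might look like:

```python
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class LakehouseStorageStack(Stack):
    """Provision a versioned, encrypted S3 bucket for raw lakehouse data (illustrative only)."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(
            self,
            "RawDataBucket",                       # hypothetical logical ID
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,   # keep data if the stack is deleted
        )

app = App()
LakehouseStorageStack(app, "lakehouse-storage")
app.synth()
```

A real platform stack would add Glue catalogs, IAM boundaries, and pipeline orchestration resources; the sketch only shows the CDK app structure.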
Posted 2 weeks ago