
3678 Redshift Jobs - Page 32

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 years

0 Lacs

Delhi, India

On-site

Role & Responsibilities:
- Lead and mentor a team of data engineers, ensuring high performance and career growth.
- Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
- Drive the development and implementation of data governance frameworks and best practices.
- Work closely with cross-functional teams to define and execute a data roadmap.
- Optimize data processing workflows for performance and cost efficiency.
- Ensure data security, compliance, and quality across all data platforms.
- Foster a culture of innovation and technical excellence within the data team.

Ideal Candidate:
- 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
- Candidates from Tier-1 colleges preferred, ideally IIT.
- Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS.
- Proficiency in SQL, Python, and Scala for data processing and analytics.
- Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
- Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
- Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
- Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
- Deep knowledge of data governance, security, and compliance (GDPR, SOC 2, etc.).
- Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
- Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
- Proven ability to drive technical strategy and align it with business objectives.
- Strong leadership, communication, and stakeholder management skills.

Preferred Qualifications:
- Experience in machine learning infrastructure or MLOps is a plus.
- Exposure to real-time data processing and analytics.
- Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
- Prior experience in a SaaS or high-growth tech company.

Posted 3 weeks ago

8.0 - 12.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Job Summary:
We are seeking an experienced Data Architect with expertise in Snowflake, dbt, Apache Airflow, and AWS to design, implement, and optimize scalable data solutions. The ideal candidate will play a critical role in defining data architecture, governance, and best practices while collaborating with cross-functional teams to drive data-driven decision-making.

Key Responsibilities:
- Data Architecture & Strategy: Design and implement scalable, high-performance cloud-based data architectures on AWS. Define data modeling standards for structured and semi-structured data in Snowflake. Establish data governance, security, and compliance best practices.
- Data Warehousing & ETL/ELT Pipelines: Develop, maintain, and optimize Snowflake-based data warehouses. Implement dbt (Data Build Tool) for data transformation and modeling. Design and schedule data pipelines using Apache Airflow for orchestration (a minimal sketch follows this listing).
- Cloud & Infrastructure Management: Architect and optimize data pipelines using AWS services like S3, Glue, Lambda, and Redshift. Ensure cost-effective, highly available, and scalable cloud data solutions.
- Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to align data solutions with business goals. Provide technical guidance and mentoring to the data engineering team.
- Performance Optimization & Monitoring: Optimize query performance and data processing within Snowflake. Implement logging, monitoring, and alerting for pipeline reliability.

Required Skills & Qualifications:
- 10+ years of experience in data architecture, engineering, or related roles.
- Strong expertise in Snowflake, including data modeling, performance tuning, and security best practices.
- Hands-on experience with dbt for data transformations and modeling.
- Proficiency in Apache Airflow for workflow orchestration.
- Strong knowledge of AWS services (S3, Glue, Lambda, Redshift, IAM, EC2, etc.).
- Experience with SQL, Python, or Spark for data processing.
- Familiarity with CI/CD pipelines and Infrastructure-as-Code (Terraform/CloudFormation) is a plus.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, etc.).

Preferred Qualifications:
- Certifications: AWS Certified Data Analytics - Specialty, Snowflake SnowPro Certification, or dbt Certification.
- Experience with streaming technologies (Kafka, Kinesis) is a plus.
- Knowledge of modern data stack tools (Looker, Power BI, etc.).
- Experience in OTT streaming would be an added advantage.
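The dbt-plus-Airflow orchestration this listing asks for usually reduces to a small DAG. Below is a minimal illustrative sketch, not the employer's actual pipeline: it assumes Airflow 2.x with BashOperator, dbt installed on the worker, and a SnowSQL-driven raw load; every path and task name is hypothetical.

```python
# Minimal Airflow DAG sketch: orchestrate dbt transformations after an
# S3-to-Snowflake raw load. Paths and task ids are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="snowflake_dbt_daily",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",   # refresh the warehouse once a day
    catchup=False,
) as dag:
    # Load raw files staged in S3 into Snowflake (assumes an external stage
    # and a SnowSQL config are already set up).
    load_raw = BashOperator(
        task_id="load_raw",
        bash_command="snowsql -f /opt/pipelines/load_raw.sql",
    )

    # Run dbt models, then dbt tests, against the freshly loaded raw layer.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/warehouse",
    )

    load_raw >> dbt_run >> dbt_test
```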

Posted 3 weeks ago

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Title: Power BI Developer (WFH)
Location: Permanent work from home
Experience: 5+ years
Opportunity: Full-time
Shift: US EST (6 PM to 3 AM IST)
Company: AlifCloud IT Consulting
Notice: One week
Application Deadline: 21 July 2025

About the Company:
At AlifCloud IT Consulting Pvt. Ltd., we are dedicated to delivering exceptional white-labeled services to enhance organizational security and efficiency. We support Managed Service Providers (MSPs) with our white-labeled offerings, providing valuable pre-sales assistance, top-notch engineers, 24/7 support, and comprehensive managed services. Our mission is to empower IT partners by modernizing their technical teams and ensuring round-the-clock availability through agile and responsive engineering support.

Job Roles & Responsibilities:
- Develop & Design Dashboards: Build Power BI dashboards, reports, and visualizations that align with business needs.
- Gather & Translate Requirements: Collect user needs and convert them into BI solutions.
- Data Integration: Connect Power BI to sources like SAP BW/HANA, Oracle, Hyperion, spreadsheets, Snowflake, and Redshift (see the example query after this listing).
- Modeling & Transformation: Design semantic data models and perform efficient data transformations via Power Query.
- Optimize Performance: Ensure dashboards are fast, scalable, and user-friendly.
- Stakeholder Collaboration: Partner with business teams, analysts, and IT to ensure accurate and clear reporting.
- Data Troubleshooting & Governance: Resolve data quality issues and uphold reporting integrity.
- Maintenance & Enhancement: Update and improve existing dashboards to accommodate evolving requirements.
- Documentation & Training: Provide manuals and train end users, fostering BI self-service.
- Compliance & Security: Adhere to data governance and security protocols.

Job Skills & Requirements:
- Data Visualization & UX/UI: Deep understanding of visualization principles and user-centric design.
- Power BI Expertise: Strong proficiency in DAX, Power Query, and data modeling best practices.
- Enterprise System Integration: Experience integrating with SAP BW/HANA, Oracle, and Hyperion.
- Cloud & Spreadsheet Data Handling: Skilled in data access and transformation from Excel, Snowflake, Redshift, and other cloud platforms.
- SQL Proficiency: Strong SQL skills and familiarity with relational and columnar databases.
- Hierarchical & Financial Modeling: Ability to work with structured planning data (e.g., financial hierarchies).
- Complex Dataset Handling: Capable of managing large data volumes and delivering intuitive dashboards.
- Communication Skills: Excellent at explaining BI insights to both technical and non-technical stakeholders.

Preferred Qualifications:
- Microsoft Power BI Certification (PL-300 or DA-100)
- Experience with Azure Data Factory, Synapse, or equivalent cloud data platforms
- Experience in Agile/Scrum project delivery
- Familiarity with other BI suites like Tableau, Qlik, or SAP BO

The salary range for this position takes into consideration a variety of factors, including but not limited to skill sets, level of experience, applicable office location, training, licensures and certifications, and other business and organizational needs. The new-hire salary range displays the minimum and maximum salary targets for this position across all locations, and the range has not been adjusted for any specific state differentials. It is not typical for a candidate to be hired at or near the top of the range for their role, and compensation decisions depend on the unique facts and circumstances of each candidate.
At Alif, we believe that diversity is our strength and inclusion is our mission. Our team is a vibrant tapestry of different cultures, backgrounds, and perspectives, and we are proud to celebrate this diversity every day. Join us in celebrating the rich diversity that makes AlifCloud IT Consulting a great place to work! Together, we can create a more inclusive and equitable world.
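Power BI modeling itself happens in DAX and Power Query, but the Redshift-side data checks this listing implies can be scripted. A small hedged sketch follows; it relies on psycopg2 (Redshift is PostgreSQL wire-compatible) and pandas, and the cluster endpoint, credentials, and sales.orders table are all hypothetical.

```python
# Sanity-check a Redshift source table before wiring it into a Power BI dataset.
# Connection details and table names are hypothetical.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="bi_reader",
    password="...",  # in practice, pull from a secrets manager
)

# Row counts and null rates per day: a quick data-quality signal for the report.
quality_sql = """
    SELECT order_date,
           COUNT(*) AS row_count,
           SUM(CASE WHEN amount IS NULL THEN 1 ELSE 0 END) AS null_amounts
    FROM sales.orders
    WHERE order_date >= DATEADD(day, -30, CURRENT_DATE)
    GROUP BY order_date
    ORDER BY order_date;
"""
df = pd.read_sql(quality_sql, conn)
print(df.tail())
conn.close()
```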

Posted 3 weeks ago

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Company Overview:
Founded in 2010, we've been recognized as a "Best Place to Work" and have offices in the US (Boulder), UK (London), and India (Chennai). However, we are a remote-first company with employees across the globe! Today, we are a leading B2B marketing provider that offers two distinct solutions:

- Integrate: a lead management and data governance SaaS platform for marketing operations and demand marketers. The Integrate platform makes every lead clean, compliant, and actionable, freeing enterprise B2B marketers from bad data and operational headaches so they can focus on what matters: generating revenue.
- Pipeline360: media solutions that combine three powerful demand generation tools: targeted display, content syndication, and a comprehensive marketplace model. Pipeline360 ensures that marketers achieve 100% compliant and marketable leads by effectively engaging with audiences much earlier in the buying cycle, connecting with buyers at every stage of the process, and optimizing programs to drive performance.

Our Mission:
Integrate exists to make your lead data marketable so you can drive pipeline. Pipeline360 exists to make the unpredictable predictable.

Why us? We are an organization of integrity, talent, passion, and vision with a long track record of growth, customer success, and a commitment to driving leading innovation and delivering world-class customer experience.

The Role:
Integrate's data is treated as a critical corporate asset and is seen as a competitive advantage in our business. As a Lead Data Engineer you will be working in one of the world's largest cloud-based data lakes. You should be skilled in the architecture of data warehouse solutions for the enterprise using multiple platforms (EMR, RDBMS, columnar, cloud, Snowflake). You should have extensive experience in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills, to be able to work with business owners to develop and define key business questions and to build data sets that answer those questions. Above all, you should be passionate about working with huge data sets and someone who loves to bring datasets together to answer business questions and drive change.

Responsibilities:
- Design and develop workflows, programs, and ETL to support data ingestion, curation, and provisioning of fragmented data for Data Analytics, Product Analytics, and AI.
- Work closely with Data Scientists, Software Engineers, Product Managers, Product Analysts, and other key stakeholders to gather and define requirements for Integrate's data needs.
- Use Scala, SQL, Snowflake, and BI tools to deliver data to customers.
- Understand MongoDB/PostgreSQL and transactional data workflows.
- Design data models and build data architecture that enables reporting, analytics, advanced AI/ML, and Generative AI solutions.
- Develop an understanding of the data and build business acumen.
- Develop and maintain the data warehouse and datamarts in the cloud using Snowflake (see the sketch after this listing).
- Create reporting dashboards for internal and client stakeholders.
- Understand the business use cases and customer value behind large sets of data and develop meaningful analytic solutions.

Basic Qualifications:
- Advanced degree in Statistics, Computer Science, or a related technical/scientific field.
- 9+ years of experience in a Data Engineer development role.
- Advanced knowledge of SQL, Python, and data processing workflows. Spark/Scala, MLflow, and AWS experience is nice to have.
- Strong experience and advanced technical skills writing APIs.
- Extensive knowledge of data warehousing, ETL, and BI architectures, concepts, and frameworks; strong in metadata definition, data migration, and integration, with emphasis on both high-end OLTP and business intelligence solutions.
- Develop complex stored procedures and queries to support applications and reporting solutions.
- Optimize slow-running queries and overall query performance; create optimized queries and data migration scripts.
- Leadership skills to mentor and train junior team members and stakeholders.
- Capable of creating a long-term and short-term data architecture vision, and a tactical roadmap to achieve it starting from the current state.
- Strong data management abilities (i.e., understanding data reconciliations).
- Capable of facilitating data discovery sessions involving business subject matter experts.
- Strong communication/partnership skills to gain the trust of stakeholders.
- Knowledge of professional software engineering practices and best practices for the full software development lifecycle, including coding standards, code reviews, source control management, build processes, testing, and operations.

Preferred Qualifications:
- Industry experience as a Data Engineer or in a related specialty (e.g., Software Engineer, Business Intelligence Engineer, Data Scientist) with a track record of manipulating, processing, and extracting value from large datasets.
- Experience building data products incrementally and integrating and managing datasets from multiple sources.
- Query performance tuning skills using Unix profiling tools and SQL.
- Experience leading large-scale data warehousing and analytics projects, including using AWS technologies - Snowflake, Redshift, S3, EC2, Data Pipeline, and other big data technologies.

Integrate in the News:
- Best Tech Startups in Arizona (2018-2021)
- Integrate Acquires Akkroo
- Integrate Acquires ListenLoop
- Why Four MarTech CEOs Bet Big on Integrate
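Loading and refreshing Snowflake datamarts, as described above, might look like the following sketch using the snowflake-connector-python package. It is an illustration only: the account, stage, and table names are invented, and a real pipeline would use key-pair auth and incremental models.

```python
# Sketch: load a staged file into Snowflake and refresh a reporting table.
# Account, stage, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example-account",
    user="etl_user",
    password="...",          # use key-pair auth or a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="CURATED",
)
cur = conn.cursor()

# COPY INTO pulls staged files into the raw table; Snowflake tracks loaded
# files, so re-running is safe.
cur.execute("""
    COPY INTO raw_events
    FROM @events_stage/2025/07/
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

# Rebuild a datamart table consumed by BI dashboards.
cur.execute("""
    CREATE OR REPLACE TABLE dm_daily_signups AS
    SELECT DATE_TRUNC('day', created_at) AS day, COUNT(*) AS signups
    FROM raw_events
    WHERE event_type = 'signup'
    GROUP BY 1
""")
conn.close()
```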

Posted 3 weeks ago

5.0 years

16 - 30 Lacs

Chennai, Tamil Nadu, India

On-site

This role is for one of Weekday's clients.
Salary range: Rs 16,00,000 - Rs 30,00,000 (i.e., INR 16-30 LPA)
Min Experience: 5 years
Location: Chennai
Job Type: Full-time

Requirements:
Our client is a fast-growing, technology-first cross-border payments company headquartered in Singapore with a strong operational presence in Chennai. They are committed to redefining global payments by building secure, scalable, and seamless financial infrastructure using cutting-edge cloud and data technologies.

About the Role:
We are looking for a Data Engineer - GTP to join our dynamic team in Chennai. In this role, you will be responsible for building and maintaining robust, scalable, and high-performance data pipelines that power real-time analytics and reporting needs across business functions. You will work closely with data analysts, data scientists, and platform engineers to support key business and product initiatives. This is an exciting opportunity for professionals passionate about working with modern data technologies and driving business impact through data solutions in a mission-critical industry like cross-border payments.

Key Responsibilities:
- Design, develop, and maintain ETL pipelines using AWS Glue (PySpark) for structured and semi-structured data sources (a skeleton Glue job follows this listing).
- Build and manage data lakes, ensuring high availability, performance, and data integrity.
- Perform complex data modeling and data transformation to support reporting and analytics needs.
- Develop and optimize SQL queries for data extraction using AWS Athena and Redshift.
- Create dynamic, interactive dashboards using AWS QuickSight for internal stakeholders and decision-makers.
- Collaborate with engineering, product, and business teams to understand data requirements and deliver solutions aligned with business goals.
- Implement data quality checks, monitoring, and logging systems to maintain reliability and compliance.
- Ensure proper data access control and governance using AWS IAM and other cloud-native security tools.
- Assist in building a data platform strategy that supports business scale and agility.
- Maintain thorough documentation of data flows, architecture, and operational procedures.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, IT, or a related technical discipline.
- Minimum 5 years of hands-on experience in data engineering roles.
- Proficiency in AWS services: S3, Glue (PySpark), Redshift, Lambda, IAM, Athena.
- Proven experience designing and implementing ETL workflows using AWS Glue (PySpark).
- Deep understanding of data lake architectures, partitioning, and performance optimization.
- Experience with SQL and AWS Athena for complex data analysis.
- Strong experience in dashboarding and visual storytelling using AWS QuickSight.
- Exposure to CI/CD practices, version control (Git), and Agile development methodologies is a plus.
- Excellent communication skills, with the ability to convey technical concepts clearly to non-technical stakeholders.

Why Join Us?
- Opportunity to work on large-scale payment systems with global reach.
- High-impact role with visibility across teams and leadership.
- Work with a talented, mission-driven team in a modern tech stack.
- Flexible hybrid work culture and competitive compensation.
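A skeleton AWS Glue (PySpark) job of the kind this role centers on is sketched below. It follows the standard Glue job structure (GlueContext, DynamicFrames, ApplyMapping), but the catalog database, table, mappings, and bucket are placeholders, not the client's real schema.

```python
# Skeleton of an AWS Glue (PySpark) job: catalog source -> transform -> S3 lake.
# Database, table, and bucket names are illustrative.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw payments table registered in the Glue Data Catalog.
payments = glue_context.create_dynamic_frame.from_catalog(
    database="raw_payments_db", table_name="transactions"
)

# Rename and cast columns into the curated schema.
curated = ApplyMapping.apply(
    frame=payments,
    mappings=[
        ("txn_id", "string", "transaction_id", "string"),
        ("amt", "double", "amount", "double"),
        ("ts", "string", "transacted_at", "timestamp"),
    ],
)

# Write Parquet back to the data lake for Athena / Redshift Spectrum.
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-lake/curated/transactions/"},
    format="parquet",
)
job.commit()
```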

Posted 3 weeks ago

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Company Overview:
CashKaro is India's #1 cashback platform, trusted by over 25 million users! We drive more sales for Amazon, Flipkart, Myntra, and Ajio than any other paid channel, including Google and Meta. Backed by legendary investor Ratan Tata and a recent $16 million boost from Affle, we're on a rocket-ship journey, already surpassing ₹300 crore in revenue and racing towards ₹500 crore. EarnKaro, our influencer referral platform, is trusted by over 500,000 influencers and sends more traffic to leading online retailers than any other platform. Whether it's micro-influencers or top-tier creators, they choose EarnKaro to monetize their networks. BankKaro, our latest venture, is rapidly becoming India's go-to FinTech aggregator. Join our dynamic team and help shape the future of online shopping, influencer marketing, and financial technology in India!

Role Overview:
As a Product Analyst, you will play a pivotal role in enabling data-driven product decisions. You will be responsible for deep-diving into product usage data, building dashboards and reports, optimizing complex queries, and driving feature-level insights that directly influence user engagement, retention, and experience.

Key Responsibilities:
- Feature Usage & Adoption Analysis: Analyze event data to understand feature usage, retention trends, and product interaction patterns across web and app.
- User Journey & Funnel Analysis: Build funnel views and dashboards to identify drop-offs, friction points, and opportunities for UX or product improvements.
- Product Usage & Retention Analytics: Analyze user behavior, cohort trends, and retention using Redshift and BigQuery datasets. Partner with Product Managers to design and track core product KPIs.
- SQL Development & Optimization: Write and optimize complex SQL queries across Redshift and BigQuery (an example query follows this listing). Build and maintain views, stored procedures, and data models for scalable analytics.
- Dashboarding & BI Reporting: Create and maintain high-quality Power BI dashboards to track DAU/WAU/MAU, feature adoption, engagement %, and drop-off trends.
- Light Data Engineering: Use Python (pandas/NumPy) for data cleaning, transformation, and quick exploratory analysis.
- Business Insight Generation: Translate business questions into structured analyses and insights that inform product and business strategy.

Must-Have Skills:
- Expert-level SQL across Redshift and BigQuery, including performance tuning, window functions, and procedure creation.
- Strong skills in Power BI (or Tableau), with the ability to build actionable, intuitive dashboards.
- Working knowledge of Python (pandas) for quick data manipulation and ad hoc analytics.
- Deep understanding of product metrics: DAU, retention, feature usage, funnel performance.
- Strong business acumen: the ability to connect data with user behavior and product outcomes.
- Clear communication and storytelling skills to present data insights to cross-functional teams.

Good to Have:
- Experience with mobile product analytics (Android & iOS).
- Understanding of funnel, cohort, engagement, and retention metrics.
- Familiarity with A/B testing tools and frameworks.
- Experience working with Redshift, BigQuery, or cloud-based data pipelines.
- Certifications in Google Analytics, Firebase, or other analytics platforms.

Why Join Us?
- High Ownership: Drive key metrics for products used by millions.
- Collaborative Culture: Work closely with founders, product, and tech teams.
- Competitive Package: Best-in-class compensation, ESOPs, and perks.
- Great Environment: Hybrid work, medical insurance, lunches, and learning budgets.

Ensuring a diverse and inclusive workplace where we learn from each other is core to CK's values. CashKaro.com and EarnKaro.com are Equal Employment Opportunity and Affirmative Action employers. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status. CashKaro.com and EarnKaro.com will not pay any third-party agency or company that does not have a signed agreement with CashKaro.com and EarnKaro.com. Pouring Pounds India Pvt. Ltd. will likewise not pay any third-party agency or company without such a signed agreement. Visit our Career Page at https://cashkaro.com/page/careers
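The Redshift window-function work this listing emphasizes often starts with cohort retention. Below is an illustrative query with a hypothetical app_events table; the same shape ports to BigQuery using its DATE_TRUNC/DATE_DIFF equivalents.

```python
# Weekly cohort retention of the kind this role describes, in Redshift dialect.
# The app_events table and its columns are hypothetical.
WEEKLY_RETENTION_SQL = """
WITH first_seen AS (
    SELECT user_id, MIN(DATE_TRUNC('week', event_time)) AS cohort_week
    FROM app_events
    GROUP BY user_id
),
activity AS (
    SELECT DISTINCT user_id, DATE_TRUNC('week', event_time) AS active_week
    FROM app_events
)
SELECT f.cohort_week,
       DATEDIFF(week, f.cohort_week, a.active_week) AS weeks_since_signup,
       COUNT(DISTINCT a.user_id)                    AS active_users
FROM first_seen f
JOIN activity a USING (user_id)
GROUP BY 1, 2
ORDER BY 1, 2;
"""

# Run it with any DB-API client (psycopg2 works against Redshift) and load the
# result into pandas for charting or export to Power BI.
```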

Posted 3 weeks ago

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…
When we say "the stuff dreams are made of," we're not just referring to the world of wizards, dragons, and superheroes, or even to the wonders of Planet Earth. Behind WBD's vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms, and the dreamers creating what's next. From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best self. Here you are supported, here you are celebrated, here you can thrive.

Sr Analytics Engineer - Hyderabad, India

About Warner Bros. Discovery:
Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world's most differentiated and complete portfolio of content, brands, and franchises across television, film, streaming, and gaming. The new company combines WarnerMedia's premium entertainment, sports, and news assets with Discovery's leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com.

Meet Our Team:
The Data & Analytics organization is at the forefront of developing and maintaining frameworks, tools, and data products vital to WBD, including the flagship streaming product Max and non-streaming products such as the Films Group, Sports, News, and the overall WBD ecosystem. Our mission is to foster unified analytics and drive data-driven use cases by leveraging a robust multi-tenant platform and semantic layer. We are committed to delivering innovative solutions that empower teams across the company to catalyze subscriber growth, amplify engagement, and execute timely, informed decisions, ensuring our continued success in an ever-evolving digital landscape.

Roles & Responsibilities:
As a Sr Analytics Engineer, you will lead data pipeline, data strategy, and data visualization efforts for the Data & Analytics organization at Max. You're an engineer who not only understands how to use big data to answer complex business questions but also how to design semantic layers that best support self-service vehicles. You will manage projects from requirements gathering to planning to implementation of full-stack data solutions (pipelines to data tables to visualizations). You will work closely with cross-functional partners to ensure that business logic is properly represented in the semantic layer and production environments, where it can be used by the wider Product Analytics team to drive business insights and strategy.
- Design and implement data models that support flexible querying and data visualization.
- Partner with Product stakeholders to understand business questions and build out advanced analytical solutions.
- Advance automation efforts that help the team spend less time manipulating and validating data and more time analyzing it.
- Build frameworks that multiply the productivity of the team and are intuitive for other data teams to leverage.
- Participate in the creation and support of analytics development standards and best practices.
- Create systematic solutions for solving data anomalies: identifying, alerting, and root-cause analysis.
- Work proactively with stakeholders to ready data solutions for new product and/or feature releases, with a keen eye for uncovering and troubleshooting any data quality issues or nuances.
- Identify and explore new opportunities through creative analytical and engineering methods.

What To Bring:
- Bachelor's degree or MS in a quantitative field of study (Computer/Data Science, Engineering, Mathematics, Statistics, etc.)
- 5+ years of relevant experience in business intelligence/data engineering
- Expertise in writing SQL (clean, fast code is a must) and in data-warehousing concepts such as star schemas, slowly changing dimensions, ELT/ETL, and MPP databases (see the sketch after this listing)
- Experience in transforming flawed/changing data into consistent, trustworthy datasets, and in developing DAGs to batch-process millions of records
- Experience with general-purpose programming (e.g., Python, Java, Go), dealing with a variety of data structures, algorithms, and serialization formats
- Experience with big-data technologies (e.g., Spark, Kafka, Hive)
- Advanced ability to build reports and dashboards with BI tools (such as Looker and Tableau)
- Experience with analytics tools such as Athena, Redshift/BigQuery, Splunk, etc.
- Proficiency with Git (or similar version control) and CI/CD best practices
- Experience in managing workflows using Agile practices
- Ability to write clear, concise documentation and to communicate generally with a high degree of precision
- Ability to solve ambiguous problems independently
- Ability to manage multiple projects and time constraints simultaneously
- Care for the quality of the input data and how the processed data is ultimately interpreted and used
- Experience with digital products, streaming services, or subscription products is preferred
- Strong written and verbal communication skills

Characteristics & Traits:
- Naturally inquisitive, critical thinker, proactive problem-solver, and detail-oriented
- Positive attitude and an open mind
- Strong organizational skills with the ability to act independently and responsibly
- Self-starter, comfortable initiating projects from design to execution with minimal supervision
- Ability to manage and balance multiple (and sometimes competing) priorities in a fast-paced, complex business environment, and to manage time effectively to consistently meet deadlines
- Team player and relationship builder

What We Offer:
- A great place to work
- Equal opportunity employer
- Fast-track growth opportunities

How We Get Things Done…
This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day-to-day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD:
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds, and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability, or any other category protected by law. If you're a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
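One of the warehousing concepts the listing names, slowly changing dimensions, is sketched below as Redshift-flavored SQL wrapped in Python. The dim_subscriber and stg_subscriber tables are hypothetical, and a production version would cover more attributes and run inside a transaction.

```python
# Slowly-changing-dimension (type 2) refresh sketch, Redshift dialect.
# All table and column names are hypothetical.
SCD2_REFRESH_SQL = """
-- Close out current dimension rows whose source attributes changed.
UPDATE dim_subscriber
SET valid_to = CURRENT_DATE, is_current = FALSE
FROM stg_subscriber s
WHERE dim_subscriber.subscriber_id = s.subscriber_id
  AND dim_subscriber.is_current
  AND dim_subscriber.plan_tier <> s.plan_tier;

-- Insert a fresh current row for new and changed subscribers.
INSERT INTO dim_subscriber (subscriber_id, plan_tier, valid_from, valid_to, is_current)
SELECT s.subscriber_id, s.plan_tier, CURRENT_DATE, '9999-12-31', TRUE
FROM stg_subscriber s
LEFT JOIN dim_subscriber d
  ON d.subscriber_id = s.subscriber_id AND d.is_current
WHERE d.subscriber_id IS NULL OR d.plan_tier <> s.plan_tier;
"""
```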

Posted 3 weeks ago

15.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Key Responsibilities:

Platform Stabilization & Operational Excellence:
- Accountable for stable, reliable, and secure operations across all data warehouse applications, ensuring adherence to defined SLAs and KPIs.
- Assess the current data platform architecture, identify bottlenecks, and implement solutions to ensure high availability, reliability, performance, and scalability.
- Establish robust monitoring, alerting, and incident management processes for all data pipelines and infrastructure.
- Drive initiatives to improve data quality, consistency, and trustworthiness across the platform.
- Oversee the operational health and day-to-day management of existing data systems during the transition period.
- Manage relationships with strategic vendors across the enterprise applications landscape, ensuring strong performance, innovation contributions, and commercial value.

Platform Modernization & Architecture:
- Define and execute a strategic roadmap for modernizing PerkinElmer's data platform, leveraging cloud-native technologies (AWS, Azure, or GCP) and modern data stack components (e.g., data lakes/lakehouses, Data Fabric/Mesh architectures, streaming platforms like Kafka/Kinesis, orchestration tools like Airflow, ELT/ETL tools, containerization).
- Lead the design and implementation of a scalable, resilient, and cost-effective data architecture (DaaS) that meets current and future business needs.
- Champion and implement DataOps principles, including CI/CD, automated testing, and infrastructure-as-code, to improve development velocity and reliability.
- Stay abreast of emerging technologies and industry trends, evaluating and recommending new tools and techniques to enhance the platform.

Leadership & Strategy:
- Build, mentor, and lead a world-class data engineering team, fostering a culture of innovation, collaboration, and continuous improvement.
- Develop and manage the data engineering budget, resources, and vendor relationships.
- Define the overall data engineering vision, strategy, and multi-year roadmap in alignment with PerkinElmer's business objectives.
- Effectively communicate strategy, progress, and challenges to executive leadership and key stakeholders across the organization.
- Drive cross-functional collaboration with IT, Security, Enterprise Apps, R&D, and Business Units.

Data Monetization Enablement:
- Partner closely with business leaders, enterprise app teams, and other business teams to understand data needs and identify opportunities for data monetization.
- Architect data solutions, APIs, and data products that enable the creation of new revenue streams or significant internal efficiencies derived from data assets.
- Ensure robust data governance, security, and privacy controls are embedded within the platform design and data products, adhering to relevant regulations (e.g., GDPR, and HIPAA where applicable).
- Build the foundational data infrastructure required to support advanced analytics, machine learning, and AI initiatives.

Required Qualifications & Experience:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Technology, or a related quantitative field.
- 15+ years of experience in data engineering, data architecture, and/or data warehousing.
- 5+ years of experience in a leadership role, managing data engineering teams and driving large-scale data initiatives.
- Proven track record of successfully leading the stabilization, modernization, and scaling of complex data platforms.
- Deep expertise in modern data architecture patterns (data lakes, data warehouses, lakehouses, Lambda/Kappa architectures).
- Extensive hands-on experience with cloud data platforms (AWS, Azure, or GCP) and their associated data services (e.g., S3/ADLS/GCS, Redshift/Synapse/BigQuery, EMR/Dataproc/Databricks, Kinesis/Kafka/Event Hubs, Glue/Data Factory/Dataflow).
- Strong experience with big data technologies (e.g., Spark, the Hadoop ecosystem) and data processing frameworks.
- Proficiency with data pipeline orchestration tools (e.g., Airflow, Prefect, Dagster).
- Solid understanding of SQL and NoSQL databases, data modeling techniques, and ETL/ELT development.
- Experience with programming languages commonly used in data engineering (e.g., Python, Scala, Java).
- Excellent understanding of data governance, data security, and data privacy principles and best practices.
- Exceptional leadership, communication, stakeholder management, and strategic thinking skills.
- Demonstrated ability to translate business requirements into technical solutions.

Posted 3 weeks ago

4.0 - 6.0 years

5 - 13 Lacs

Pune

Hybrid

Job Description:
This position is for a Cloud Data Engineer with a background in Python, dbt, SQL, and data warehousing for enterprise-level systems.

Major Responsibilities:
- Adhere to standard coding principles and standards.
- Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
- Design, develop, and deploy Python scripts and ETL processes in an ADF environment to process and analyze varying volumes of data.
- Experience with DWH, data integration, cloud, design, and data modeling.
- Proficient in developing programs in Python and SQL.
- Experience with data warehouse dimensional data modeling.
- Work with event-based/streaming technologies to ingest and process data.
- Work with structured, semi-structured, and unstructured data.
- Optimize ETL jobs for performance and scalability to handle big data workloads.
- Monitor and troubleshoot ADF jobs; identify and resolve issues or bottlenecks.
- Implement best practices for data management, security, and governance within the Databricks environment.
- Experience designing and developing enterprise data warehouse solutions.
- Proficient in writing SQL queries and programming, including stored procedures and reverse-engineering existing processes.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
- Check in and check out code, peer-review, and merge PRs into the Git repo.
- Knowledge of deployment of packages and code migrations to stage and prod environments via CI/CD pipelines.

Skills:
- 3+ years of Python coding experience.
- 5+ years of SQL Server-based development of large datasets.
- 5+ years of experience developing and deploying ETL pipelines using Databricks PySpark (see the sketch after this listing).
- Experience in any cloud data warehouse, such as Synapse, ADF, Redshift, or Snowflake.
- Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
- Previous experience leading an enterprise-wide cloud data platform migration, with strong architectural and design skills.
- Experience with cloud-based data architectures, messaging, and analytics.
- Cloud certification(s).

Add-ons: Any experience with Airflow, AWS Lambda, AWS Glue, and Step Functions is a plus.
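The Databricks PySpark pipelines this posting asks about typically follow a read-clean-write shape. Here is a minimal hedged sketch; the mount paths and column names are invented, and the Delta output assumes a Databricks runtime with Delta Lake available.

```python
# Minimal Databricks-style PySpark transformation: raw CSV -> cleaned Delta table.
# Paths and column names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("/mnt/raw/orders/")                       # landing zone
)

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .dropDuplicates(["order_id"])               # idempotent re-runs
       .filter(F.col("amount").isNotNull())
)

# Delta format gives ACID upserts and time travel on Databricks.
(cleaned.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("/mnt/curated/orders/"))
```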

Posted 3 weeks ago

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description:
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics, and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description:
Experian is seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. In this role, you will be responsible for developing robust data software applications, scalable data platforms, and innovative data products. You will collaborate closely with cross-functional teams, including application engineers, data scientists, and product stakeholders, to design, build, and implement high-impact solutions that drive business value.

Key Responsibilities:
- Lead the design, development, testing, and deployment of data solutions and pipelines.
- Collaborate with stakeholders to ensure technology solutions meet business requirements and optimize resource costs.
- Contribute to engineering standards, data modeling, and operational readiness.
- Provide technical leadership and mentorship to peers and junior engineers.
- Communicate project status and technical concepts clearly to both technical and non-technical audiences based in the U.S. and locally.
- Ensure effective change management and solution adoption through documentation, training, and knowledge transfer.
- Manage multiple priorities in a fast-paced, agile environment.
- Continuously improve processes, tools, and methodologies based on industry best practices.

Qualifications:
- 5 to 7 years of experience in data engineering or related roles.
- Strong expertise in cloud development, particularly with AWS services such as Aurora PostgreSQL RDS, Redshift, S3, EC2, Lambda, SQS, and SNS (a sketch follows this listing).
- Hands-on experience with AWS Glue, Amazon Data Firehose, EMR, and Athena in high-volume environments.
- Expert in SQL and proficient in PySpark and Python.
- Deep understanding of data platform paradigms and software/data architecture.
- Understanding of data modeling principles and data warehousing techniques is essential.
- Proven ability to troubleshoot and resolve complex performance issues utilizing tools such as Datadog, CloudWatch, Splunk, DBeaver, and DataGrip.
- Strong problem-solving, communication, and time management skills.
- Experience working in agile, multi-project environments.

Preferred Skills:
- Working knowledge of Apache Iceberg, AWS Lake Formation, and AWS DataZone.
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with data governance frameworks, security, and compliance standards.

Additional Information:
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on.

Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability, or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian by clicking here.
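One common pattern with the Lambda/S3/Redshift stack named above is an event-driven COPY. The sketch below uses the boto3 Redshift Data API; the cluster identifier, IAM role, and staging table are placeholders, and real code would add error handling and idempotency.

```python
# Sketch of an AWS Lambda handler that reacts to new S3 objects by issuing a
# Redshift COPY through the Redshift Data API. Cluster, role, and table names
# are hypothetical.
import boto3

redshift_data = boto3.client("redshift-data")

def handler(event, context):
    for record in event["Records"]:          # S3 event notification payload
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        redshift_data.execute_statement(
            ClusterIdentifier="analytics-cluster",
            Database="analytics",
            DbUser="loader",
            Sql=f"""
                COPY staging.events
                FROM 's3://{bucket}/{key}'
                IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
                FORMAT AS PARQUET;
            """,
        )
```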

Posted 3 weeks ago

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description:
Are you passionate about building data-driven solutions to drive the profitability of the business? Are you excited about solving complex real-world problems? Do you have proven analytical capabilities, exceptional communication and project management skills, and the ability to multi-task and thrive in a fast-paced environment? Join us as a Business Intelligence Engineer to deliver analytics solutions for European Payment Products at Amazon.

The European Payment Products team creates and manages a global portfolio of products, including co-branded credit cards, instalment financing, third-party redemptions, and financial services marketplaces. Within this team, we are looking for a Business Intelligence Engineer responsible for performing analysis on large volumes of data, which includes building data pipelines and dashboards, synthesizing the analysis into business insights, and communicating the findings to various stakeholders. To be eligible, the candidate must possess superior written and verbal communication skills in addition to proven analytical skills.

Key job responsibilities:
- Identify, develop, and execute data analysis to uncover areas of business opportunity.
- Learn and understand a broad range of Amazon's data resources and know how, when, and which data sources to use.
- Deep-dive into massive data sets, build data pipelines using SQL, and build dashboards using QuickSight (an example query follows this listing).
- Present insights and recommendations to key stakeholders in both verbal and written form.
- Manage and execute entire projects from start to finish, including problem solving, data gathering and manipulation, and project management.

Basic Qualifications:
- Bachelor's degree in BI, finance, engineering, statistics, computer science, mathematics, or an equivalent quantitative field.
- 5+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience with data visualization using Tableau, QuickSight, or similar tools.
- Experience with statistical analysis packages such as R, SAS, and MATLAB.
- Strong analytical skills: the ability to start from ambiguous problem statements, identify and access relevant data, make appropriate assumptions, perform insightful analysis, and draw conclusions relevant to the business problem.
- Communication skills: demonstrated ability to communicate complex technical problems in simple, plain stories, and to present information professionally and concisely with supporting data.
- Ability to work in a fast-paced business environment, with a demonstrated track record of project delivery for large, cross-functional projects with evolving requirements.

Preferred Qualifications:
- Knowledge of data modeling and data pipeline design.
- Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability).
- Credit or risk management experience.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company: ADCI - Karnataka
Job ID: A2997571
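The SQL-pipeline-to-QuickSight work described here often begins with activity metrics such as DAU and a rolling average. An illustrative Redshift-dialect query follows; the user_events table is hypothetical.

```python
# Daily active users with a rolling 7-day average, Redshift dialect.
# The user_events table and its columns are hypothetical.
DAU_SQL = """
WITH daily AS (
    SELECT event_date, COUNT(DISTINCT user_id) AS dau
    FROM user_events
    GROUP BY event_date
)
SELECT event_date,
       dau,
       AVG(dau) OVER (
           ORDER BY event_date
           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW
       ) AS dau_7d_avg
FROM daily
ORDER BY event_date;
"""

# Materialize this as a view or scheduled query and point a QuickSight
# dataset at the result.
```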

Posted 3 weeks ago

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Roles and Responsibilities:
- Data Pipeline Development: Design, develop, and maintain scalable data pipelines to support ETL (Extract, Transform, Load) processes using tools like Apache Airflow, AWS Glue, or similar.
- Database Management: Design, optimize, and manage relational and NoSQL databases (such as MySQL, PostgreSQL, MongoDB, or Cassandra) to ensure high performance and scalability.
- SQL Development: Write advanced SQL queries, stored procedures, and functions to extract, transform, and analyze large datasets efficiently.
- Cloud Integration: Implement and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud, utilizing services like Redshift, BigQuery, or Snowflake.
- Data Warehousing: Contribute to the design and maintenance of data warehouses and data lakes to support analytics and BI requirements.
- Programming and Automation: Develop scripts and applications in Python or other programming languages to automate data processing tasks.
- Data Governance: Implement data quality checks, monitoring, and governance policies to ensure data accuracy, consistency, and security (see the sketch after this listing).
- Collaboration: Work closely with data scientists, analysts, and business stakeholders to understand data needs and translate them into technical solutions.
- Performance Optimization: Identify and resolve performance bottlenecks in data systems and optimize data storage and retrieval.
- Documentation: Maintain comprehensive documentation for data processes, pipelines, and infrastructure.
- Stay Current: Keep up to date with the latest trends and advancements in data engineering, big data technologies, and cloud services.

Required Skills and Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Technical Skills: Proficiency in SQL and relational databases (PostgreSQL, MySQL, etc.). Experience with NoSQL databases (MongoDB, Cassandra, etc.). Strong programming skills in Python; familiarity with Java or Scala is a plus. Experience with data pipeline tools (Apache Airflow, Luigi, or similar). Expertise in cloud platforms (AWS, Azure, or Google Cloud) and data services (Redshift, BigQuery, Snowflake). Knowledge of big data tools like Apache Spark, Hadoop, or Kafka is a plus.
- Data Modeling: Experience in designing and maintaining data models for relational and non-relational databases.
- Analytical Skills: Strong analytical and problem-solving abilities with a focus on performance optimization and scalability.
- Soft Skills: Excellent verbal and written communication skills to convey technical concepts to non-technical stakeholders. Ability to work collaboratively in cross-functional teams.
- Certifications (Preferred): AWS Certified Data Analytics, Google Professional Data Engineer, or similar.
- Mindset: Eagerness to learn new technologies and adapt quickly in a fast-paced environment.
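The data governance bullet above, "implement data quality checks", can be as simple as a validation gate before a load. A toy pandas sketch follows; the column names and thresholds are invented.

```python
# Simple data-quality gate: validate a pandas frame before loading it
# downstream. Column names and thresholds are illustrative.
import pandas as pd

def validate_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures (empty = pass)."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        failures.append("negative amounts")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:                     # tolerate at most 1% missing
        failures.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
    return failures

orders = pd.DataFrame(
    {"order_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5], "customer_id": [9, None, 11]}
)
problems = validate_orders(orders)
if problems:
    raise ValueError("data-quality check failed: " + "; ".join(problems))
```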

Posted 3 weeks ago

6.0 years

0 Lacs

India

On-site

Role: AWS BI Architect
Location: Hyderabad/Bengaluru/Chennai/Pune
Notice Period: Immediate

We are looking for a seasoned AWS BI Architect with 6+ years of experience.

Key Responsibilities:
- Design scalable data platform and analytics architectures.
- Lead technical design discussions and ensure successful implementation.
- Create clear, detailed documentation and collaborate with stakeholders.

What We're Looking For:
- 6+ years of experience in BI and data architecture.
- Strong hands-on expertise in Amazon Redshift (Serverless and/or Provisioned).
- Experience with Redshift performance tuning, RPU sizing, workload isolation, and concurrency scaling.
- Familiarity with Redshift Data Sharing and cross-cluster access (example statements follow this listing).
- Background in BI/reporting, with a strong preference for MicroStrategy.
- Excellent communication and documentation skills.

Please share your profile with us only if you match the JD: hiring@khey-digit.com
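Redshift data sharing, called out in this listing, is configured with a handful of SQL statements on the producer and consumer sides. The statements below follow standard Redshift syntax, but the share name, schemas, and namespace GUIDs are placeholders.

```python
# Redshift data-sharing setup, as SQL statements held in Python strings.
# Namespace GUIDs and object names are placeholders.
PRODUCER_SQL = """
CREATE DATASHARE sales_share;
ALTER DATASHARE sales_share ADD SCHEMA sales;
ALTER DATASHARE sales_share ADD ALL TABLES IN SCHEMA sales;
GRANT USAGE ON DATASHARE sales_share
  TO NAMESPACE '11111111-2222-3333-4444-555555555555';
"""

CONSUMER_SQL = """
CREATE DATABASE sales_ro
  FROM DATASHARE sales_share
  OF NAMESPACE 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';
-- Cross-database query against the shared objects:
SELECT COUNT(*) FROM sales_ro.sales.orders;
"""
```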

Posted 3 weeks ago

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Talworx is hiring for a Tableau + Redshift Developer.
Experience: 6 to 9 years

Must have:
- Experience gathering and analyzing data requirements
- In-depth AWS and Tableau knowledge, with work experience
- Good knowledge of ERP (SAP, Oracle) systems and how data is organized inside ERP systems
- In-depth knowledge of data querying
- Good Excel skills
- Excellent PPT presentation skills
- Good working knowledge of the business
- Working knowledge of data semantics

Role & Responsibilities:
We are looking for a Data Analyst to serve as the interface between the data consumers and the technical teams that will build and maintain the data models in Intel DS / SE INTEL.

Main responsibilities:
- Run support: support business data-related issues.
- Data discovery: identify and locate the data sources that best satisfy the requirements.
- Data documentation: analyze new data requirements and document the corresponding data model improvements so that they can be implemented by technical teams. Design conceptual and logical data models and maintain related data flow maps, glossaries, and rulebooks in SE systems.
- Communication: effective verbal and written communication; design and prepare PPTs for monthly and quarterly meetings.

Posted 3 weeks ago

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

P2-C3-TSTS

AWS Data Engineer:
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency (see the helper sketch after this listing).

Skill proficiency expected: AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks.
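"Build reusable frameworks" in this stack often means wrapping recurring operations such as the S3-to-Redshift COPY. A hedged sketch follows, using psycopg2 against Redshift's PostgreSQL-compatible endpoint; the connection details, table, and IAM role are hypothetical.

```python
# A tiny reusable helper: wrap the standard S3 -> Redshift COPY in one
# function. Endpoint, table, and role names are hypothetical.
import psycopg2

def copy_s3_to_redshift(conn, table: str, s3_prefix: str, iam_role: str) -> None:
    """Issue a COPY from an S3 prefix into a Redshift table."""
    sql = f"""
        COPY {table}
        FROM '{s3_prefix}'
        IAM_ROLE '{iam_role}'
        FORMAT AS PARQUET;
    """
    with conn.cursor() as cur:
        cur.execute(sql)
    conn.commit()

conn = psycopg2.connect(
    host="example.redshift.amazonaws.com", port=5439,
    dbname="warehouse", user="loader", password="...",
)
copy_s3_to_redshift(
    conn,
    table="staging.transactions",
    s3_prefix="s3://example-lake/curated/transactions/",
    iam_role="arn:aws:iam::123456789012:role/redshift-copy",
)
```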

Posted 3 weeks ago

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Work Mode: Hybrid
Work Location: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon
Work Timing: 2 PM to 11 PM
Primary: Data Engineer

AWS Data Engineer:
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill proficiency expected: AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks.

Posted 3 weeks ago

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Work Mode: Hybrid
Work Location: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon
Work Timing: 2 PM to 11 PM
Primary: Data Engineer

AWS Data Engineer:
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill proficiency expected: AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks.

Posted 3 weeks ago

6.0 years

0 Lacs

Hyderābād

On-site

Job requisition ID: 77965
Date: Jul 14, 2025
Location: Hyderabad
Designation: Senior Consultant
Entity: Deloitte Touche Tohmatsu India LLP

Education:
Bachelor's degree in a relevant field (e.g., Engineering, Analytics or Data Science, Computer Science, Statistics) or equivalent experience.

Experience:
- At least 6 years of experience with big data technologies like Azure Data Lake, Synapse, PySpark, Azure Data Factory (ADF), AWS Redshift, S3, SQL Server, MLOps, or their equivalents.
- Experience implementing complex ETL pipelines and their day-to-day operations.
- 3+ years of experience in Agile development, code deployment, and CI/CD pipelines.
- 2+ years of experience in job orchestration using Airflow or equivalent.
- 2+ years in AI/ML, especially Gen AI concepts such as RAG patterns and chunking techniques (a minimal chunker follows this listing). Exposure to knowledge graphs is a plus.
- Build, design, and deliver enterprise data programs.
- Proficiency in implementing data quality rules.
- Proficiency in analytical tools like Tableau, Power BI, or equivalent.
- Experience with security models and development on large data sets.
- Experience with data quality management tools.
- Work closely with different stakeholders (business owners, users, product managers, program managers, architects, engineering managers, developers, etc.) to translate business needs and product requirements into well-documented engineering solutions.
- Ensure data quality and consistency across various sources.
- Strong working knowledge of Python.
- Design and contribute to best practices in Enterprise Data Warehouse (EDW) architecture.

Additional Desired Preferences:
- Experience with scientific chemistry nomenclature, prior work experience in life sciences, chemistry, or hard sciences, or a degree in the sciences.
- Experience with pharmaceutical datasets and nomenclature.
- Experience working with knowledge graphs.
- Ability to explain complex technical issues to a non-technical audience.
- Self-directed and able to handle multiple concurrent projects and prioritize tasks independently.
- Able to make tough decisions when trade-offs are required to deliver results.
- Strong communication skills required: verbal, written, and interpersonal.
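The chunking techniques mentioned under the Gen AI experience are, at their simplest, overlapping word windows prepared for embedding. The sketch below is a generic illustration, not Deloitte's method; the window sizes are arbitrary defaults.

```python
# Split a document into overlapping word-window chunks ready for embedding in
# a RAG pipeline. Window sizes are illustrative defaults.
def chunk_text(text: str, chunk_words: int = 200, overlap_words: int = 40) -> list[str]:
    """Split text into overlapping chunks measured in words."""
    words = text.split()
    chunks = []
    step = chunk_words - overlap_words
    for start in range(0, len(words), step):
        chunk = " ".join(words[start:start + chunk_words])
        if chunk:
            chunks.append(chunk)
        if start + chunk_words >= len(words):
            break
    return chunks

doc = "word " * 450                      # stand-in for a real document
print(len(chunk_text(doc)))              # -> 3 overlapping chunks
```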

Posted 3 weeks ago

5.0 - 7.0 years

7 - 8 Lacs

Hyderābād

On-site

Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description Experian is seeking a highly skilled and motivated Senior Data Engineer to join our dynamic team. In this role, you will be responsible for developing robust data software applications, scalable data platforms, and innovative data products. You will collaborate closely with cross-functional teams—including application engineers, data scientists, and product stakeholders—to design, build, and implement high-impact solutions that drive business value. Key Responsibilities: Lead the design, development, testing, and deployment of data solutions and pipelines. Collaborate with stakeholders to ensure technology solutions meet business requirements and optimize resource costs. Contribute to engineering standards, data modeling, and operational readiness. Provide technical leadership and mentorship to peers and junior engineers. Communicate project status and technical concepts clearly to both technical and non-technical audiences based in the U.S. and locally. Ensure effective change management and solution adoption through documentation, training, and knowledge transfer. Manage multiple priorities in a fast-paced, agile environment. Continuously improve processes, tools, and methodologies based on industry best practices. Qualifications 5 to 7 years of experience in data engineering or related roles. Strong expertise in cloud development, particularly with AWS services such as Aurora PostgreSQL RDS, Redshift, S3, EC2, Lambda, SQS, and SNS. Hands-on experience with AWS Glue, Amazon Data Firehose, EMR, and Athena in high-volume environments. Expert in SQL and proficient in PySpark and Python. Deep understanding of data platform paradigms and software/data architecture. Understanding of data modeling principles and data warehousing techniques is essential. Proven ability to troubleshoot and resolve complex performance issues utilizing tools such as DataDog, CloudWatch, Splunk, dBeaver, and DataGrip Strong problem-solving, communication, and time management skills. Experience working in agile, multi-project environments. Preferred Skills: Working knowledge of Apache Iceberg, AWS Lake Formation and AWS DataZone. Experience with CI/CD pipelines and DevOps practices. Familiarity with data governance frameworks, security, and compliance standards. Additional Information Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters; DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. 
Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian on our Careers Site.
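For a sense of the day-to-day work this stack implies, here is a minimal sketch of an AWS Glue PySpark job of the kind the qualifications describe. The catalog database, table, columns, and bucket are hypothetical placeholders, not details from the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job boilerplate: resolve the job name and set up contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events from the Glue Data Catalog (hypothetical database/table).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Light cleanup: drop rows without a key and derive a partition date.
df = (
    raw.toDF()
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

# Write curated output to S3 as date-partitioned Parquet (hypothetical bucket).
df.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)

job.commit()
```

In a Redshift-backed setup like the one listed, a job of this shape would typically land Parquet that a COPY command or a Spectrum external table then exposes to the warehouse.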

Posted 3 weeks ago

Apply

5.0 years

6 - 9 Lacs

Hyderābād

On-site

Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever. Join Micron’s ambitious Global Facilities SMART Facilities team, where you will play a pivotal role in transforming data into actionable insights to optimize our world-class facilities! We are seeking a dynamic and innovative manager to lead our efforts and develop our team members. Responsibilities: Lead and manage a team of data scientists and data engineers, encouraging a collaborative and innovative environment. Develop and implement data strategies that support the company's global facilities operations. Create, build, and maintain data pipelines to process and analyze large volumes of facilities data. Design and deploy machine learning models and data analytics tools to optimize facilities management and operations. Collaborate with cross-functional teams to integrate data solutions into existing systems. Design, develop, deploy, and maintain AI solutions that provide operational benefits. Minimum Qualifications: Bachelor’s or Master’s degree in Data Science, Computer Science, Engineering, or a related field. Minimum of 5 years of experience in data science and data engineering roles. Proven track record of leading and managing high-performing teams. Excellent communication and interpersonal skills. Strong problem-solving and analytical abilities. Preferred Qualifications: Experience with data warehousing solutions like Amazon Redshift, Google BigQuery, or Snowflake. Proficiency in programming languages such as Python, Java, and SQL. Knowledge of sophisticated analytics and predictive modeling. Familiarity with cloud computing platforms such as AWS, Azure, and Google Cloud. Understanding of big data technologies and frameworks like Hadoop and Spark. This is an outstanding chance to create a significant impact on the efficiency and effectiveness of Micron’s global facilities through the power of data. About Micron Technology, Inc. We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com. Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences. Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification. Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
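As a rough, hypothetical illustration of the facilities analytics this team describes, the sketch below flags anomalous sensor readings with a rolling z-score in pandas. The column names, window size, and threshold are assumptions for illustration, not Micron specifics.

```python
import numpy as np
import pandas as pd

def flag_anomalies(readings: pd.DataFrame, window: int = 96, threshold: float = 3.0) -> pd.DataFrame:
    """Mark readings that deviate from the rolling mean by more than
    `threshold` rolling standard deviations."""
    rolling = readings["value"].rolling(window=window, min_periods=window)
    zscore = (readings["value"] - rolling.mean()) / rolling.std()
    out = readings.copy()
    out["anomaly"] = zscore.abs() > threshold
    return out

# Hypothetical 15-minute chiller power readings with one injected spike.
rng = np.random.default_rng(42)
values = rng.normal(100.0, 2.0, 500)
values[-1] = 160.0
df = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01", periods=500, freq="15min"),
    "value": values,
})
print(flag_anomalies(df).tail(3))
```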

Posted 3 weeks ago

Apply

7.0 years

7 - 9 Lacs

Hyderābād

Remote

Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us. As a Data Engineer based out of BMS Hyderabad, you are part of the Data Platform team and support the larger Data Engineering community that delivers data and analytics capabilities across different IT functional domains. The ideal candidate will have a strong background in data engineering, DataOps, and cloud-native services, and will be comfortable working with both structured and unstructured data. Key Responsibilities The Data Engineer will be responsible for designing, building, and maintaining ETL pipelines and data products, evolving those products over time, and applying the most suitable data architecture for our organization's data needs. Responsible for delivering high-quality data products and analytics-ready data solutions Work with an end-to-end ownership mindset, innovate and drive initiatives through completion. Develop and maintain data models to support our reporting and analysis needs Optimize data storage and retrieval to ensure efficient performance and scalability Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements Ensure data quality and integrity through data validation and testing Implement and maintain security protocols to protect sensitive data Stay up to date with emerging trends and technologies in data engineering and analytics Closely partner with the Enterprise Data and Analytics Platform team, other functional data teams and the Data Community lead to shape and adopt data and technology strategy. Serves as the Subject Matter Expert on Data & Analytics Solutions. Knowledgeable in evolving trends in data platforms and product-based implementation Has an end-to-end ownership mindset in driving initiatives through completion Comfortable working in a fast-paced environment with minimal oversight Mentors other team members effectively to unlock full potential Prior experience working in an Agile/Product-based environment Qualifications & Experience 7+ years of hands-on experience implementing and operating data capabilities and cutting-edge data solutions, preferably in a cloud environment. Breadth of experience in technology capabilities that span the full life cycle of data management, including data lakehouses, master/reference data management, data quality, and analytics/AI-ML, is needed. In-depth knowledge and hands-on experience with AWS Glue services and the AWS data engineering ecosystem.
Hands-on experience developing and delivering data and ETL solutions with technologies such as AWS data services (Redshift, Athena, Lake Formation, etc.); experience with Cloudera Data Platform and Tableau is a plus 5+ years of experience in data engineering or software development Create and maintain optimal data pipeline architecture, assemble large, complex data sets that meet functional/non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Strong programming skills in languages and libraries such as Python, R, PyTorch, PySpark, Pandas, and Scala Experience with SQL and database technologies such as MySQL, PostgreSQL, Presto, etc. Experience with cloud-based data technologies such as AWS, Azure, or Google Cloud Platform Strong analytical and problem-solving skills Excellent communication and collaboration skills Functional knowledge or prior experience in the Life Sciences Research and Development domain is a plus Experience and expertise in establishing agile and product-oriented teams that work effectively with teams in the US and at other global BMS sites. Initiates challenging opportunities that build strong capabilities for self and team Demonstrates a focus on improving processes, structures, and knowledge within the team. Leads in analyzing current states, delivers strong recommendations that account for complexity in the environment, and executes to bring complex solutions to completion. If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role: Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer.
If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
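One responsibility above, ensuring data quality and integrity through validation and testing, lends itself to a short sketch. The dataset path, columns, and rules below are hypothetical; the point is the pattern of failing a pipeline run before bad data propagates downstream.

```python
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical curated dataset; in a Glue job this could equally come from the catalog.
df = spark.read.parquet("s3://example-lake/curated/subjects/")

failures = []

# Rule 1: the primary key must be non-null and unique.
total = df.count()
distinct_ids = (
    df.filter(F.col("subject_id").isNotNull())
      .select("subject_id").distinct().count()
)
if distinct_ids != total:
    failures.append(f"subject_id not unique/non-null ({distinct_ids} of {total})")

# Rule 2: enrollment dates must not lie in the future.
future_rows = df.filter(F.col("enrollment_date") > F.current_date()).count()
if future_rows:
    failures.append(f"{future_rows} rows with a future enrollment_date")

# Fail loudly rather than loading questionable data downstream.
if failures:
    raise ValueError("Data quality checks failed: " + "; ".join(failures))
```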

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role: Data Engineer Work Mode: Hybrid Work timings: 2pm to 11pm Location: Chennai & Hyderabad Primary Skills: AWS Data Engineer Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3. Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes. Create and manage applications using Python, SQL, Databricks, and various AWS technologies. Automate repetitive tasks and build reusable frameworks to improve efficiency. Skill proficiency expected: AWS Glue, Amazon Redshift, S3, ETL processes, SQL, Databricks.
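To make the S3-to-Redshift loading step this role describes concrete, here is a minimal sketch using the Redshift Data API via boto3. The cluster, database, IAM role, table, and bucket names are invented for illustration.

```python
import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

# COPY is the idiomatic bulk-load path from S3 into Redshift (hypothetical names).
copy_sql = """
    COPY analytics.page_views
    FROM 's3://example-raw-bucket/page_views/2025/07/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load-role'
    FORMAT AS PARQUET;
"""

response = client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="etl_user",
    Sql=copy_sql,
)
print("Submitted statement:", response["Id"])
```

The Data API is asynchronous, so a production pipeline would poll describe_statement with the returned Id before moving to the next step.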

Posted 3 weeks ago

Apply

2.0 - 3.0 years

0 Lacs

Cochin

On-site

Job Title - Data Engineer Sr. Analyst ACS Song Management Level: Level 10 - Sr. Analyst Location: Kochi, Coimbatore, Trivandrum Must have skills: Python/Scala, PySpark/PyTorch Good to have skills: Redshift Job Summary You’ll capture user requirements and translate them into business and digitally enabled solutions across a range of industries. Your responsibilities will include: Roles and Responsibilities Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals Solving complex data problems to deliver insights that help our business achieve its goals Sourcing data (structured/unstructured) from various touchpoints and formatting and organizing it into an analyzable form Creating data products for analytics team members to improve productivity Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions Preparing data to create a unified database and building tracking solutions that ensure data quality Creating production-grade analytical assets deployed using the guiding principles of CI/CD Professional and Technical Skills Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript Extensive experience in data analysis (big data/Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL. 2-3 years of hands-on experience working on these technologies. Experience in one of the major BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake cloud data warehouse. Additional Information Experience working in cloud data warehouses like Redshift or Synapse Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; Snowflake SnowPro Core; Databricks Data Engineering About Our Company | Accenture Experience: 3.5-5 years of experience is required Educational Qualification: Graduation
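The "calling AI services" responsibility could look like the pipeline step below, sketched with AWS Translate via boto3 since the posting's cloud list includes AWS Lambda, Glue, and S3. The languages and the wrapper function are illustrative assumptions.

```python
import boto3

translate = boto3.client("translate", region_name="us-east-1")

def to_english(text: str) -> str:
    """Translate free text to English so downstream pipeline steps see one language."""
    result = translate.translate_text(
        Text=text,
        SourceLanguageCode="auto",   # let the service detect the source language
        TargetLanguageCode="en",
    )
    return result["TranslatedText"]

# A pipeline step would map this over a column of incoming records.
print(to_english("Produit excellent, livraison rapide."))
```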

Posted 3 weeks ago

Apply

18.0 years

5 - 10 Lacs

Gurgaon

On-site

Senior Assistant Vice President EXL/SAVP/1393076 Services, Gurgaon Posted On 14 Jul 2025 End Date 28 Aug 2025 Required Experience 18 - 28 Years Basic Section Number Of Positions 1 Band D2 Band Name Senior Assistant Vice President Cost Code D014685 Campus/Non Campus NON CAMPUS Employment Type Permanent Requisition Type New Max CTC 2500000.0000 - 3500000.0000 Complexity Level Not Applicable Work Type Hybrid – Working Partly From Home And Partly From Office Organisational Group Analytics Sub Group Analytics - UK & Europe Organization Services LOB Services SBU Analytics Country India City Gurgaon Center EXL - Gurgaon Center 38 Skills SQL, Python Minimum Qualification BCOM Certification No data available Job Description Position Summary: We are seeking a highly experienced and visionary Principal Data Engineer with over 20 years of industry experience to lead the design, development, and optimization of our enterprise-scale data infrastructure. This role is ideal for a seasoned data professional who thrives at the intersection of technology strategy, architecture, and hands-on engineering. You will drive innovation, mentor engineering teams, and collaborate with cross-functional stakeholders to enable data-driven decision-making across the organization. Key Responsibilities: Architecture & Strategy Define and own the data engineering roadmap, architecture, and data platform strategy. Evaluate and implement scalable, secure, and cost-effective data technologies. Lead the transition to modern data platforms (e.g., cloud, data lakehouse, streaming architecture). Engineering Leadership Lead end-to-end design and development of robust ETL/ELT pipelines, data lakes, and real-time streaming systems. Provide hands-on guidance on data modeling, pipeline optimization, and system integration. Oversee deployment and automation of data pipelines using CI/CD, orchestration tools (e.g., Airflow), and infrastructure-as-code. Data Governance & Quality Define best practices for data governance, lineage, privacy, and security. Implement data quality frameworks and monitoring strategies. Cross-Functional Collaboration Work closely with data scientists, analysts, product owners, and engineering leaders to define data requirements. Serve as a thought leader and technical advisor to senior management and stakeholders. Mentorship & Team Development Mentor and coach senior engineers and technical leads. Lead architectural reviews, code reviews, and foster a culture of technical excellence. Required Qualifications: 20+ years of hands-on experience in data engineering, software engineering, or data architecture. Proven track record of building and scaling enterprise data platforms and systems. Deep expertise in SQL, Python/Scala/Java, Spark, and distributed data systems. Strong experience with cloud platforms (AWS, GCP, or Azure), especially cloud-native data tools (e.g., Redshift, BigQuery, Snowflake, Databricks). Experience with real-time streaming technologies (Kafka, Flink, Kinesis). Solid understanding of data modeling (dimensional, normalized, NoSQL). Familiarity with DevOps and MLOps practices in a data environment. Excellent communication skills and ability to influence at the executive level. Preferred Qualifications: Master’s or Ph.D. in Computer Science, Data Engineering, or a related field. Experience in regulated industries (e.g., finance, healthcare). Prior experience in global organizations and managing geographically distributed teams. Workflow Type Back Office
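Since the role calls out orchestration with tools like Airflow, a minimal DAG sketch of the extract-transform-load shape is shown below. The task bodies, DAG id, and schedule are placeholders, with Airflow 2.4+ syntax assumed.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from source systems")

def transform():
    print("apply transformations and data quality checks")

def load():
    print("publish curated tables to the warehouse")

with DAG(
    dag_id="example_daily_elt",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    # Linear dependency chain: extract -> transform -> load.
    (
        PythonOperator(task_id="extract", python_callable=extract)
        >> PythonOperator(task_id="transform", python_callable=transform)
        >> PythonOperator(task_id="load", python_callable=load)
    )
```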

Posted 3 weeks ago

Apply

3.0 years

6 - 9 Lacs

Ahmedabad, Gujarat

Remote

Job Title: Power BI Developer Location: Ahmedabad, Gujarat (Preferred) Experience Required: 3+ Years Employment Type: Full-time (Immediate Joiners Preferred) About IGNEK: At IGNEK, we specialize in remote staff augmentation and custom development solutions, offering expert teams in technologies like Liferay, AEM, Java, React, and Node.js. We help global clients meet their project goals efficiently by delivering innovative and scalable digital solutions. Job Summary: We’re looking for an experienced Power BI Developer to join our analytics team at IGNEK. The ideal candidate will be responsible for transforming complex data into visually impactful dashboards and providing actionable insights for data-driven decision-making. Key Responsibilities: Develop, maintain, and optimize interactive Power BI dashboards and reports. Write complex SQL queries to extract, clean, and join data from multiple sources including data warehouses and APIs. Understand business requirements and collaborate with cross-functional teams to deliver scalable BI solutions. Ensure data accuracy and integrity across all reporting outputs. Create robust data models and DAX measures within Power BI. Work with data engineers and analysts to streamline data pipelines. Maintain documentation for all dashboards, definitions, and processes. (Optional) Use Python for automation, data manipulation, or API integration. Requirements: 3+ years of experience in BI or Analytics roles. Strong expertise in Power BI, including DAX, Power Query, and data modeling. Advanced SQL skills and experience with relational databases or cloud data warehouses (e.g., SQL Server, Redshift, Snowflake). Understanding of ETL processes and data quality management. Ability to communicate data-driven insights effectively to stakeholders. Bonus: Working knowledge of Python for scripting or automation. Preferred Qualifications: Hands-on experience with Power BI Service, Power BI Gateway, or Azure. Exposure to agile methodologies and collaborative development teams. Familiarity with key business metrics across functions like sales, operations, or finance. How to Apply: Please send your resume and a cover letter detailing your experience to Job Type: Full-time Pay: ₹600,000.00 - ₹900,000.00 per year Benefits: Flexible schedule Leave encashment Provident Fund Work from home Work Location: In person
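For the optional Python-for-automation skill this posting mentions, here is a small sketch of extracting dashboard-ready data with SQL and shaping it for a report. The connection string, schema, and column names are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection; real credentials belong in a secrets store.
engine = create_engine("postgresql://user:pass@warehouse.example.com/analytics")

# A typical reporting extract: revenue by region over a rolling 90 days.
query = """
    SELECT o.order_date::date AS order_date,
           r.region_name,
           SUM(o.amount)      AS revenue
    FROM   sales.orders o
    JOIN   dim.regions r ON r.region_id = o.region_id
    WHERE  o.order_date >= CURRENT_DATE - INTERVAL '90 days'
    GROUP  BY 1, 2
"""
df = pd.read_sql(query, engine)

# Save a tidy extract that a Power BI dataset can refresh from.
df.to_csv("revenue_by_region_90d.csv", index=False)
```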

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
