6.0 - 8.0 years
13 - 23 Lacs
Bengaluru
Hybrid
Job description
Primary skillsets:
• 5 years of hands-on experience in Informatica PowerCenter (PWC) ETL development
• 7 years of experience in SQL, analytical STAR-schema data modeling, and Informatica PowerCenter
• 5 years of Redshift, Oracle, or comparable database experience with BI/DW deployments
Secondary skillsets:
• Good to know: cloud services such as AWS
• Must have proven experience with STAR and SNOWFLAKE schema techniques
• Proven track record as an ETL developer, with the potential to grow into an Architect leading development teams to deliver successful business intelligence solutions with complex data sources
• Strong analytical skills; enjoys solving complex technical problems
• Knowledge of additional ETL tools such as Qlik Replicate
• End-to-end understanding of data, from ingestion through transformation to consumption in analytics, is a great benefit
Posted 1 week ago
5.0 - 7.0 years
10 - 15 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Analyst with excellent communication skills and deep expertise in SQL, Tableau, and modern data warehousing technologies. This role involves designing data models, building insightful dashboards, ensuring data quality, and extracting meaningful insights from large datasets to support strategic business decisions.
Key Responsibilities:
• Write advanced SQL queries to retrieve and manipulate data from cloud data warehouses such as Snowflake, Redshift, or BigQuery.
• Design and develop data models that support analytics and reporting needs.
• Build dynamic, interactive dashboards and reports using tools like Tableau, Looker, or Domo.
• Perform advanced analytics techniques including cohort analysis, time series analysis, scenario analysis, and predictive analytics.
• Validate data accuracy and perform thorough data QA to ensure high-quality output.
• Investigate and troubleshoot data issues; perform root cause analysis in collaboration with BI or data engineering teams.
• Communicate analytical insights clearly and effectively to stakeholders.
Required Skills & Qualifications:
• Excellent communication skills are mandatory for this role.
• 5+ years of experience in data analytics, BI analytics, or BI engineering roles.
• Expert-level skills in SQL, with experience writing complex queries and building views.
• Proven experience using data visualization tools like Tableau, Looker, or Domo.
• Strong understanding of data modeling principles and best practices.
• Hands-on experience working with cloud data warehouses such as Snowflake, Redshift, BigQuery, SQL Server, or Oracle.
• Intermediate-level proficiency with spreadsheet tools like Excel or Google Sheets, and with Power BI, including functions, pivots, and lookups.
• Bachelor's or advanced degree in a relevant field such as Data Science, Computer Science, Statistics, Mathematics, or Information Systems.
• Ability to collaborate with cross-functional teams, including BI engineers, to optimize reporting solutions.
• Experience in handling large-scale enterprise data environments.
• Familiarity with data governance, data cataloging, and metadata management tools (a plus but not required).
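For illustration, a minimal sketch of the kind of advanced warehouse SQL this role describes: a monthly cohort-retention query run from Python against Snowflake. The `orders` table, its columns, and the environment-variable credentials are assumptions, not part of the posting.

```python
import os
import snowflake.connector  # pip install snowflake-connector-python

# Cohort analysis: group users by the month of their first order, then count
# how many stay active in each subsequent month.
COHORT_SQL = """
WITH first_orders AS (
    SELECT user_id, DATE_TRUNC('month', MIN(order_date)) AS cohort_month
    FROM orders
    GROUP BY user_id
)
SELECT f.cohort_month,
       DATEDIFF('month', f.cohort_month, DATE_TRUNC('month', o.order_date)) AS month_offset,
       COUNT(DISTINCT o.user_id) AS active_users
FROM orders o
JOIN first_orders f USING (user_id)
GROUP BY 1, 2
ORDER BY 1, 2
"""

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],   # assumed env-var credentials
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
)
try:
    for cohort_month, month_offset, active_users in conn.cursor().execute(COHORT_SQL).fetchall():
        print(cohort_month, month_offset, active_users)
finally:
    conn.close()
```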
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities
• Build data pipelines to ingest, process, and transform data from files, streams, and databases.
• Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
• Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and big data technologies built on the platform.
• Develop streaming pipelines.
• Work with Hadoop/AWS ecosystem components (Apache Spark, Kafka, cloud computing, etc.) to implement scalable solutions that meet ever-increasing data volumes.
Preferred Education
Master's Degree
Required Technical And Professional Expertise
• Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
• Minimum 3 years of experience on Cloud Data Platforms on AWS.
• Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
• Good to excellent SQL skills.
• Exposure to streaming solutions and message brokers such as Kafka.
Preferred Technical And Professional Experience
• Certification in AWS, and Databricks or Cloudera Spark Certified Developer.
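As a rough illustration of the ingest-process-write pattern this posting describes, a minimal PySpark sketch; the S3 paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Ingest: raw JSON events from S3 (path is illustrative).
raw = spark.read.json("s3://example-bucket/raw/events/")

# Transform: parse timestamps, drop malformed rows, derive a date partition.
clean = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropna(subset=["event_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet for downstream Hive/Athena consumption.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"
)
```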
Posted 1 week ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Technology @Dream11:
Technology is at the core of everything we do. Our technology team helps us deliver a mobile-first experience across platforms (Android & iOS) while managing over 700 million rpm (requests per minute) at peak, with user concurrency of over 16.5 million. We have 190+ microservices written in Java and backed by the Vert.x framework. These work with isolated product features with discrete architectures to cater to the respective use cases. We work with terabytes of data, the infrastructure for which is built on top of Kafka, Redshift, Spark, Druid, etc., and it powers a number of use cases like Machine Learning and Predictive Analytics. Our tech stack is hosted on AWS, with distributed systems like Cassandra, Aerospike, Akka, VoltDB, Ignite, etc.
Your Role:
• Analyze requirements and design software solutions based on first design principles (e.g., Object-Oriented Design and Analysis, E-R modeling).
• Build resilient, event-driven microservices using a reactive Java-based framework, SQL and NoSQL datastores, caches, messaging, and big-data processing frameworks.
• Deploy and configure cloud-native software services on a public cloud.
• Operate and support software services in production on on-call schedules, using observability tools such as Datadog for logging, alerting, and monitoring.
Qualifiers:
• 3+ years of coding experience with at least one object-oriented programming language (preferably Java), relational databases, database modeling (E-R modeling), and SQL.
• Familiarity with NoSQL databases and caching frameworks preferred.
• Working experience with messaging frameworks such as Kafka or MQ.
• Familiarity with object-oriented design patterns.
• Working experience with AWS or any cloud infrastructure.
About Dream Sports:
Dream Sports is India’s leading sports technology company with 250 million users, housing brands such as Dream11, the world’s largest fantasy sports platform; FanCode, a premier sports content & commerce platform; and DreamSetGo, a sports experiences platform. Dream Sports is based in Mumbai and has a workforce of close to 1,000 ‘Sportans’. Founded in 2008 by Harsh Jain and Bhavit Sheth, Dream Sports’ vision is to ‘Make Sports Better’ for fans through the confluence of sports and technology. For more information: https://dreamsports.group/
Dream11 is the world’s largest fantasy sports platform with 230 million users playing fantasy cricket, football, basketball & hockey on it. Dream11 is the flagship brand of Dream Sports, India’s leading sports technology company, and has partnerships with several national & international sports bodies and cricketers.
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsibilities:
• Design, develop, and manage databases on the AWS cloud platform.
• Develop and maintain automation scripts or jobs to perform routine database tasks such as provisioning, backups, restores, and data migrations.
• Build and maintain automated testing frameworks for database changes and upgrades to minimize the risk of introducing errors.
• Implement self-healing mechanisms to automatically recover from database failures or performance degradation.
• Integrate database automation tools with CI/CD pipelines to enable continuous delivery and deployment of database changes.
• Collaborate with cross-functional teams to understand their data requirements and ensure that the databases meet their needs.
• Implement and manage database security policies, including access control, data encryption, and backup and recovery procedures.
• Ensure that database backups and disaster recovery procedures are in place and tested regularly.
• Develop and maintain database documentation, including data dictionaries, data models, and technical specifications.
• Stay up to date with the latest cloud technologies and trends, and evaluate new tools and products that could improve database performance and scalability.
Requirements: (Postgres/MySQL/SQL Server, AWS CloudFormation/CDK, Python)
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 3-6 years of experience in designing, building, and administering databases on the AWS cloud platform.
• Strong experience with Infrastructure as Code (CloudFormation/AWS CDK) and automation experience in Python.
• In-depth knowledge of AWS database services such as Amazon RDS, EC2, S3, Amazon Aurora, and Amazon Redshift, plus Postgres/MySQL/SQL Server.
• Strong understanding of database design principles, data modelling, and normalisation.
• Experience with database migration to the AWS cloud platform.
• Strong understanding of database security principles and best practices.
• Excellent troubleshooting and problem-solving skills.
• Ability to work independently and in a team environment.
Good to have:
• AWS certifications such as AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or AWS Certified Database Specialty.
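A hedged sketch of the kind of routine database-task automation the responsibilities mention, using Python and boto3; the instance identifier and region are placeholders, and AWS credentials are assumed to be configured.

```python
import datetime

import boto3

rds = boto3.client("rds", region_name="ap-south-1")  # region is illustrative

def snapshot_instance(db_instance_id: str) -> str:
    """Create a manual RDS snapshot named with a UTC timestamp and return its id."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{db_instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=db_instance_id,
    )
    # Block until the snapshot is available before reporting success.
    rds.get_waiter("db_snapshot_available").wait(DBSnapshotIdentifier=snapshot_id)
    return snapshot_id

if __name__ == "__main__":
    print(snapshot_instance("example-postgres-prod"))  # hypothetical instance
```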
Posted 1 week ago
8.0 - 12.0 years
25 - 40 Lacs
Chennai
Work from Office
We are seeking a highly skilled Data Architect to design and implement robust, scalable, and secure data solutions on AWS Cloud. The ideal candidate should have expertise in AWS services, data modeling, ETL processes, and big data technologies, with hands-on experience in Glue, DMS, Python, PySpark, and MPP databases like Snowflake, Redshift, or Databricks.
Key Responsibilities:
• Architect and implement data solutions leveraging AWS services such as EC2, S3, IAM, Glue (mandatory), and DMS for efficient data processing and storage.
• Develop scalable ETL pipelines using AWS Glue, Lambda, and PySpark to support data transformation, ingestion, and migration.
• Design and optimize data models following Medallion architecture, Data Mesh, and Enterprise Data Warehouse (EDW) principles.
• Implement data governance, security, and compliance best practices using IAM policies, encryption, and data masking.
• Work with MPP databases such as Snowflake, Redshift, or Databricks, ensuring performance tuning, indexing, and query optimization.
• Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to design efficient data integration strategies.
• Ensure high availability and reliability of data solutions by implementing monitoring, logging, and automation in AWS.
• Evaluate and recommend best practices for ETL workflows, data pipelines, and cloud-based data warehousing solutions.
• Troubleshoot performance bottlenecks and optimize query execution plans, indexing strategies, and data partitioning.
Job Requirements
Required Qualifications & Skills:
• Strong expertise in AWS Cloud services: compute (EC2), storage (S3), and security (IAM).
• Proficiency in programming languages: Python, PySpark, and AWS Lambda.
• Mandatory experience in ETL tools: AWS Glue and DMS for data migration and transformation.
• Expertise in MPP databases: Snowflake, Redshift, or Databricks; knowledge of RDBMS (Oracle, SQL Server) is a plus.
• Deep understanding of data modeling techniques: Medallion architecture, Data Mesh, and EDW principles.
• Experience in designing and implementing large-scale, high-performance data solutions.
• Strong analytical and problem-solving skills, with the ability to optimize data pipelines and storage solutions.
• Excellent communication and collaboration skills, with experience working in agile environments.
Preferred Qualifications:
• AWS certification (AWS Certified Data Analytics, AWS Certified Solutions Architect, or equivalent).
• Experience with real-time data streaming (Kafka, Kinesis, or similar).
• Familiarity with Infrastructure as Code (Terraform, CloudFormation).
• Understanding of data governance frameworks and compliance standards (GDPR, HIPAA, etc.).
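As a sketch of the Glue-based ETL pipelines this posting calls mandatory, a skeletal AWS Glue PySpark job: read from the Glue Data Catalog, filter, and write curated Parquet to S3. The catalog database, table, filter column, and output path are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest from the catalog as a DynamicFrame, then drop to a Spark DataFrame.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"   # assumed catalog entries
)
df = dyf.toDF().filter(F.col("order_status") == "COMPLETE")

# Write curated Parquet, partitioned for downstream Athena/Redshift Spectrum.
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/orders/"
)
job.commit()
```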
Posted 1 week ago
5.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
• Strong experience with Python, SQL, PySpark, and AWS Glue. Good to have: shell scripting, Kafka.
• Good knowledge of DevOps pipeline usage (Jenkins, Bitbucket, EKS, Lightspeed).
• Experience with AWS tools (AWS S3, EC2, Athena, Redshift, Glue, EMR, Lambda, RDS, Kinesis, DynamoDB, QuickSight, etc.).
• Orchestration using Airflow.
• Good to have: streaming technologies and processing engines such as Kinesis, Kafka, Pub/Sub, and Spark Streaming.
• Good debugging skills.
• Strong hands-on design and engineering background in AWS, across a wide range of AWS services, with the ability to demonstrate work on large engagements.
• Strong experience with, and implementation of, data lake, data warehousing, and data lakehouse architectures.
• Ensure data accuracy, integrity, privacy, security, and compliance through quality control procedures.
• Monitor data systems performance and implement optimization strategies.
• Leverage data controls to maintain data privacy, security, compliance, and quality for allocated areas of ownership.
• Demonstrable knowledge of applying data engineering best practices (coding practices applied to DS, unit testing, version control, code review).
• Experience in the insurance domain preferred.
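A minimal sketch of the Airflow orchestration named above, wiring extract, transform, and load steps into a daily DAG; the task bodies are placeholders, and Airflow 2.x is assumed (where `schedule` replaced the older `schedule_interval` argument).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source (e.g., S3 prefix or Kafka topic)")

def transform():
    print("run PySpark / Glue transformation")

def load():
    print("load into Redshift / lakehouse tables")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load.
    t_extract >> t_transform >> t_load
```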
Posted 1 week ago
5.0 - 8.0 years
10 - 20 Lacs
Hyderabad
Work from Office
Business Data Analyst - Healthcare
Job Summary
We are seeking an experienced and results-driven Business Data Analyst with 5+ years of hands-on experience in data analytics, visualization, and business insight generation. This role is ideal for someone who thrives at the intersection of business and data, translating complex data sets into compelling insights, dashboards, and strategies that support decision-making across the organization. You will collaborate closely with stakeholders across departments to identify business needs, design and build analytical solutions, and tell compelling data stories using advanced visualization tools.
Key Responsibilities
• Data Analytics & Insights: Analyze large and complex data sets to identify trends, anomalies, and opportunities that help drive business strategy and operational efficiency.
• Dashboard Development & Data Visualization: Design, develop, and maintain interactive dashboards and visual reports using tools like Power BI, Tableau, or Looker to enable data-driven decisions.
• Business Stakeholder Engagement: Collaborate with cross-functional teams to understand business goals, define metrics, and convert ambiguous requirements into concrete analytical deliverables.
• KPI Definition & Performance Monitoring: Define, track, and report key performance indicators (KPIs), ensuring alignment with business objectives and consistent measurement across teams.
• Data Modeling & Reporting Automation: Work with data engineering and BI teams to create scalable, reusable data models and automate recurring reports and analysis processes.
• Storytelling with Data: Communicate findings through clear narratives supported by data visualizations and actionable recommendations to both technical and non-technical audiences.
• Data Quality & Governance: Ensure accuracy, consistency, and integrity of data through validation, testing, and documentation practices.
Required Qualifications
• Bachelor’s or Master’s degree in Business, Economics, Statistics, Computer Science, Information Systems, or a related field.
• 5+ years of professional experience in a data analyst or business analyst role with a focus on data visualization and analytics.
• Proficiency in data visualization tools: Power BI, Tableau, or Looker (at least one).
• Strong experience in SQL and working with relational databases to extract, manipulate, and analyze data.
• Deep understanding of business processes, KPIs, and analytical methods.
• Excellent problem-solving skills with attention to detail and accuracy.
• Strong communication and stakeholder management skills, with the ability to explain technical concepts in a clear and business-friendly manner.
• Experience working in Agile or fast-paced environments.
Preferred Qualifications
• Experience working with cloud data platforms (e.g., Snowflake, BigQuery, Redshift).
• Exposure to Python or R for data manipulation and statistical analysis.
• Knowledge of data warehousing, dimensional modeling, or ELT/ETL processes.
• Domain experience in Healthcare is a plus.
Posted 1 week ago
0.0 - 3.0 years
0 Lacs
India
On-site
Description
GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com.
We believe that innovative technology starts with the best talent and have been ranked one of Ad Age’s Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here.
A Bit About The Team
GroundTruth seeks a Data Engineering Associate Software Engineer to join our Integration team. The Integration team connects and consolidates data pipelines across Avails & Inventory Forecast, Identity Graph, and POS Integration systems to ensure accurate, timely insights. We engineer seamless data flows that fuel reliable analytics and decision-making using big data technologies such as MapReduce, Spark, and Glue. We take pride in building an engineering team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As an Associate Software Engineer (ASE) on our Integration team, you will build solutions that add new capabilities to our platform.
You Will
• Create and maintain various data pipelines for the GroundTruth platform.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
• Work with stakeholders, including the Product, Analytics, and Client Services teams, to assist with data-related technical issues and support their data infrastructure needs.
• Prepare detailed specifications and low-level designs.
• Participate in code reviews.
• Test the product in controlled, real situations before going live.
• Maintain the application once it is live.
• Contribute ideas to improve the location platform.
You Have
• B.Tech./B.E./M.Tech./MCA or equivalent in computer science.
• 0-3 years of experience in data engineering.
• Experience with the AWS stack used for data engineering: EC2, S3, Athena, Redshift, EMR, ECS, Lambda, and Step Functions.
• Experience in Hadoop, MapReduce, Pig, Spark, and Glue.
• Hands-on experience with Java/Python for the orchestration of data pipelines and data engineering tasks.
• Experience writing analytical queries using SQL.
• Experience in Airflow.
• Experience in Docker.
• Proficient in Git.
How can you impress us?
• Knowledge of REST APIs.
• Any experience with big data technologies like Hadoop, MapReduce, and Pig is a plus.
• Knowledge of shell scripting.
• Experience with BI tools like Looker.
• Experience with DB maintenance.
• Experience with Amazon Web Services and Docker.
• Configuration management and QA practices.
Benefits
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
• Parental leave - Maternity and Paternity
• Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday leave, Bereavement leave & Company Holidays)
• In-office daily catered breakfast, lunch, snacks, and beverages
• Health cover for any hospitalization; covers both nuclear family and parents
• Tele-med for free doctor consultation, discounts on health checkups and medicines
• Wellness/Gym reimbursement
• Pet expense reimbursement
• Childcare expenses and reimbursements
• Employee referral program
• Education reimbursement program
• Skill development program
• Cell phone reimbursement (Mobile Subsidy program)
• Internet reimbursement, postpaid cell phone bill, or both
• Birthday treat reimbursement
• Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
• Creche reimbursement
• Co-working space reimbursement
• National Pension System employer match
• Meal card for tax benefit
• Special benefits on salary account
Posted 1 week ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Junior Developer – Commodities
Location: Mumbai, India
Team: Commodities SPM
Millennium is seeking a Junior Data Engineer to join the Commodities team in India. This position will help bridge the time-zone gap between regions as the team expands its coverage into Asia. As part of this role, you will work closely with a global team based in Jersey and the United States. The ideal candidate will have strong programming skills, experience with Linux systems, and familiarity with data management and cloud technologies. This role is an excellent opportunity to contribute to innovative solutions in the Commodities domain while working in a dynamic and collaborative environment.
Primary Responsibilities
• Develop robust and scalable software solutions using Python, for example data pipelines and parallel computing infrastructure.
• Collaborate with the team via CI/CD and a Git-based workflow.
• Assist in building tools and reports for Commodities trading and analysis.
• Gradually take on data science and modelling projects.
Required Skills
• Python: Strong fundamentals, comfortable with advanced concepts like polymorphism and metaclasses, and capable of designing scalable software architectures.
• Linux: Proficient in Linux system administration.
• DevOps: Experience with Git and CI/CD tools for version control and collaboration.
• SQL: Expertise in database management and switching between dialects such as Redshift, Postgres, and DuckDB.
• Experience with ELT systems (Orchestrator, Scheduler, …) and data pipelines in general.
Preferred Skills
• Web Development: Knowledge of web technologies and frameworks.
• Cloud Technologies: Experience with AWS products such as S3, Batch, and EKS, and managing them via Terraform.
• Monitoring Tools: Familiarity with Datadog for performance monitoring and troubleshooting.
Qualifications
• Bachelor’s degree in Computer Science, Engineering, or a related field.
• 3–5 years of experience in software development related to data management, ideally in a Commodities or related industry.
• Strong collaboration skills with the ability to work effectively with global teams.
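As an illustration of the dialect-switching SQL work this posting mentions, a small DuckDB sketch in Python; the CSV file and its schema are assumptions. DuckDB runs in-process with no server, which makes it handy for prototyping ELT queries locally before porting the same SQL (with dialect tweaks) to Redshift or Postgres.

```python
import duckdb  # pip install duckdb

con = duckdb.connect()  # in-memory, in-process database

# DuckDB can query files directly; 'trades.csv' and its columns are hypothetical.
result = con.execute(
    """
    SELECT commodity, AVG(price) AS avg_price
    FROM read_csv_auto('trades.csv')
    GROUP BY commodity
    ORDER BY avg_price DESC
    """
).fetchall()

for commodity, avg_price in result:
    print(commodity, round(avg_price, 2))
```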
Posted 1 week ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description: Lead Data Engineer
Experience Level: 7-10 years
WFO - Chennai
Required Skills (AI/ML skills mandatory):
· AWS Glue, EMR (Spark), Redshift, Step Functions: Deep experience in using AWS tools for data processing and pipeline orchestration.
· Python, SQL, PySpark/Scala: Advanced skills in Python and SQL, as well as experience in Spark-based data processing frameworks.
· Data Pipeline Performance: Expertise in tuning and optimizing data pipelines for performance and scalability.
· Monitoring & Troubleshooting: Experience with monitoring data systems and troubleshooting performance bottlenecks.
Job Description:
The Lead Data Engineer will oversee the design, development, and optimization of data pipelines that serve as the backbone for data processing and analytics. The candidate should have a strong command of cloud-based data engineering tools and the ability to lead teams in building scalable, high-performance data systems. You will collaborate closely with data scientists, analysts, and product teams to deliver reliable and efficient data architectures on AWS, ensuring they meet both current and future business needs.
Roles and Responsibilities:
· Lead Data Pipeline Development: Architect and develop scalable, secure, and optimized data pipelines using AWS Glue, Redshift, EMR (Spark), and Step Functions.
· Data Performance Tuning: Optimize data pipelines for performance, ensuring minimal latency and high throughput.
· Collaborate with Stakeholders: Work with data scientists, business analysts, and other engineers to understand requirements and deliver effective solutions.
· Data Storage & Management: Ensure efficient management of data storage in Redshift, Glue, and other AWS services.
· Team Leadership & Mentorship: Guide and mentor a team of data engineers, ensuring best practices for data engineering are followed.
· System Monitoring & Troubleshooting: Set up monitoring for all data pipelines and perform proactive troubleshooting to minimize downtime.
· Continuous Improvement: Stay up to date with emerging technologies and improve existing data pipelines to enhance performance and scalability.
Qualifications:
· Bachelor’s degree in Computer Science, Engineering, or a related field.
· 7+ years of experience in data engineering with expertise in AWS technologies.
· Advanced skills in Python, SQL, and Spark.
· Solid understanding of data engineering principles, including ETL processes, performance optimization, and scalability.
· Experience in leading teams and mentoring junior engineers.
· Excellent communication skills and ability to collaborate with cross-functional teams.
Preferred Skills:
· Experience with containerization (Docker, Kubernetes).
· Familiarity with Apache Kafka, Kinesis, or other streaming technologies.
· Knowledge of machine learning frameworks and their integration into data pipelines.
· Familiarity with infrastructure-as-code tools such as Terraform.
Skills
PRIMARY COMPETENCY: Data Engineering
PRIMARY SKILL: DBA / Data Modeling / Data Engineering
PRIMARY SKILL PERCENTAGE: 100
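A hedged sketch of driving a Step Functions pipeline from Python with boto3, per the orchestration skills listed; the state-machine ARN, region, and input payload are placeholders.

```python
import json
import time

import boto3

sfn = boto3.client("stepfunctions", region_name="ap-south-1")  # region is illustrative

# Kick off one run of a (hypothetical) ETL state machine with a dated payload.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:ap-south-1:123456789012:stateMachine:etl-pipeline",
    input=json.dumps({"run_date": "2024-01-01"}),
)

# Poll until the execution leaves the RUNNING state.
while True:
    status = sfn.describe_execution(executionArn=execution["executionArn"])["status"]
    if status != "RUNNING":
        print("finished with status:", status)  # SUCCEEDED, FAILED, TIMED_OUT, ABORTED
        break
    time.sleep(10)
```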
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Who we are
About Stripe
Stripe is a financial infrastructure platform for businesses. Millions of companies - from the world’s largest enterprises to the most ambitious startups - use Stripe to accept payments, grow their revenue, and accelerate new business opportunities. Our mission is to increase the GDP of the internet, and we have a staggering amount of work ahead. That means you have an unprecedented opportunity to put the global economy within everyone's reach while doing the most important work of your career.
About The Team
The Seller Systems team is dedicated to empowering sellers and stakeholders at Stripe by streamlining and optimizing the selling processes. We achieve this by fostering enhanced collaboration among various critical functions, including contracting, pricing, billing, and other partner teams. Our mission is to create a cohesive and efficient environment that allows our sellers to thrive, enabling them to focus on what they do best: serving our customers and driving business success. By leveraging innovative tools and fostering strong teamwork, we aim to elevate the entire selling experience at Stripe, ensuring that every stakeholder is equipped with the resources and support necessary to excel in their roles.
What you’ll do
As a software engineer on the Seller Systems team, you will design and build platforms and system solutions that are configurable and scalable around the globe. You will partner with many functions at Stripe, with the opportunity to both work on financial platform systems and drive direct seller-facing business impact.
Responsibilities
• Build the services, APIs, and systems that empower Stripe’s sales teams to be successful.
• Create seamless experiences for Stripe merchants through contracting, onboarding, and activation.
• Unlock the value of Stripe’s data to improve sales processes and the merchant experience.
• Work with engineers across the company to build new features at large scale.
• Maintain a collaborative environment, engaging in discussions and decision-making processes with stakeholders within various domains at Stripe.
Who you are
We are looking for a backend software engineer who meets the minimum requirements for this role. While preferred qualifications are a plus, they are not essential. We value individuals who are passionate about simplifying complexity to address real-world business challenges.
Minimum Requirements
• 4+ years of experience in delivering, extending, and maintaining large-scale distributed systems.
• Think about systems, services, and platforms, and write high-quality code. We work mostly in Java and Ruby.
• Design and build integration pipelines and API services.
• You enjoy exploring new datasets, particularly in systems such as Redshift or Presto/Trino.
• You possess exceptional product taste and a proven ability to address complex problems with elegant solutions.
• Hold yourself and others to a high bar when working with production systems.
• The skills to build holistically - from specs and documentation to implementation, testing, deployment, and measuring impact.
• You are capable of working in ambiguous, fast-moving environments and have the curiosity to learn the domain to a deep level.
• Enjoy working with a diverse group of people with different expertise.
• Eager to learn and effective at giving and receiving constructive feedback to/from peer engineers.
Preferred Qualifications
• Familiarity with large-scale distributed systems.
• Experience working in high-growth teams similar to Stripe.
• Knowledge of CRM platforms like Salesforce.
• Strong written and verbal communication skills for different audiences (leadership, users, stakeholders, etc.).
• Enjoy being a generalist working on the frontend, backend, and anything it takes to solve problems and delight users both internally and externally.
If you meet the minimum requirements, we encourage you to apply. Preferred qualifications are beneficial but not mandatory.
In-office expectations
Office-assigned Stripes in most of our locations are currently expected to spend at least 50% of the time in a given month in their local office or with users. This expectation may vary depending on role, team, and location. For example, Stripes in Stripe Delivery Center roles in Mexico City, Mexico and Bengaluru, India work 100% from the office. Also, some teams have greater in-office attendance requirements, to appropriately support our users and workflows, which the hiring manager will discuss. This approach helps strike a balance between bringing people together for in-person collaboration and learning from each other, while supporting flexibility when possible.
Pay and benefits
Stripe does not yet include pay ranges in job postings in every country. Stripe strongly values pay transparency and is working toward pay transparency globally.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
Job Purpose
ICE Mortgage Technology® is driving value to every customer through our effort to automate everything that can be automated in the residential mortgage industry. Our integrated solutions touch each aspect of the loan lifecycle, from the borrower's "point of thought" through e-Close and secondary solutions. Drive real automation that reduces manual workflows, increases productivity, and decreases risk. You will be working in a dynamic product development team while collaborating with other developers, management, and customer support teams. You will have an opportunity to participate in designing and developing services utilized across product lines. The ideal candidate should possess a product mentality, have a strong sense of ownership, and strive to be a good steward of his or her software. More than any concrete experience with specific technology, it is critical for the candidate to have a strong sense of what constitutes good software, be thoughtful and deliberate in picking the right technology stack, and always be open-minded to learn (from others and from failures).
Responsibilities
• Develop high-quality data processing infrastructure and scalable services that are capable of ingesting and transforming data at huge scale from many different sources on schedule.
• Turn ideas and concepts into carefully designed and well-authored quality code.
• Articulate the interdependencies and the impact of design choices.
• Develop APIs to power data-driven products and external APIs consumed by internal and external customers of the data platform.
• Collaborate with QA, product management, engineering, and UX to achieve well-groomed, predictable results.
• Improve and develop new engineering processes and tools.
Knowledge And Experience
• 3+ years of building enterprise software products.
• Experience in object-oriented design and development with languages such as Java, J2EE, and related frameworks.
• Experience building REST-based microservices in a distributed architecture, along with any cloud technologies (AWS preferred).
• Knowledge of Java/J2EE frameworks like Spring Boot, microservices, JPA, JDBC, and related frameworks is a must.
• Experience building high-throughput real-time and batch data processing pipelines using Kafka on AWS, with services like S3, Kinesis, Lambda, RDS, DynamoDB, or Redshift (should know the basics at least).
• Experience with a variety of data stores for unstructured and columnar data as well as traditional database systems, for example MySQL or Postgres.
• Proven ability to deliver working solutions on time.
• Strong analytical thinking to tackle challenging engineering problems.
• Great energy and enthusiasm with a positive, collaborative working style, and clear communication and writing skills.
• Experience working in a DevOps environment - "you build it, you run it".
• Demonstrated ability to set priorities and work in a fast-paced, dynamic team environment within a start-up culture.
• Experience with big data technologies and exposure to Hadoop, Spark, AWS Glue, AWS EMR, etc. (nice to have).
• Experience handling large data sets using technologies like HDFS, S3, Avro, and Parquet (nice to have).
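This posting's stack is Java, but the Kafka consume-and-process pattern it describes looks roughly like the following in Python with kafka-python; the topic name, brokers, and message shape are placeholders.

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "loan-events",                              # hypothetical topic
    bootstrap_servers=["localhost:9092"],       # placeholder broker
    group_id="loan-pipeline",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Each message is handed to downstream processing (validation, enrichment, load).
for message in consumer:
    event = message.value
    print(message.topic, message.partition, message.offset, event.get("loan_id"))
```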
Posted 1 week ago
5.0 - 10.0 years
10 - 15 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
We are looking for a skilled Data Analyst with excellent communication skills and deep expertise in SQL, Tableau, and modern data warehousing technologies. This role involves designing data models, building insightful dashboards, ensuring data quality, and extracting meaningful insights from large datasets to support strategic business decisions.
Key Responsibilities:
• Write advanced SQL queries to retrieve and manipulate data from cloud data warehouses such as Snowflake, Redshift, or BigQuery.
• Design and develop data models that support analytics and reporting needs.
• Build dynamic, interactive dashboards and reports using tools like Tableau, Looker, or Domo.
• Perform advanced analytics techniques including cohort analysis, time series analysis, scenario analysis, and predictive analytics.
• Validate data accuracy and perform thorough data QA to ensure high-quality output.
• Investigate and troubleshoot data issues; perform root cause analysis in collaboration with BI or data engineering teams.
• Communicate analytical insights clearly and effectively to stakeholders.
Required Skills & Qualifications:
• Excellent communication skills are mandatory for this role.
• 5+ years of experience in data analytics, BI analytics, or BI engineering roles.
• Expert-level skills in SQL, with experience writing complex queries and building views.
• Proven experience using data visualization tools like Tableau, Looker, or Domo.
• Strong understanding of data modeling principles and best practices.
• Hands-on experience working with cloud data warehouses such as Snowflake, Redshift, BigQuery, SQL Server, or Oracle.
• Intermediate-level proficiency with spreadsheet tools like Excel or Google Sheets, and with Power BI, including functions, pivots, and lookups.
• Bachelor's or advanced degree in a relevant field such as Data Science, Computer Science, Statistics, Mathematics, or Information Systems.
• Ability to collaborate with cross-functional teams, including BI engineers, to optimize reporting solutions.
• Experience in handling large-scale enterprise data environments.
• Familiarity with data governance, data cataloging, and metadata management tools (a plus but not required).
Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
"When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that’s what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product" – Jeff Bezos
Amazon.com’s success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? To make that happen, behind those millions of packages, billions of decisions get made by machines and humans: What is the accuracy of the customer-provided address? Do we know the exact location of the address on a map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate's route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? The list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.
The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative.
Key Strategic Objectives:
Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last-mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last-mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility, high-impact projects. You will lead by example and be just as passionate about operational performance and predictability as about all other aspects of the customer experience.
The successful candidate will be able to:
• Effectively manage customer expectations and resolve conflicts that balance client and company needs.
• Develop processes to effectively maintain and disseminate project information to stakeholders.
• Be successful in a delivery-focused environment and determine the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done.
• Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation, and innovation, and drive projects with urgency in this fast-paced environment.
• Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
• Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
• Serve as a role model for Amazon Leadership Principles inside and outside the organization.
• Actively seek to implement and distribute best practices across the operation.
Basic Qualifications
• 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
• Experience with data visualization using Tableau, QuickSight, or similar tools.
• Experience with a scripting language (e.g., Python, Java, or R).
• Experience building and maintaining basic data artifacts (e.g., ETL, data models, queries).
• Experience applying basic statistical methods (e.g., regression) to difficult business problems.
• Experience gathering business requirements, and using industry-standard business intelligence tool(s) to extract data, formulate metrics, and build reports.
• Track record of generating key business insights and collaborating with stakeholders.
Preferred Qualifications
• Knowledge of how to improve code quality and optimize BI processes (e.g., speed, cost, reliability).
• Knowledge of data modeling and data pipeline design.
• Experience in designing and implementing custom reporting systems using automation tools.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2967543
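As a small illustration of the "basic statistical methods (e.g., regression)" qualification, an ordinary least squares fit with statsmodels on synthetic data; the variables are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: visits roughly proportional to ad spend, plus noise.
rng = np.random.default_rng(42)
ad_spend = rng.uniform(0, 100, size=200)
visits = 3.0 * ad_spend + rng.normal(0, 10, size=200)

X = sm.add_constant(ad_spend)   # adds an intercept column to the predictor
model = sm.OLS(visits, X).fit()

print(model.params)     # estimated intercept and slope (should be near 0 and 3)
print(model.rsquared)   # goodness of fit
```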
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
ABOUT THE TEAM
Join our dynamic team of expert engineers at Creditsafe, where we are revolutionizing the data ecosystem through strategic innovation and cutting-edge architectural modernization. As a System & Product Architect, you will lead the transformation of our systems and product architecture on AWS, managing billions of data objects with daily increments exceeding 25 million. Your expertise will be pivotal in ensuring high availability, data integrity, and outstanding performance, powering our APIs and file delivery systems to deliver seamless data experiences to our global clients. Be at the forefront of data innovation and make an impact on a global scale.
ABOUT THE ROLE
This role places you at the center of Creditsafe's transformation journey. You will define architectural standards, design patterns, and technical roadmaps that guide our shift to a modern cloud infrastructure. Collaborating with technologies such as Python, Linux, Airflow, AWS DynamoDB, S3, Glue, Athena, Redshift, Lambda, API Gateway, Terraform, and CI/CD pipelines, you will ensure our platform is scalable, resilient, and ready for the future.
KEY DUTIES AND RESPONSIBILITIES
• Drive the technical vision, architecture, and design principles for system replatforming and migration.
• Design scalable, distributed architecture patterns that optimize for throughput, resilience, and maintainability.
• Create and maintain system architecture documentation, including diagrams, data flows, and design decisions.
• Establish governance frameworks for technical debt management and architectural compliance.
• Design event-driven architectures for distributed data processing using AWS technologies.
• Work with the team to support and build APIs capable of handling 1,000+ transactions per second.
• Mentor engineers on architectural best practices and system design principles.
• Partner with security teams to ensure architectures meet compliance requirements.
• Contribute to a technical roadmap aligned with the company's vision and product roadmap.
SKILLS AND QUALIFICATIONS
• 8+ years of software engineering experience, with at least 4 years in system architecture.
• Proven track record in large-scale replatforming and system modernization initiatives.
• Cloud-native architecture expertise, particularly with AWS services (Redshift, S3, DynamoDB, Lambda, API Gateway).
• Solid understanding of data platforms, ETL/ELT pipelines, and data warehousing.
• Experience with serverless architectures, microservices, and event-driven design patterns.
• Strong technical skills with Python, Terraform, and modern DevOps practices.
• Experience designing high-throughput, low-latency API solutions.
• Demonstrated technical leadership and mentoring abilities.
• Clear communication skills, with the ability to translate complex technical concepts.
• Strategic thinker who loves whiteboarding and is keen on mentoring engineers.
Desirable:
• Experience with AI and machine learning architecture patterns.
• AWS Solutions Architect – Professional certification.
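A hedged sketch of high-volume DynamoDB writes with boto3's batch_writer, relevant to the billions-of-objects scale described; the table name and item schema are assumptions.

```python
import boto3

table = boto3.resource("dynamodb").Table("company-records")  # hypothetical table

# batch_writer buffers puts into BatchWriteItem calls and automatically
# retries any unprocessed items, which keeps bulk loads simple and fast.
with table.batch_writer() as writer:
    for i in range(1000):
        writer.put_item(
            Item={
                "company_id": f"C{i:07d}",     # partition key (assumed schema)
                "snapshot_date": "2024-01-01",
                "status": "active",
            }
        )
```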
Posted 1 week ago
15.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.
Years of Experience: Candidates with 15+ years of hands-on experience
Preferred Skills
• Experience in GCP and one more cloud platform (AWS/Azure), specifically in data migration
• Use an analytical and data-driven approach to build a deep understanding of fast-changing businesses
• Familiarity with data technologies such as Snowflake, Databricks, Redshift, and Synapse
• Leading large-scale data modernization and governance initiatives, emphasizing the strategy, design, and development of Platform-as-a-Service and Infrastructure-as-a-Service that extend to private and public cloud deployment models
• Experience in designing, architecting, implementing, and managing data lakes/warehouses
• Experience with complex environments delivering application migration to cloud platforms
• Understanding of Agile, Scrum, and Continuous Delivery methodologies
• Hands-on experience with Docker and Kubernetes or other container orchestration platforms
• Strong experience in data management with an understanding of analytics and reporting
• Understanding of emerging technologies and the latest data engineering providers
• Experience in implementing enterprise data concepts such as Master Data Management and Enterprise Data Warehouse; experience with MDM standards
Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story
HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes.
HiLabs Team
Multidisciplinary industry leaders, healthcare domain experts, and AI/ML and data science experts: professionals hailing from the world's best universities, business schools, and engineering institutes, including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, the Indian Institute of Management (IIM), and the Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.
Job Title: Data Engineer I/II
Job Location: Pune, Maharashtra, India
Job Summary
We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for software developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.
Responsibilities
• Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
• Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
• Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
• Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
• Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
• Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
• Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.
Desired Profile
• Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
• 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
• Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
• Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
• Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
• Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
• Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
• Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift).
• Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
• Familiarity with version control systems like Git.
HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.
Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.
HiLabs Total Rewards
Competitive salary, accelerated incentive policies, H1B sponsorship, and a comprehensive benefits package that includes ESOPs, financial contributions toward your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs, and a collaborative working environment. Smart mentorship and highly qualified, multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes.
CCPA disclosure notice: https://www.hilabs.com/privacy
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description
Alexa+ is our next-generation assistant powered by generative AI. Alexa+ is more conversational, smarter, personalized, and gets things done. Our goal is to make Alexa+ an instantly familiar personal assistant that is always ready to help or entertain on any device. At the core of this vision is 'Alexa AI Developer Tech', a close-knit team that's dedicated to providing software developers with the tools, primitives, and services they need to easily create engaging customer experiences that expand the wealth of information, products, and services available on Alexa+. You will join a growing organization working on top technology using generative AI and have an enormous opportunity to make an impact on the design, architecture, and implementation of products used every day by people you know. We're working hard, having fun, and making history; come join us!
Key job responsibilities
• Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business.
• Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines in distributed data processing platforms.
• Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation.
• Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture.
• Design, build, and own all the components of a high-volume data warehouse end to end.
• Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment).
• Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers.
• Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources.
• Own the functional and nonfunctional scaling of software systems in your ownership area.
• Implement big data solutions for distributed computing.
About The Team
Alexa AI Developer Tech is an organization within Alexa on a mission to empower developers to create delightful and engaging experiences by making Alexa more natural, accurate, conversational, and personalized.
Basic Qualifications
• 3+ years of data engineering experience
• 4+ years of SQL experience
• Experience with data modeling, warehousing, and building ETL pipelines
Preferred Qualifications
• Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
• Experience with non-relational databases/data stores (object storage, document or key-value stores, graph databases, column-family databases)
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - ADCI - Maharashtra
Job ID: A3005594
Posted 1 week ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Role Overview
We are seeking a highly skilled and forward-thinking professional to lead our Data Engineering and Data Science initiatives. As a Lead – DE + DS, you will play a critical role in designing and scaling data pipelines, architecting data platforms, and developing predictive models that drive strategic decision-making across the organization. This is a hybrid leadership role combining hands-on technical expertise with people management and stakeholder engagement.
Key Responsibilities
Data Engineering:
• Architect and manage scalable and secure data pipelines and ETL/ELT processes using cloud-based platforms (e.g., AWS, Azure, GCP)
• Design and maintain data lake/data warehouse structures and ensure data quality, availability, and governance
• Collaborate with DevOps and platform teams to automate data workflows and deploy pipelines in production
Data Science:
• Lead the development, deployment, and monitoring of machine learning models for business use cases (e.g., forecasting, recommendation engines, anomaly detection)
• Drive experimentation and advanced analytics using statistical, machine learning, and deep learning methods
• Translate business problems into data-driven solutions and actionable insights
Leadership & Collaboration:
• Lead and mentor a team of data engineers and data scientists, fostering skill development and collaboration
• Partner with business stakeholders, product owners, and engineering teams to align on data strategies and deliver impactful outcomes
• Define and enforce best practices in data architecture, coding standards, and model lifecycle management
Required Skills & Qualifications
• Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, Mathematics, or a related field
• 8+ years of relevant experience in data engineering and/or data science, with at least 2 years in a technical leadership role
• Proficiency in SQL, Python, Spark, and distributed data processing frameworks
• Experience with data warehousing (Snowflake, Redshift, BigQuery) and data pipeline tools (Airflow, dbt, etc.)
• Strong understanding of ML frameworks (Scikit-learn, TensorFlow, PyTorch) and model deployment practices
• Solid grasp of data governance, MLOps, and CI/CD practices in a cloud environment
• Excellent communication and stakeholder management skills
Preferred Qualifications
• Experience in Agile delivery environments
• Certifications in cloud platforms (e.g., AWS Certified Data Analytics, GCP Professional Data Engineer)
• Exposure to real-time data streaming (Kafka, Kinesis, etc.)
• Familiarity with visualization tools like Power BI, Tableau, or Looker
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description Have you ever thought about what it takes to detect and prevent fraudulent activity among hundreds of millions of e-commerce transactions across the globe? What would you do to increase trust in an online marketplace where millions of buyers and sellers transact? How would you build systems that evolve over time to proactively identify and neutralize new and emerging fraud threats? Our mission in Buyer Risk Prevention is to make Amazon the safest place to transact online. Buyer Risk Prevention safeguards every financial transaction across all Amazon sites, while striving to ensure that these efforts are transparent to our legitimate customers. As such, Buyer Risk Prevention designs and builds the software systems, risk models and operational processes that minimize risk and maximize trust in Amazon.com. Within BRP, we are looking for a leader for Payment Risk Operations. Our ideal candidate will be an experienced people leader who can thrive in an ambiguous and fast-paced business landscape. You are passionate about working with complex datasets and are someone who loves to dive deep, analyze and turn data into insights. You will be responsible for analyzing terabytes of data to identify specific instances of risk, broader risk trends and points of customer friction, developing scalable solutions for prevention. You will be a leader of leaders within the PRO Analytics team, leading a team of Business Analysts, MIS and BA managers. You should have deep expertise in taking an analytical view of business questions, building up and refining metrics frameworks to measure business operations, and translating data into meaningful insights using a breadth of tools and terabytes of data. In this role, you will have ownership of end-to-end analytics development for complex questions and you’ll play an integral role in strategic decision making. You should have excellent business and communication skills to be able to work with business owners to understand business challenges and opportunities, and to drive data-driven decisions into process and tool improvements together with the business team. You will need to collaborate effectively with business and product leaders within PRO and cross-functional teams across BRP to solve problems, create operational efficiencies, and deliver successfully against high organizational standards. In addition, you will be responsible for building a robust set of operational and business metrics and will utilize metrics to determine improvement opportunities. This is a high-impact role with goals that directly impact the bottom line of the business.
Key job responsibilities Build and execute the strategy for the Payment Risk Operations Analytics team Hire, manage, coach and lead a high-performing team of Business Analysts Develop inferences using statistical rigor to simplify and inform the larger team of noteworthy findings that impact the business Build datasets, metrics, and KPIs supporting the business Design and develop highly available dashboards and metrics using SQL and Excel/QuickSight or other BI reporting tools Perform business analysis and data queries using scripting languages like R, Python, etc. Design, implement and support end-to-end analytical solutions that are highly available, reliable, secure, and scale economically Collaborate cross-functionally to recognize and help adopt best practices in reporting and analysis, data integrity, test design, analysis, validation, and documentation Proactively identify problems and opportunities and perform root cause analysis/diagnosis leading to significant business impact Work closely with internal stakeholders such as Operations, Program Managers, Workforce, Capacity Planning, machine learning, finance teams and partner teams to align them with respect to your focus area Own the delivery and backup of periodic metrics, dashboards and other reports to the leadership team Manage all aspects of BI projects such as project planning, requirements definition, risk management, communication, and implementation planning. Execute high-priority (i.e. cross-functional, high-impact) projects to create robust, scalable analytics solutions and frameworks with the help of Analytics/BIE managers Basic Qualifications 7+ years of business intelligence and analytics experience 5+ years of delivering results managing a business intelligence or analytics team, including sprint planning, roadmap planning, employee development and performance management experience Experience creating complex SQL queries joining multiple datasets, and with ETL/DW concepts Experience with Excel 5+ years using data visualization tools like Tableau, QuickSight or similar tools Experience with R, Python or other statistical/machine learning tools Experience demonstrating problem solving and root cause analysis Experience using databases with large-scale data sets Bachelor's degree in engineering, analytics, mathematics, statistics or a related technical or quantitative field Detail-oriented, with an aptitude for solving unstructured problems. The role will require the ability to extract data from various sources and to design/construct/execute complex analyses to finally come up with data/reports that help solve the business problem Analytical mindset and ability to see the big picture and influence others Good oral, written and presentation skills combined with the ability to take part in group discussions with leadership and explain complex solutions to a non-technical audience Preferred Qualifications Experience in Amazon Redshift and other AWS technologies Experience scripting for automation (e.g., Python, Perl, Ruby) Experience in e-commerce / online companies in fraud / risk control functions Ability to apply analytical, computer, statistical and quantitative problem-solving skills Ability to work effectively in a multi-task, high-volume environment Ability to be adaptable and flexible in responding to deadlines and workflow fluctuations Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI - BLR 14 SEZ Job ID: A2966814
Posted 1 week ago
3.0 - 6.0 years
0 Lacs
Greater Kolkata Area
On-site
Position Summary Strategy & Analytics AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements AWS Consultant The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs. Work you’ll do Planning, designing and developing cloud-based applications Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions Design and deploy enterprise-wide scalable operations on Cloud Platforms Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained. Monitor environment stability and respond to any issues or service requests for the environment. Educate teams on the implementation of new cloud-based initiatives, providing associated training as required Exceptional problem-solving skills, with the ability to see and solve issues Building and designing web services in the cloud, along with implementing the set-up of geographically redundant services. Orchestrating and automating cloud-based platforms Continuously monitor the system effectiveness and performance and identify the areas for improvement, collaborating with key stakeholders Provide guidance and coaching to the team members as required and also contribute to documenting the cloud operations playbook and providing thought leadership in development automation, CI/CD Provide insights into the optimization of cloud computing costs Qualifications Required: 3-6 years of technology consulting experience A minimum of 2 years of experience in Cloud Operations High degree of knowledge using AWS services like Lambda, Glue, S3, Redshift, SNS, SQS and more. Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills. Experience working on CI/CD/DevOps is nice to have. Proven experience with agile/iterative methodologies implementing Cloud projects. Ability to translate business requirements and technical requirements into technical design. Good knowledge of end-to-end project delivery methodology implementing Cloud projects.
Strong UNIX operating system concepts and shell scripting knowledge Good knowledge of cloud computing technologies and current computing trends. Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners. Ability to operate independently with a clear focus on schedule and outcomes. Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 301813
Posted 1 week ago
6.0 - 9.0 years
0 Lacs
Greater Kolkata Area
On-site
Position Summary Strategy & Analytics AI & Data In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets. AI & Data will work with our clients to: Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements AWS Senior Consultant The position is suited for individuals who have the ability to work in a constantly challenging environment and deliver effectively and efficiently. The individual will need to be adaptive and able to react quickly to changing business needs. Work you’ll do Planning, designing and developing cloud-based applications Work in tandem with the engineering team to identify and implement the most optimal cloud-based solutions Design and deploy enterprise-wide scalable operations on Cloud Platforms Deploy and debug cloud applications in accordance with best practices throughout the development lifecycle Provide administration for cloud deployments and ensure the environments are appropriately configured and maintained. Monitor environment stability and respond to any issues or service requests for the environment. Educate teams on the implementation of new cloud-based initiatives, providing associated training as required Exceptional problem-solving skills, with the ability to see and solve issues Building and designing web services in the cloud, along with implementing the set-up of geographically redundant services. Orchestrating and automating cloud-based platforms Continuously monitor the system effectiveness and performance and identify the areas for improvement, collaborating with key stakeholders Provide guidance and coaching to the team members as required and also contribute to documenting the cloud operations playbook and providing thought leadership in development automation, CI/CD Provide insights into the optimization of cloud computing costs Qualifications Required: 6-9 years of technology consulting experience A minimum of 3 years of experience in Cloud Operations High degree of knowledge using AWS services like Lambda, Glue, S3, Redshift, SNS, SQS and more. Strong scripting experience with Python, the ability to write SQL queries, and strong analytical skills. Experience working on CI/CD/DevOps is nice to have. Proven experience with agile/iterative methodologies implementing Cloud projects. Ability to translate business requirements and technical requirements into technical design. Good knowledge of end-to-end project delivery methodology implementing Cloud projects.
Strong UNIX operating system concepts and shell scripting knowledge Good knowledge of cloud computing technologies and current computing trends. Effective communication skills (written and verbal) to properly articulate complicated cloud reports to management and other IT development partners. Ability to operate independently with a clear focus on schedule and outcomes. Experience with algorithm development, including statistical and probabilistic analysis, clustering, recommendation systems, natural language processing, and performance analysis. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 304410
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description At Amazon, we strive to be the most innovative and customer-centric company on the planet. Come work with us to develop innovative products, tools and research-driven solutions in a fast-paced environment by collaborating with smart and passionate leaders, program managers and software developers. This role is based out of our Hyderabad corporate office and is for a passionate, dynamic, analytical, innovative, hands-on, and customer-obsessed Business Analyst. Key job responsibilities The ideal candidate will have experience working with large datasets and distributed computing technologies. The candidate relishes working with large volumes of data, enjoys the challenge of highly complex technical contexts, and is passionate about data and analytics. He/she should be an expert in data modeling, ETL design and business intelligence tools, and have hands-on knowledge of columnar databases such as Redshift and other related AWS technologies. He/she passionately partners with the customers to identify strategic opportunities in the field of data engineering. He/she should be a self-starter, comfortable with ambiguity, able to think big (while paying careful attention to detail) and enjoys working in a fast-paced team that continuously learns and evolves on a day-to-day basis. A day in the life This role primarily focuses on deep-dives, creating dashboards for the business, and working with different teams to develop and track metrics and bridges. Design, develop and maintain scalable, automated, user-friendly systems, reports, dashboards, etc. that will support our analytical and business needs In-depth research of drivers of the Localization business Analyze key metrics to uncover trends and root causes of issues Suggest and build new metrics and analysis that enable better perspective on the business Capture the right metrics to influence stakeholders and measure success Develop domain expertise and apply it to operational problems to find solutions Work across teams with different stakeholders to prioritize and deliver data and reporting Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation Basic Qualifications Bachelor's degree or equivalent 2+ years of experience in business analysis Technical skills – Advanced proficiency in SQL/ETL, Microsoft Excel, and statistical analysis tools and techniques. Experience with data visualization using QuickSight or similar tools Experience in defining requirements and using data and metrics to draw business insights Strong analytical skills – the ability to start from ambiguous problem statements, identify and access relevant data, make appropriate assumptions, perform insightful analysis and draw conclusions relevant to the business problem. Ability to work effectively and independently in a fast-paced environment with tight deadlines. Preferred Qualifications Experience using very large datasets Demonstrated ability to communicate complex technical problems in simple, plain stories 2+ years of experience in a business analyst, data analyst or statistical analyst role 2+ years of experience with SQL, Excel macros, Python and statistical techniques Experience with AWS services like S3, Redshift, Andes, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI HYD 13 SEZ Job ID: A2986909
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary: We are seeking a skilled Data Engineer to join our dynamic team. In this role, you will be responsible for implementing and maintaining scalable data pipelines and infrastructure on the AWS cloud platform. The ideal candidate will have experience with AWS services, particularly in the realm of big data processing and analytics. The role involves working closely with cross-functional teams to support data-driven decision-making, with a focus on delivering business objectives while improving efficiency and ensuring high service quality. Key Responsibilities: Design, develop, and maintain large-scale data pipelines that can handle large datasets from multiple sources. Knowledge of real-time data replication and batch processing of data using distributed computing platforms like Spark, Kafka, etc. Optimize performance of data processing jobs and ensure system scalability and reliability. Collaborate with DevOps teams to manage infrastructure, including cloud environments like AWS Collaborate with data scientists, analysts, and business stakeholders to develop tools and platforms that enable advanced analytics and reporting. Lead and mentor junior data engineers, providing guidance on best practices, code reviews, and technical solutions. Evaluate and implement new frameworks and tools for data engineering Strong analytical and problem-solving skills with attention to detail. Maintain a healthy working relationship with the business partners/users and other MLI departments Responsible for overall performance, cost and delivery of technology solutions Key Technical competencies/skills required: Hands-on experience with AWS services such as S3, DMS, Lambda, EMR, Glue, Redshift, RDS (Postgres), Athena, Kinesis, etc. Expertise in data modelling and knowledge of modern file and table formats. Proficiency in programming languages such as Python, PySpark, SQL/PLSQL for implementing data pipelines and ETL processes. Experience architecting or deploying Cloud/Virtualization solutions (like Data Lake, EDW, Mart) in an enterprise Knowledge of the modern data stack and keeping the technology stack refreshed. Knowledge of DevOps to perform CI/CD for data pipelines. Knowledge of Data Observability, automated data lineage and metadata management would be an added advantage. Cloud/hybrid cloud (preferably AWS) solutions for data strategy covering Data Lake, BI and Analytics Set up logging, monitoring, alerting and dashboards for cloud and data solutions Experience with data warehousing concepts. Desired qualifications and experience: Bachelor’s degree in Computer Science, Engineering, or related field (Master’s preferred). Proven experience of 7+ years as a Data Engineer or similar role with a strong focus on the AWS cloud Strong analytical and problem-solving skills with attention to detail. Excellent communication and collaboration skills. AWS certifications (e.g., AWS Certified Big Data - Specialty) are a plus
Posted 1 week ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.
The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as: - Junior Developer - Data Engineer - Senior Data Engineer - Tech Lead - Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial: - SQL - ETL Tools - Data Modeling - Cloud Computing (AWS) - Python/R Programming. A short, illustrative sketch that ties several of these together appears below.
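To make these skills concrete, here is a minimal, illustrative sketch of a task that comes up often in Redshift-focused roles: connecting to a cluster from Python and running an analytical query with a window function. Because Redshift is wire-compatible with PostgreSQL, the widely used psycopg2 driver can act as the client; every endpoint, credential, and table or column name in this sketch is hypothetical.

```python
# Illustrative sketch only: connect to a (hypothetical) Redshift cluster and
# run a star-schema-style analytical query. Redshift speaks the PostgreSQL
# wire protocol, so psycopg2 works as the client driver.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",  # hypothetical endpoint
    port=5439,              # Redshift's default port
    dbname="analytics",     # hypothetical database
    user="analyst",         # hypothetical user
    password="REPLACE_ME",  # in practice, fetch from a secrets manager; never hard-code
)

# Rank products by revenue within each category -- a typical window-function
# exercise over a fact table in a STAR schema (fact_sales is a made-up name).
sql = """
    SELECT category,
           product_id,
           SUM(revenue) AS total_revenue,
           RANK() OVER (PARTITION BY category
                        ORDER BY SUM(revenue) DESC) AS revenue_rank
    FROM fact_sales
    GROUP BY category, product_id
    ORDER BY category, revenue_rank;
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)
    for category, product_id, total_revenue, revenue_rank in cur.fetchall():
        print(category, product_id, total_revenue, revenue_rank)

conn.close()
```

Being able to explain each piece of a query like this one — the aggregation, the window function, and why it runs efficiently on a columnar store like Redshift — is the kind of depth interviewers for the roles listed above tend to probe.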
As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!