5.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Role: Data Analyst Lead

Role Overview
We are seeking a Data Analyst Lead who will take ownership of our analytics strategy, leading a team of analysts and working closely with stakeholders across product, marketing, operations, and engineering. The ideal candidate is not only strong in SQL, data modeling, and visualization but also highly business-savvy, capable of turning data into actionable insights that drive growth and efficiency. You will be responsible for building scalable dashboards, running deep-dive analyses, mentoring junior analysts, and enabling data-driven decisions across the company.

Key Responsibilities
• Lead & Mentor: Manage a small team of data analysts. Set direction, review work, provide mentorship, and grow team capabilities.
• Business Partnering: Collaborate with stakeholders across product, marketing, sales, finance, and ops to understand their data needs and translate them into impactful analytics projects.
• Data Strategy: Define and drive data quality, consistency, and accessibility across departments. Build scalable data pipelines in collaboration with engineering teams where needed.
• Analysis & Reporting: Design, implement, and maintain dashboards (using tools like QuickSight and Metabase). Perform exploratory data analysis (EDA), cohort analysis, funnel drop-off analysis, retention modeling, etc. (see the sketch below). Provide insights on KPIs such as customer acquisition cost (CAC), LTV, churn, NPS, and usage metrics.
• Experimentation: Support A/B testing strategies by defining metrics, evaluating results, and communicating outcomes clearly.
• Drive Insights: Proactively identify business trends, anomalies, and opportunities for growth or efficiency using data.
• Tools & Tech: Oversee adoption of BI tools, ensure SQL standards, and guide best practices in dashboard design and analytics workflows.

Requirements
Must-Have
• 5+ years of experience in analytics, with at least 1-2 years leading a team or cross-functional projects.
• Advanced SQL skills and experience working with large datasets (PostgreSQL, Redshift, etc.).
• Strong skills in dashboarding and visualization (QuickSight, Metabase).
• Experience with statistical or data modeling tools (Python, R, or similar).
• Proven experience in designing KPIs and performance dashboards across teams.
• Business acumen and the ability to convert ambiguous problems into clear, data-backed narratives.

Nice-to-Have
• Experience in a high-growth startup or tech/edtech environment.
• Exposure to data warehousing (e.g., AWS Glue, Redshift, Snowflake).
• Familiarity with experimentation frameworks or product analytics tools (Mixpanel, Amplitude, GA4).
• Understanding of data privacy and governance best practices.
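The cohort and retention analyses called out above are easy to prototype in pandas. Below is a minimal sketch, assuming a hypothetical `events` DataFrame with `user_id` and datetime `event_date` columns; the names are illustrative, not from the posting.

```python
# Minimal cohort-retention sketch in pandas; `events` columns are assumed.
import pandas as pd

def cohort_retention(events: pd.DataFrame) -> pd.DataFrame:
    """Rows = signup cohort (month), columns = months since first activity,
    values = share of the cohort still active."""
    events = events.copy()
    events["event_month"] = events["event_date"].dt.to_period("M")
    # A user's cohort is the month of their first observed event.
    events["cohort"] = events.groupby("user_id")["event_month"].transform("min")
    # Months elapsed between the event and the cohort month.
    events["period"] = (events["event_month"] - events["cohort"]).apply(lambda d: d.n)
    active = (events.groupby(["cohort", "period"])["user_id"]
                    .nunique()
                    .unstack(fill_value=0))
    return active.div(active[0], axis=0)  # normalize by cohort size
```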
Posted 1 week ago
7.0 - 12.0 years
16 - 30 Lacs
Chennai
Hybrid
Responsibilities:
• Working with other members of Database Services and external development teams to deliver projects specified in our company roadmap.
• Owning, tracking, and resolving database-related incidents and requests across the following database platforms: PgSQL, SQL Server, OpenSearch, Redshift.
• Fulfilling requests and resolving incidents within SLAs.
• Reviewing service-related reports (e.g. database backups, maintenance, monitoring) daily to ensure operational issues are identified and resolved promptly.
• Responding to database-related alerts and escalations, and working with database engineering to develop strategic solutions to recurring problems.
• Identifying opportunities for process improvement, including enhancing automation for database platform provisioning.
• Enhancing database monitoring & alerting platforms to ensure proactive alerting is always achieved (see the sketch below).
• Promoting database engineering best practices within our cross-functional development teams.
• Coaching / mentoring junior engineers.

Where you'll be working: This hybrid role will have a defined work location that includes work from home and assigned office days as set by the manager.

You'll need to have:
• Experience with management & operations of database technologies & services, with the majority of your recent experience on the AWS platform.
• Strong analytical & problem-solving skills.
• In-depth experience in one or more of the following technologies operating in a high-volume, high-throughput environment: PgSQL, OpenSearch/Elasticsearch, Redshift, SQL Server.
• Experience with the above technologies to include, but not limited to: general database administration tasks; database troubleshooting & performance reviews; database & index design & maintenance; design & maintenance of partitioning; database upgrades; high availability & disaster recovery options; performance tuning & optimisation; security hardening & access provisioning; monitoring & alerting.
• Experience in managing database platforms subject to regulatory controls such as GDPR/CCPA/CPRA, HIPAA, SOC 2, etc.
• Experience in designing, developing & maintaining CI/CD pipelines for AWS infrastructure & database code deployments (using services such as Git, Bamboo, PowerShell, Redgate Flyway, CloudFormation templates, etc.).
• Ability to troubleshoot software, hardware & service-related issues.
• Excellent organizational skills & attention to detail.
• Excellent written and oral communication skills.
• Ability to coach, mentor, and influence team members through the necessary database disciplines.
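A hedged sketch of one proactive-alerting check of the kind this role describes: flagging long-running PostgreSQL queries via pg_stat_activity. The DSN environment variable and the 300-second threshold are assumptions, not values from the posting.

```python
import os
import psycopg2

LONG_QUERY_SECONDS = 300  # illustrative threshold

def long_running_queries(dsn: str):
    """Return (pid, duration, query) for active queries over the threshold."""
    sql = f"""
        SELECT pid, now() - query_start AS duration, query
        FROM pg_stat_activity
        WHERE state = 'active'
          AND query_start < now() - interval '{LONG_QUERY_SECONDS} seconds'
        ORDER BY duration DESC;
    """
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchall()

if __name__ == "__main__":
    for pid, duration, query in long_running_queries(os.environ["PG_DSN"]):
        print(f"ALERT pid={pid} running for {duration}: {query[:80]}")
```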
Posted 1 week ago
5.0 - 10.0 years
9 - 19 Lacs
Kolkata, Hyderabad, Pune
Work from Office
• Minimum 5+ years of working experience as a Databricks Developer (see the sketch below)
• Minimum 3+ years of working experience with Redshift, Python, PySpark, and AWS
• Associate should hold a Databricks certification and be willing to join within 30 days
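As a rough illustration of the Databricks/Redshift/PySpark stack this role asks for, here is a hedged sketch of a notebook-style job: read a Redshift table over JDBC, aggregate, and write Parquet. The JDBC URL, credentials, table, and S3 path are placeholders; a real Databricks job would pull credentials from a secret scope and needs the Redshift (or Postgres) JDBC driver on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("redshift-to-parquet").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:redshift://example-cluster:5439/dev")  # placeholder
          .option("dbtable", "public.orders")                         # placeholder
          .option("user", "etl_user")       # use a secret scope in practice
          .option("password", "***")
          .load())

daily_revenue = (orders
                 .groupBy(F.to_date("order_ts").alias("order_date"))
                 .agg(F.sum("amount").alias("revenue")))

daily_revenue.write.mode("overwrite").parquet(
    "s3://example-bucket/marts/daily_revenue/")  # placeholder path
```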
Posted 1 week ago
1 - 6 years
3 - 6 Lacs
Mumbai Suburbs, Navi Mumbai, Mumbai (All Areas)
Work from Office
JOB DESCRIPTION – DATA ENGINEER
Department: Technology
Location: Mumbai - Lower Parel
Employment Type: Internship / Full time

Roles & Responsibilities
• Assemble large, complex data sets that meet functional / non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies (see the sketch below).
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Key Skills / Requirements
• Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
• Advanced SQL knowledge, experience with relational/document databases and query authoring, and working familiarity with a variety of databases.
• Designing and implementing efficient and scalable data pipelines to extract, transform, and load (ETL) data from various sources.
• Building and maintaining robust data architectures, including databases and data warehouses, to ensure data availability and integrity.
• Strong analytic skills related to working with unstructured datasets.
• Knowledge of AWS cloud services: EC2, EMR, RDS, Redshift.
• Knowledge of Python is a must.

Benefits
• Competitive salary packages and bonuses.
• Mediclaim plans for you and your dependents.
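One common shape of the "SQL plus AWS big data technologies" loading work described above is staging files on S3 and loading them into Redshift with COPY. The sketch below is illustrative only; the bucket, table, DSN variable, and IAM role ARN are all placeholders.

```python
import os
import psycopg2  # Redshift speaks the Postgres wire protocol

COPY_SQL = """
COPY analytics.page_views                                   -- placeholder table
FROM 's3://example-raw-bucket/page_views/dt=2024-01-01/'    -- placeholder path
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'     -- placeholder role
FORMAT AS PARQUET;
"""

with psycopg2.connect(os.environ["REDSHIFT_DSN"]) as conn:
    with conn.cursor() as cur:
        cur.execute(COPY_SQL)
```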
Posted 2 months ago
1 - 3 years
2 - 4 Lacs
Chennai
Work from Office
Key Responsibilities:
• Design and implement scalable, secure, and high-performance data architectures.
• Define and enforce data modelling standards, best practices, and data governance policies.
• Develop data strategies that align with business objectives and future growth.
• Design, optimize, and maintain relational and NoSQL databases (e.g., PostgreSQL, MySQL, ClickHouse, MongoDB).
• Implement and manage data warehouses and data lakes for analytics and reporting (e.g., Snowflake, BigQuery, Redshift).
• Ensure efficient ETL/ELT processes for data integration and transformation.
• Define and enforce data security policies, access controls, and compliance with regulations (GDPR, HIPAA, etc.).
• Implement data lineage, data cataloging, and metadata management solutions.
• Work closely with data engineers, analysts, and business teams to understand data requirements.
• Provide technical guidance and mentorship to data teams.
• Collaborate with IT and DevOps teams to ensure seamless integration of data solutions.
• Optimize query performance, indexing strategies, and storage solutions (see the sketch below).
• Evaluate and integrate emerging technologies such as AI/ML-driven data processing and real-time analytics.
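To make the "indexing strategies" point concrete, here is a small hedged sketch against one of the listed stores (MongoDB, via pymongo): create a compound index for a common access pattern and confirm the planner uses it. The URI, database, collection, and field names are hypothetical.

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
orders = client["shop"]["orders"]                  # placeholder collection

# Compound index supporting "recent orders for one customer" queries.
orders.create_index([("customer_id", ASCENDING), ("created_at", DESCENDING)])

# Check the winning plan before relying on the index in production.
plan = orders.find({"customer_id": 42}).sort("created_at", -1).explain()
print(plan["queryPlanner"]["winningPlan"])
```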
Posted 2 months ago
8 - 12 years
9 - 19 Lacs
Bengaluru
Hybrid
Role & Job Description
• Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems.
• Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance.
• Work in tandem with our engineering team to identify and implement the most optimal solutions.
• Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design.
• Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures.
• Able to manage deliverables in fast-paced environments.

Areas of Expertise
• At least 8-10 years of experience designing and developing data solutions in an enterprise environment.
• At least 5+ years of experience on the Snowflake platform.
• Strong hands-on SQL and Python development.
• Experience designing and developing data warehouses in Snowflake.
• A minimum of three years' experience developing production-ready data ingestion and processing pipelines using Spark and Scala.
• Strong hands-on experience with orchestration tools, e.g. Airflow, Informatica, Automic (see the Airflow sketch below).
• Good understanding of metadata and data lineage.
• Hands-on knowledge of SQL analytical functions.
• Strong knowledge and hands-on experience in shell scripting and JavaScript.
• Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering.
• Good understanding of and exposure to Git, Confluence, and Jira.
• Good problem-solving and troubleshooting skills.
• Team player with a collaborative approach and excellent communication skills.
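For the orchestration-tools requirement, a minimal Airflow sketch of a daily Snowflake-style load is shown below. Task bodies are placeholders; the DAG id, schedule, and task names are assumptions rather than anything from this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def load_stage(**context):
    print("COPY raw files into the staging table")  # placeholder for a real load

def transform(**context):
    print("MERGE staging rows into the warehouse model")  # placeholder

with DAG(
    dag_id="daily_warehouse_load",       # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    stage = PythonOperator(task_id="load_stage", python_callable=load_stage)
    model = PythonOperator(task_id="transform", python_callable=transform)
    stage >> model  # transform runs only after staging succeeds
```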
Posted 2 months ago
5 - 8 years
7 - 15 Lacs
Chennai
Work from Office
5+ years of experience in database engineering and administration. Strong SQL development, query optimization, SSIS (ETL), and SSRS (reporting) skills. Hands-on experience with Redshift, MongoDB, and DynamoDB. Experience with Kafka and cloud-based database solutions on AWS.
Posted 3 months ago
3 - 5 years
5 - 12 Lacs
Chennai
Work from Office
Seeking a skilled Data Engineer with expertise in SQL, SSIS, SSRS, Redshift, DynamoDB, and MongoDB; 3+ years of experience in data engineering, database development, ETL processes, and database management.
Posted 3 months ago
10 - 15 years
20 - 25 Lacs
Bengaluru
Work from Office
PySpark, with experience in architecting high-throughput data lakes. Understanding of CDC and related tools such as Debezium and DMS. Workflows: Airflow, Step Functions, or Glue Workflows. Glue ecosystem: Glue Jobs, Data Catalog, Bookmarks, Crawlers (sketched below).
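A skeleton Glue job touching the pieces named above (Data Catalog source, job bookmarks, S3 sink) might look like the following; the database, table, and path names are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)  # enables job bookmarks across runs

# Read a cataloged table; transformation_ctx is the bookmark key.
source = glue_context.create_dynamic_frame.from_catalog(
    database="lake_raw", table_name="orders",  # placeholders
    transformation_ctx="source")

glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-lake/curated/orders/"},  # placeholder
    format="parquet")

job.commit()  # advances the bookmark so the next run picks up only new data
```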
Posted 3 months ago
6 - 11 years
12 - 20 Lacs
Chennai, Bengaluru, Hyderabad
Hybrid
Dear Candidate,

Greetings for the day!

Job description
• Responsible for delivering highly available heterogeneous database servers on multiple technology platforms.
• Strong in SQL; knowledge of Python with ETL.
• Lead all database maintenance and tuning activities, ensuring continued availability, performance, and capacity of all database services across every business application and system.
• Expected to consider current practices to develop innovative and reliable solutions that will continuously improve service quality to the business.
• Create, update, and maintain the data warehouse for business/product reporting needs.
• Refine physical database design to meet system performance requirements.
• Identify inefficiencies in current databases and investigate solutions.
• Diagnose and resolve database access and performance issues.
• Develop, implement, and maintain change control and testing processes for modifications to databases.
• Ensure all database systems meet business and performance requirements.
• Coordinate and work with other technical staff to develop relational databases and data warehouses.
• Advocate and implement standards, best practices, technical troubleshooting processes, and quality assurance.
• Develop and maintain database documentation, including data standards, procedures, and definitions for the data dictionary.
• Produce ad-hoc queries and develop reports to support business needs.
• Create and maintain technical documentation.
• Perform other management-assigned tasks as required.

Qualifications — Must-Haves
• Bachelor's or Master's degree in Computer Science, Mathematics, or another STEM discipline.
• 3+ years of experience working with relational databases (e.g. Redshift, PostgreSQL, Oracle, MySQL).
• 2+ years of experience with NoSQL database solutions (e.g. MongoDB, DynamoDB, Hadoop/HBase, etc.).
• 3+ years of experience with ETL/ELT tools (e.g. Talend, Informatica, AWS Data Pipeline; preferably Talend).
• Strong knowledge of data warehousing basics, relational database management systems, and dimensional modelling (star schema and snowflake schema).
• Configuration of ETL ecosystems and performing regular data maintenance activities such as data loads, data fixes, schema updates, database copies, etc.
• Experienced in data cleansing, enterprise data architecture, data quality, and data governance.
• Good understanding of Redshift database design using distribution style, sorting, and encoding features (see the DDL sketch below).
• Working experience with cloud computing technologies: AWS EC2, RDS, Data Migration Service (DMS), Schema Conversion Tool (SCT), AWS Glue.
• Well versed in advanced query development and design using SQL and PL/SQL, query optimization, and performance tuning of applications on various databases.
• Supporting multiple DBMS platforms in Production/QA/UAT/DEV in both on-premise and AWS cloud environments.

Strong Pluses
• Experience with database partitioning strategies on various databases (PostgreSQL, Oracle).
• Experience in migrating, automating, and supporting a variety of AWS-hosted databases (both RDBMS & NoSQL) in RDS and EC2 using CFT.
• Experience with the big data technology stack: Hadoop, Spark, Hive, MapR, Storm, Pig, Oozie, Kafka, etc.
• Experience with shell scripting for process automation.
• Experience with source code versioning with Git and Stash.
• Ability to work across multiple projects simultaneously.
• Strong experience in all aspects of the software lifecycle including design, testing, and delivery.
• Ability to understand and start projects quickly.
• Ability and willingness to work with teams located in multiple time zones.

Regards,
Sushma A
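The "distribution style, sorting, encoding" point under the Redshift bullet maps directly to table DDL. Below is a hedged illustration executed over the Postgres-protocol driver; the table, columns, and DSN variable are hypothetical.

```python
import os
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS fact_sales (
    sale_id     BIGINT        ENCODE az64,
    customer_id BIGINT        ENCODE az64,
    sale_date   DATE          ENCODE az64,
    amount      DECIMAL(12,2) ENCODE az64
)
DISTSTYLE KEY
DISTKEY (customer_id)   -- co-locate rows that join on customer_id
SORTKEY (sale_date);    -- enables range-restricted scans on date predicates
"""

with psycopg2.connect(os.environ["REDSHIFT_DSN"]) as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```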
Posted 3 months ago
5 - 10 years
15 - 25 Lacs
Chennai, Bengaluru, Bangalore Rural
Work from Office
5+ years of data engineering experience. Strong skills in Azure, Python, or Scala. Expertise in Apache Spark, Databricks, and SQL. Build scalable data pipelines and optimize workflows. Migrate Spark/Hive scripts to Databricks.
Posted 3 months ago
6 - 10 years
35 - 45 Lacs
Bengaluru
Hybrid
What the job involves
You will be joining a fast-growing team of motivated and talented engineers, helping us build and enhance a suite of innovative products that are transforming the mobile marketing industry. Our solutions enable clients to measure the effectiveness of their data in a completely novel way. This role is fairly independent and offers significant autonomy. You should be comfortable working with a geographically dispersed team and driving your tasks to completion. You will take ownership of your work and may be responsible for supporting one or more projects simultaneously based on business needs. Additionally, you will collaborate closely with our existing team of software engineers and data scientists, contributing to the continuous improvement of our product and solution suite.

Who you are
• Experience & Proficiency: You bring 6-10 years of commercial software engineering experience, with a strong command of backend development using two or more backend languages (GoLang, Python, JavaScript technologies, SQL). You have experience developing and deploying microservices, particularly in cloud environments (AWS or GCP), and developing and managing database interface software libraries.
• Microservices & Performance: You've worked with microservice architectures, appreciating the importance of performance, data security, and quality requirements. Your knowledge of concurrency and multi-threaded code ensures efficient performance and system scalability.
• Backend & Cloud Technologies: You have hands-on experience with various storage technologies, including Amazon Redshift, BigQuery, Spark, and MySQL, and are comfortable working with cloud-native services. You are familiar with Kubernetes and containerization technologies, with an understanding of their best practices.
• Problem-Solving & Initiative: You are motivated to solve complex problems and handle technical challenges. You don't need micromanagement; you're proactive, asking for help when needed but working independently to develop solutions.
• Documentation & Support: You are diligent in documenting new solutions and maintaining up-to-date records of existing systems. You also provide on-call support to ensure any issues with the onboarding process are quickly resolved.
• Continuous Learning & Collaboration: You collaborate effectively with cross-functional teams and are committed to continuously broadening your skill set. You embrace new techniques and technologies to stay current with industry standards.

Job Responsibilities
• Research, design, develop, and test ingestion pipelines to ensure high performance and data accuracy.
• Be a core member of the team creating leading-edge big data processing, analytics, and AI tools focused on deriving value from customer data.
• Balance the pace of the delivery schedule with a focus on quality and resilience.
• Maintain and optimize legacy systems while developing new, scalable solutions.
• Profile and optimize CPU usage, memory consumption, and I/O operations to enhance system performance (see the profiling sketch below).
• Document new solutions and keep existing documentation up to date.

Bonus Points
• Strong understanding of software testing methodologies and experience with automated testing frameworks.
• Experience with Kubernetes and cloud-based orchestration systems.
• Hands-on experience in deploying AI solutions, particularly Large Language Models (LLMs), in production environments.
• Proficiency with BigQuery, Spark, and Redshift for data processing and analytics.
• Expertise in infrastructure as code (IaC) using Terraform.
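For the profiling responsibility above, here is a tiny sketch of finding hot spots in an ingestion step with the standard-library profiler; everything here is illustrative.

```python
import cProfile
import pstats

def ingest_batch(rows):
    # Placeholder transform standing in for real pipeline work.
    return [{"id": r["id"], "value": r["value"] * 2} for r in rows]

rows = [{"id": i, "value": i} for i in range(1_000_000)]

profiler = cProfile.Profile()
profiler.enable()
ingest_batch(rows)
profiler.disable()

# Rank functions by cumulative time to decide what to optimize first.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)
```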
Posted 3 months ago
5 - 10 years
5 - 15 Lacs
Mumbai, Bengaluru, Bangalore Rural
Hybrid
Job Description: Data Engineer

Role Overview
We are seeking a skilled Python Data Engineer with expertise in designing and implementing data solutions using the AWS cloud platform. The ideal candidate will be responsible for building and maintaining scalable, efficient, and secure data pipelines while leveraging Python and AWS services to enable robust data analytics and decision-making processes.

Key Responsibilities
• Design, develop, and optimize data pipelines using Python and AWS services such as Glue, Lambda, S3, EMR, Redshift, Athena, and Kinesis (see the sketch below).
• Implement ETL/ELT processes to extract, transform, and load data from various sources into centralized repositories (e.g., data lakes or data warehouses).
• Collaborate with cross-functional teams to understand business requirements and translate them into scalable data solutions.
• Monitor, troubleshoot, and enhance data workflows for performance and cost optimization.
• Ensure data quality and consistency by implementing validation and governance practices.
• Work on data security best practices in compliance with organizational policies and regulations.
• Automate repetitive data engineering tasks using Python scripts and frameworks.
• Leverage CI/CD pipelines for deployment of data workflows on AWS.

Required Skills and Qualifications
• Professional Experience: 5+ years of experience in data engineering or a related field.
• Programming: Strong proficiency in Python, with experience in libraries like pandas, PySpark, or boto3.
• AWS Expertise: Hands-on experience with core AWS services for data engineering, such as AWS Glue for ETL/ELT, S3 for storage, Redshift or Athena for data warehousing and querying, Lambda for serverless compute, Kinesis or SNS/SQS for data streaming, and IAM roles for security.
• Databases: Proficiency in SQL and experience with relational (e.g., PostgreSQL, MySQL) and NoSQL (e.g., DynamoDB) databases.
• Data Processing: Knowledge of big data frameworks (e.g., Hadoop, Spark) is a plus.
• DevOps: Familiarity with CI/CD pipelines and tools like Jenkins, Git, and CodePipeline.
• Version Control: Proficient with Git-based workflows.
• Problem Solving: Excellent analytical and debugging skills.

Optional Skills
• Knowledge of data modeling and data warehouse design principles.
• Experience with data visualization tools (e.g., Tableau, Power BI).
• Familiarity with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
• Exposure to other programming languages like Scala or Java.

Education
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Why Join Us?
• Opportunity to work on cutting-edge AWS technologies.
• Collaborative and innovative work environment.
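As a hedged sketch of the Lambda-plus-S3 pattern in the responsibilities above: an S3-triggered function that cleans a CSV with pandas and writes Parquet back. The bucket names are placeholders, and pandas/pyarrow would need to be packaged in a Lambda layer.

```python
import io

import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context):
    # S3 put-event payload: bucket and key of the newly arrived object.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(body))

    # Illustrative validation/cleanup step.
    df = df.drop_duplicates()
    df.columns = [c.strip().lower() for c in df.columns]

    out = io.BytesIO()
    df.to_parquet(out, index=False)  # requires pyarrow in the runtime
    s3.put_object(Bucket="example-curated-bucket",            # placeholder
                  Key=key.replace(".csv", ".parquet"),
                  Body=out.getvalue())
```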
Posted 3 months ago
3 - 6 years
7 - 14 Lacs
Gurgaon
Hybrid
Data Analytics Engineer Job Description

Job Title: Data Analytics Engineer
Experience: 3 to 6 years
Location: Gurgaon (Hybrid)
Employment Type: Full-time

Job Description:
We are seeking a highly skilled Data Analytics Engineer with expertise in Qlik Replicate, Qlik Compose, and data warehousing to build and maintain robust data pipelines. The ideal candidate will have hands-on experience with Change Data Capture (CDC) pipelines from various sources, an understanding of Bronze, Silver, and Gold data layers, SQL querying for data warehouses like Amazon Redshift, and experience with data lakes using S3. A foundational understanding of Apache Parquet and Python is also desirable.

Key Responsibilities:
1. Data Pipeline Development & Maintenance: Design, develop, and maintain ETL/ELT pipelines using Qlik Replicate and Qlik Compose. Ensure seamless data replication and transformation across multiple systems. Implement and optimize CDC-based data pipelines from various source systems.
2. Data Layering & Warehouse Management: Implement Bronze, Silver, and Gold layer architectures to optimize data workflows (see the sketch below). Design and manage data pipelines for structured and unstructured data. Ensure data integrity and quality within Redshift and other analytical data stores.
3. Database Management & SQL Development: Write, optimize, and troubleshoot complex SQL queries for data warehouses like Redshift. Design and implement data models that support business intelligence and analytics use cases.
4. Data Lakes & Storage Optimization: Work with AWS S3-based data lakes to store and manage large-scale datasets. Optimize data ingestion and retrieval using Apache Parquet.
5. Data Integration & Automation: Integrate diverse data sources into a centralized analytics platform. Automate workflows to improve efficiency and reduce manual effort. Leverage Python for scripting, automation, and data manipulation where necessary.
6. Performance Optimization & Monitoring: Monitor data pipelines for failures and implement recovery strategies. Optimize data flows for better performance, scalability, and cost-effectiveness. Troubleshoot and resolve ETL and data replication issues proactively.

Technical Expertise Required:
• 3 to 6 years of experience in Data Engineering, ETL Development, or related roles.
• Hands-on experience with Qlik Replicate & Qlik Compose for data integration.
• Strong SQL expertise, with experience in writing and optimizing queries for Redshift.
• Experience working with Bronze, Silver, and Gold layer architectures.
• Knowledge of Change Data Capture (CDC) pipelines from multiple sources.
• Experience working with AWS S3 data lakes.
• Experience working with Apache Parquet for data storage optimization.
• Basic understanding of Python for automation and data processing.
• Experience in cloud-based data architectures (AWS, Azure, GCP) is a plus.
• Strong analytical and problem-solving skills.
• Ability to work in a fast-paced, agile environment.

Preferred Qualifications:
• Experience in performance tuning and cost optimization in Redshift.
• Familiarity with big data technologies such as Spark or Hadoop.
• Understanding of data governance and security best practices.
• Exposure to data visualization tools such as Qlik Sense, Tableau, or Power BI.
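The Bronze/Silver/Gold layering above usually comes down to promotion jobs like the hedged PySpark sketch below: deduplicate raw CDC rows to latest state and drop deletes. The S3 paths and CDC columns (`op`, `updated_at`) are assumptions about the feed, not details from the posting.

```python
from pyspark.sql import SparkSession, Window, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.parquet("s3://example-lake/bronze/customers/")  # placeholder

# Keep only the newest CDC record per key, then drop delete markers.
latest = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
silver = (bronze
          .withColumn("rn", F.row_number().over(latest))
          .filter("rn = 1 AND op <> 'D'")
          .drop("rn", "op"))

silver.write.mode("overwrite").parquet("s3://example-lake/silver/customers/")
```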
Posted 3 months ago
5 - 8 years
20 - 25 Lacs
Noida
Hybrid
Expert in Redshift administration and AWS services such as EC2, IAM, CloudWatch, and VPC. Manage and support 24x7 production as well as development databases to ensure maximum availability for various applications. Perform DBA tasks in key areas of cluster resizing, snapshots, backups, restoring objects from snapshots, ESSO SAML setup, CloudWatch monitoring, and performance management (a boto3 sketch follows below). Work in key areas of user management, access rights management, and security management. Knowledge of day-to-day database operations, deployments, and development. Good at designing and implementing WLM/Auto WLM by analyzing customer needs.
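The snapshot and cluster-resizing tasks above map onto the boto3 Redshift client; a hedged sketch follows. The identifiers and region are placeholders, and the caller needs matching IAM permissions.

```python
import boto3

redshift = boto3.client("redshift", region_name="ap-south-1")  # assumed region

# Manual snapshot ahead of a resize or risky deployment.
redshift.create_cluster_snapshot(
    SnapshotIdentifier="pre-resize-2024-01-01",  # placeholder
    ClusterIdentifier="analytics-cluster")       # placeholder

# Elastic resize to add capacity without a slower classic resize.
redshift.resize_cluster(
    ClusterIdentifier="analytics-cluster",
    NumberOfNodes=4,
    Classic=False)
```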
Posted 3 months ago
5 - 10 years
25 - 30 Lacs
Bengaluru, Noida
Work from Office
6+ years of professional software engineering experience, mostly focused on the following:
• Developing ETL pipelines involving big data.
• Developing data processing/analytics applications, primarily using PySpark.
• Experience developing applications on the cloud (AWS), mostly using services related to storage, compute, ETL, DWH, analytics, and streaming.
• Clear understanding of, and ability to implement, distributed storage, processing, and scalable applications.
• Experience working with SQL and NoSQL databases.
• Ability to write and analyze SQL, HQL, and other query languages for NoSQL databases.
• Proficiency in writing distributed & scalable data processing code using PySpark, Python, and related libraries.

Data Engineer AEP Competency
• Experience developing applications that consume services exposed as REST APIs (sketched below).
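For the REST-consumption bullet, here is a small hedged sketch of pulling an API page into PySpark and landing it raw; the endpoint, response envelope, and output path are hypothetical.

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rest-ingest").getOrCreate()

resp = requests.get("https://api.example.com/v1/events",   # placeholder endpoint
                    params={"since": "2024-01-01"},
                    timeout=30)
resp.raise_for_status()
records = resp.json()["items"]  # assumed response envelope

df = spark.createDataFrame(records)  # list of dicts -> DataFrame
df.write.mode("append").parquet("s3://example-lake/raw/events/")  # placeholder
```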
Posted 3 months ago