95 AWS Redshift Jobs - Page 2

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

6.0 - 11.0 years

0 - 2 Lacs

Chennai

Work from Office

Requirement 1: Senior Data Engineer – AWS Redshift & Apache Airflow
Skills: AWS Redshift development with Apache Airflow | Location: Chennai | Experience: 8+ years | Work Mode: Hybrid

Job Summary: We are seeking a highly experienced Senior Data Engineer to lead the design, development, and optimization of scalable data pipelines using AWS Redshift and Apache Airflow. The ideal candidate will have deep expertise in cloud-based data warehousing, workflow orchestration, and ETL processes, with a strong background in SQL and Python.

Key Responsibilities: Design, build, and maintain robust ETL/ELT pipelines using Apache Airflow. Integrate data from various sources into AWS Redshift for analytics and reporting. Develop and optimize Redshift schemas, tables, and queries. Monitor and tune Redshift performance for large-scale data operations. Implement and manage DAGs in Airflow for scheduling and monitoring data workflows. Ensure reliability and fault tolerance in data pipelines. Work closely with data scientists, analysts, and business stakeholders to understand data requirements and translate business needs into technical solutions. Enforce data quality, integrity, and security best practices. Implement access controls and audit mechanisms using AWS IAM and related tools. Mentor junior engineers and promote best practices in data engineering. Stay current with emerging technologies and recommend improvements.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 8+ years of experience in data engineering, with a focus on AWS Redshift and Apache Airflow. Strong proficiency in AWS services: Redshift, S3, Lambda, Glue, IAM. Proficiency in SQL and Python. Experience with ETL tools and frameworks: Apache Airflow, dbt (preferred). Experience with data modeling, performance tuning, and large-scale data processing. Familiarity with CI/CD pipelines and version control (Git). Excellent problem-solving and communication skills.

Preferred Skills: Experience with big data technologies (Spark, Hadoop). Knowledge of NoSQL databases (DynamoDB, Cassandra). AWS certification (e.g., AWS Certified Data Analytics – Specialty).

Regards, Bhavani Challa, Sr. Talent Acquisition | E: bhavani.challa@arisetg.com | M: 9063067791 | www.arisetg.com
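As context for the Airflow-plus-Redshift pattern this posting centers on, here is a minimal, hedged sketch of an Airflow DAG that COPYs an S3 partition into Redshift; the DAG id, bucket, schema, table, and connection ids are hypothetical placeholders, not details from the employer.

```python
# Minimal sketch of an Airflow DAG loading S3 data into Redshift.
# All names (DAG id, bucket, schema, table, connection ids) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="s3_to_redshift_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",          # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    # COPY the day's Parquet partition from S3 into a Redshift table.
    load_orders = S3ToRedshiftOperator(
        task_id="load_orders",
        s3_bucket="example-data-lake",
        s3_key="orders/{{ ds }}/",   # templated with the logical date
        schema="analytics",
        table="orders",
        copy_options=["FORMAT AS PARQUET"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )
```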

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram, Delhi / NCR

Work from Office

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python and PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities: Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark. Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning. Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations. Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output. Build pipelines that are not just performant but audit-ready and metadata-rich from the first version. Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions. Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed. Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch. Manage schemas and metadata using the AWS Glue Data Catalog. Enforce data quality using Great Expectations, with checks for null percentage, ranges, and referential rules. Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs). Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering. Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau. Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility. Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile: Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog. Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto). Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing. Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen. Familiarity with tagging sensitive metadata (PII, KPIs, model inputs). Capable of creating audit logs for QA and rejected data. Experience in feature engineering: rolling averages, deltas, and time-window tagging. BI readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
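As a hedged illustration of the Glue and bronze/silver/gold zoning items above, here is a minimal sketch of an AWS Glue PySpark job promoting raw bronze-zone data to the silver zone; the lake paths, dataset, and column names are invented for the example and are not from the posting.

```python
# Minimal sketch of an AWS Glue PySpark job promoting bronze data to the
# silver zone. Paths, dataset, and column names are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Bronze: raw, append-only landing data.
bronze = spark.read.parquet("s3://example-lake/bronze/shipments/")

# Silver: deduplicated, filtered, and typed for downstream analytics.
silver = (
    bronze
    .dropDuplicates(["shipment_id"])
    .filter(F.col("shipment_id").isNotNull())
    .withColumn("ingest_date", F.to_date("ingest_ts"))
)

silver.write.mode("overwrite").partitionBy("ingest_date").parquet(
    "s3://example-lake/silver/shipments/"
)
```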

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Gurugram

Hybrid

Job Description: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS data services, data wrangling using Python and PySpark, and a solid understanding of data governance, lineage, and quality frameworks. The ideal candidate will have a proven track record of delivering end-to-end data pipelines for logistics, supply chain, enterprise finance, or B2B analytics use cases.

Role & responsibilities: Design, build, and optimize ETL pipelines using AWS Glue 3.0+ and PySpark. Implement scalable and secure data lakes using Amazon S3, following bronze/silver/gold zoning. Write performant SQL using AWS Athena (Presto) with CTEs, window functions, and aggregations. Take full ownership from ingestion through transformation, validation, and metadata documentation to dashboard-ready output. Build pipelines that are not just performant but audit-ready and metadata-rich from the first version. Integrate classification tags and ownership metadata into all columns using AWS Glue Catalog tagging conventions. Ensure no pipeline moves to the QA or BI team without validation logs and field-level metadata completed. Develop job orchestration workflows using AWS Step Functions integrated with EventBridge or CloudWatch. Manage schemas and metadata using the AWS Glue Data Catalog. Enforce data quality using Great Expectations, with checks for null percentage, ranges, and referential rules. Ensure data lineage with OpenMetadata or Amundsen and add metadata classifications (e.g., PII, KPIs). Collaborate with data scientists on ML pipelines, handling JSON/Parquet I/O and feature engineering. Prepare flattened, filterable datasets for BI tools like Sigma, Power BI, or Tableau. Interpret business metrics such as forecasted revenue, margin trends, occupancy/utilization, and volatility. Work with consultants, QA, and business teams to finalize KPIs and logic.

Preferred candidate profile: Strong hands-on experience with AWS: Glue, S3, Athena, Step Functions, EventBridge, CloudWatch, Glue Data Catalog. Programming skills in Python 3.x, PySpark, and SQL (Athena/Presto). Proficient with Pandas and NumPy for data wrangling, feature extraction, and time-series slicing. Strong command of data governance tools like Great Expectations and OpenMetadata/Amundsen. Familiarity with tagging sensitive metadata (PII, KPIs, model inputs). Capable of creating audit logs for QA and rejected data. Experience in feature engineering: rolling averages, deltas, and time-window tagging. BI readiness with Sigma, with exposure to Power BI / Tableau (nice to have).
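For the data-quality duties this posting names (null-percentage, range, and referential checks with Great Expectations), a minimal sketch follows using the classic pre-1.0 `great_expectations` convenience API; the file path, column names, and thresholds are placeholders, and the exact API differs across Great Expectations versions.

```python
# Minimal sketch of the null/range/referential checks the posting describes,
# using the classic Great Expectations API. Paths and columns are placeholders.
import great_expectations as ge
import pandas as pd

df = pd.read_parquet("shipments_silver.parquet")
batch = ge.from_pandas(df)

# Null % check: the key column must be at least 99% populated.
batch.expect_column_values_to_not_be_null("shipment_id", mostly=0.99)

# Range check: weights must fall inside a plausible physical range.
batch.expect_column_values_to_be_between("weight_kg", min_value=0, max_value=50000)

# Referential-style check: status codes must come from a known set.
batch.expect_column_values_to_be_in_set(
    "status", ["CREATED", "IN_TRANSIT", "DELIVERED"]
)

result = batch.validate()
print("validation passed:", result.success)
```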

Posted 1 month ago

Apply

3.0 - 6.0 years

15 - 20 Lacs

Hyderabad

Hybrid

Hello, urgent job openings for a Data Engineer role at GlobalData (Hyderabad). Please go through the job description below to understand the requirement. If it matches your profile and you are interested in applying, please share your updated resume with m.salim@globaldata.com, using the subject line "Applying for Data Engineer @ GlobalData (Hyd)" and including the following details in the mail: Full Name; Mobile #; Qualification; Company Name; Designation; Total Work Experience (Years); Years of experience working on Snowflake/Google BigQuery; Current CTC; Expected CTC; Notice Period; Current Location / willingness to relocate to Hyderabad.

Office Address: 3rd Floor, Jyoti Pinnacle Building, Opp. Prestige IVY League Appt, Kondapur Road, Hyderabad, Telangana 500081.

Job Description: We are looking for a skilled and experienced Data Delivery Specification (DDS) Engineer to join our data team. The DDS Engineer will be responsible for designing, developing, and maintaining robust data pipelines and delivery mechanisms, ensuring timely and accurate data delivery to various stakeholders. This role requires strong expertise in cloud data platforms such as AWS, Snowflake, and Google BigQuery, along with a deep understanding of data warehousing concepts.

Key Responsibilities: Design, develop, and optimize data pipelines for efficient data ingestion, transformation, and delivery from various sources to target systems. Implement and manage data delivery solutions using cloud platforms like AWS (S3, Glue, Lambda, Redshift), Snowflake, and Google BigQuery. Collaborate with data architects, data scientists, and business analysts to understand data requirements and translate them into technical specifications. Develop and maintain DDS documents outlining data sources, transformations, quality checks, and delivery schedules. Ensure data quality, integrity, and security throughout the data lifecycle. Monitor data pipelines, troubleshoot issues, and implement solutions to ensure continuous data flow. Optimize data storage and query performance on cloud data warehouses. Implement automation for data delivery processes and monitoring. Stay current with new data technologies and best practices in data engineering and cloud platforms.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related quantitative field. 4+ years of experience in data engineering, with a focus on data delivery and warehousing. Proven experience with cloud data platforms, specifically: AWS (S3, Glue, Lambda, Redshift, or other relevant data services); Snowflake (strong experience with data warehousing, SQL, and performance optimization); Google BigQuery (experience with data warehousing, SQL, and data manipulation). Proficient in SQL for complex data querying, manipulation, and optimization. Experience with scripting languages (e.g., Python) for data pipeline automation. Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling. Experience with version control systems (e.g., Git). Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams.

Thanks & Regards, Salim (Human Resources)
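As a rough illustration of the "data delivery" half of this role, the sketch below unloads a curated Redshift table to S3 for a downstream consumer using the redshift_connector driver; the host, credentials, table, bucket, and IAM role are all placeholder values, not details from the employer.

```python
# Minimal sketch: deliver a curated Redshift dataset to S3 with UNLOAD.
# Connection details, table, bucket, and IAM role are hypothetical.
import redshift_connector

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="delivery_user",
    password="***",
)

unload_sql = """
    UNLOAD ('SELECT * FROM analytics.daily_kpis WHERE as_of = CURRENT_DATE')
    TO 's3://example-delivery-bucket/daily_kpis/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-unload-role'
    FORMAT AS PARQUET
    ALLOWOVERWRITE;
"""

cur = conn.cursor()
cur.execute(unload_sql)   # Redshift writes the Parquet files directly to S3
conn.commit()
conn.close()
```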

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Mumbai, Hyderabad, Bengaluru

Hybrid

Primary Skills: AWS, Redshift, Python, PySpark. Location: Hyderabad, Bangalore, Pune, Mumbai, Chennai.

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Hiring for a FAANG company. Note: This position is part of a program designed to support women professionals returning to the workforce after a career break (9+ months career gap).

About the Role: Join a high-impact global business team that is building cutting-edge B2B technology solutions. As part of a structured returnship program, this role is ideal for experienced professionals re-entering the workforce after a career break. You'll work on mission-critical data infrastructure in one of the world's largest cloud-based environments, helping transform enterprise procurement through intelligent architecture and scalable analytics. This role merges consumer-grade experience with enterprise-grade features to serve businesses worldwide. You'll collaborate across engineering, sales, marketing, and product teams to deliver scalable solutions that drive measurable value.

Key Responsibilities: Design, build, and manage scalable data infrastructure using modern cloud technologies. Develop and maintain robust ETL pipelines and data warehouse solutions. Partner with stakeholders to define data needs and translate them into actionable solutions. Curate and manage large-scale datasets from multiple platforms and systems. Ensure high standards for data quality, lineage, security, and governance. Enable data access for internal and external users through secure infrastructure. Drive insights and decision-making by supporting sales, marketing, and outreach teams with real-time and historical data. Work in a high-energy, fast-paced environment that values curiosity, autonomy, and impact.

Who You Are: 5+ years of experience in data engineering or related technical roles. Proficient in SQL and familiar with relational database management. Skilled in building and optimizing ETL pipelines. Strong understanding of data modeling and warehousing. Comfortable working with large-scale data systems and distributed computing. Able to work independently, collaborate with cross-functional teams, and communicate clearly. Passionate about solving complex problems through data.

Preferred Qualifications: Hands-on experience with cloud technologies including Redshift, S3, AWS Glue, EMR, Lambda, Kinesis, and Firehose. Familiarity with non-relational databases (e.g., object storage, document stores, key-value stores, column-family DBs). Understanding of cloud access control systems such as IAM roles and permissions.

Returnship Benefits: Dedicated onboarding and mentorship support. Flexible work arrangements. Opportunity to work on meaningful, global-scale projects while rebuilding your career momentum. Supportive team culture that encourages continuous learning and professional development.

Top 10 Must-Have Skills: SQL; ETL development; data modeling; cloud data warehousing (e.g., Redshift or equivalent); experience with AWS or similar cloud platforms; working with large-scale datasets; data governance and security awareness; business communication and stakeholder collaboration; automation with Python/Scala (for ETL pipelines); familiarity with non-relational databases.
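Since the preferred qualifications mention Kinesis and Firehose, here is a bare-bones, hedged sketch of pushing a record into a Firehose delivery stream with boto3; the stream name, region, and payload are invented for illustration.

```python
# Minimal sketch: send one JSON record to a Kinesis Data Firehose stream.
# The delivery stream name, region, and payload are hypothetical.
import json

import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

record = {"order_id": "o-1001", "amount": 249.99, "currency": "USD"}

firehose.put_record(
    DeliveryStreamName="example-orders-stream",
    # Firehose expects bytes; newline-delimit records for downstream parsing.
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
```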

Posted 1 month ago

Apply

8.0 - 10.0 years

7 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Mandatory skills: AWS, Python

Posted 1 month ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities: Managing incidents; troubleshooting issues; contributing to development; collaborating with other teams; suggesting improvements; enhancing system performance; training new employees.

Posted 1 month ago

Apply

7.0 - 12.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities: Managing incidents; troubleshooting issues; contributing to development; collaborating with other teams; suggesting improvements; enhancing system performance; training new employees. Skills: AWS Redshift, PL/SQL, Unix.

Posted 1 month ago

Apply

8.0 - 13.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

AWS Redshift Ops + PL/SQL + Unix. Role & responsibilities: Managing incidents; troubleshooting issues; contributing to development; collaborating with other teams; suggesting improvements; enhancing system performance; training new employees.

Posted 1 month ago

Apply

4.0 - 6.0 years

14 - 18 Lacs

Chennai

Work from Office

Responsibilities: * Design, implement, and optimize Redshift solutions using AWS Glue. * Ensure data security compliance. * Collaborate with cross-functional teams on project delivery. * Maintain and troubleshoot existing systems.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Hyderabad

Work from Office

We are seeking a skilled and motivated Lead Data Analyst to oversee the successful implementation and ongoing support of KloudGin's Business Intelligence (BI) solutions for all customers. Leveraging AWS Redshift and Amazon QuickSight, this role will ensure seamless BI integration, data accuracy, and ETL optimization, and will manage release upgrades, daily operations, and internal reporting needs.

Responsibilities: Lead and manage the end-to-end implementation of KloudGin's BI solutions across all customer projects, ensuring timely and high-quality delivery. Oversee and optimize ETL processes to ensure efficient data handling, storage, and retrieval within the BI framework. Manage daily operations, including bug tracking, prioritizing enhancements, and coordinating resolutions with both the technical team and customer contacts. Develop and manage internal reports and dashboards to support decision-making, providing insights into KPIs and operational metrics for key stakeholders. Coordinate with customers to plan and execute BI-related activities for each new release, ensuring minimal disruption and seamless upgrades. Oversee the health and performance of BI systems, working with the infrastructure team to ensure data integrity, speed, and reliability. Work closely with Product Engineering and Customer Success teams to understand requirements, align on priorities, and provide feedback for product improvements. Troubleshoot technical issues and provide solutions to ensure smooth operation of QuickSight dashboards. Identify opportunities to improve data analysis processes and enhance overall data analytics capabilities. Ensure comprehensive documentation and knowledge transfer, providing training and guidance to internal teams and end users as needed.

Requirements: Bachelor's or Master's degree in a relevant field such as Data Science, Business Analytics, Information Systems, or Computer Science. 6+ years of progressively increasing professional experience in data analytics, with at least 3+ years in a senior-level role. 3+ years' experience with Amazon QuickSight and SQL. Strong experience building visualizations, dashboards, and reports using QuickSight. Strong SQL skills for data extraction, transformation, and loading from various data sources. Experience working closely with stakeholders to understand needs and define analytic requirements. Experience with data warehousing concepts and ETL processes. Cloud computing and AWS experience. Excellent communication, collaboration, analytical, and problem-solving skills.

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Hyderabad

Hybrid

Job Title: Lead Data Engineer

Job Summary: The Lead Data Engineer will provide technical expertise in the analysis, design, development, rollout, and maintenance of data integration initiatives. This role contributes to implementation methodologies and best practices, and works on project teams to analyse, design, develop, and deploy business intelligence / data integration solutions to support a variety of customer needs. The position oversees a team of Data Integration Consultants at various levels, ensuring their success on projects, goals, trainings, and initiatives through mentoring and coaching. It provides technical expertise in needs identification, data modelling, data movement and transformation mapping (source to target), automation, and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective, whilst leveraging best-fit technologies (e.g., cloud, Hadoop, NoSQL) to address business and environmental challenges. The role works with stakeholders to identify and define self-service analytic solutions, dashboards, actionable enterprise business intelligence reports, and business intelligence best practices, and is responsible for repeatable, lean, and maintainable enterprise BI design across organizations, partnering effectively with the client team. We expect leadership not only in the conventional sense but also within the team: candidates should exhibit qualities such as innovation, critical thinking, optimism/positivity, communication, time management, collaboration, problem-solving, acting independently, knowledge sharing, and approachability.

Responsibilities: Design, develop, test, and deploy data integration processes (batch or real-time) using tools such as Microsoft SSIS, Azure Data Factory, Databricks, Matillion, Airflow, Sqoop, etc. Create functional and technical documentation, e.g. ETL architecture documentation, unit testing plans and results, data integration specifications, and data testing plans. Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow and the conceptual, logical, and physical data models based on those needs. Perform data analysis to validate data models and confirm the ability to meet business needs. May serve as project or DI lead, overseeing multiple consultants from various competencies. Stay current with emerging and changing technologies to recommend and implement beneficial technologies and approaches for data integration. Ensure proper execution/creation of methodology, training, templates, resource plans, and engagement review processes. Coach team members to ensure understanding of projects and tasks, providing effective feedback (critical and positive) and promoting growth opportunities where appropriate. Coordinate and consult with the project manager, client business staff, client technical staff, and project developers on data architecture best practices and anything else data-related at the project or business unit level. Architect, design, develop, and set direction for enterprise self-service analytic solutions, business intelligence reports, visualisations, and best-practice standards; toolsets include but are not limited to SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, and Qlik. Work with the report team to identify, design, and implement a reporting user experience that is consistent and intuitive across environments and report methods, defines security, and meets usability and scalability best practices.

Required Qualifications: 10 years of industry implementation experience with data integration tools such as AWS services (Redshift, Athena, Lambda, Glue, S3), ETL, etc. 5-8 years of management experience required; 5-8 years of consulting experience preferred. Minimum of 5 years of data architecture, data modelling, or similar experience. Bachelor's degree or equivalent experience; Master's degree preferred. Strong background in data warehousing, OLTP systems, data integration, and SDLC. Strong experience in orchestration, with working experience in cloud-native / third-party ETL data load orchestration (e.g., Data Factory, HDInsight, Data Pipeline, Cloud Composer, or similar). Understanding of and experience with major data architecture philosophies (Dimensional, ODS, Data Vault, etc.). Understanding of modern data warehouse capabilities and technologies such as real-time, cloud, and Big Data. Understanding of on-premises and cloud infrastructure architectures (e.g., Azure, AWS, GCP). Strong experience with Agile processes (Scrum cadences, roles, deliverables), working experience in Azure DevOps, JIRA, or similar, and experience in CI/CD using one or more code management platforms. Strong Databricks experience required, including creating notebooks in PySpark. Experience using major data modelling tools (e.g., ERwin, ER/Studio, PowerDesigner). Experience with major database platforms (e.g., SQL Server, Oracle, Azure Data Lake, Hadoop, Azure Synapse/SQL Data Warehouse, Snowflake, Redshift). 3-5 years' development experience in decision support / business intelligence environments utilizing tools such as SQL Server Analysis and Reporting Services, Microsoft Power BI, Tableau, Looker, etc.

Preferred Skills & Experience: Knowledge of and working experience with data integration processes, such as data warehousing, EAI, etc. Experience providing estimates for data integration projects, including testing, documentation, and implementation. Ability to analyse business requirements as they relate to data movement and transformation processes, and to research, evaluate, and recommend alternative solutions. Ability to provide technical direction to other team members, including contractors and employees. Ability to contribute to conceptual data modelling sessions to accurately define business processes independently of data structures, and then combine the two. Proven experience leading team members, directly or indirectly, in completing high-quality major deliverables with superior results. Demonstrated ability to serve as a trusted advisor who builds influence with client management beyond simply EDM. Can create documentation and presentations such that they "stand on their own". Can advise sales on the evaluation of data integration efforts for new or existing client work. Can contribute to internal/external data integration proofs of concept. Demonstrates the ability to create new and innovative solutions to problems not previously encountered. Ability to work independently on projects as well as collaborate effectively across teams. Must excel in a fast-paced, agile environment where critical thinking and strong problem-solving skills are required for success. Strong team building, interpersonal, analytical, and problem identification and resolution skills. Experience working with multi-level business communities. Can effectively utilise SQL and/or an available BI tool to validate and elaborate business rules. Demonstrates an understanding of EDM architectures and applies this knowledge in collaborating with the team to design effective solutions to business problems/issues. Effectively influences and, at times, oversees business and data analysis activities to ensure sufficient understanding and quality of data. Demonstrates a complete understanding of and utilises DSC methodology documents to efficiently complete assigned roles and associated tasks. Deals effectively with all team members and builds strong working relationships/rapport with them. Understands and leverages a multi-layer semantic model to ensure scalability, durability, and supportability of the analytic solution. Understands modern data warehouse concepts (real-time, cloud, Big Data) and how to enable such capabilities from a reporting and analytics standpoint.

Posted 1 month ago

Apply

8.0 - 12.0 years

16 - 27 Lacs

Chennai, Bengaluru

Work from Office

Role & responsibilities: Design, develop, and optimize scalable ETL pipelines using PySpark and AWS data services. Work with structured and semi-structured data from various sources and formats (CSV, JSON, Parquet). Build reusable data transformations using Spark DataFrames, RDDs, and Spark SQL. Implement data validation and quality checks, and ensure schema evolution across data sources. Manage deployment and monitoring of Spark jobs using AWS EMR, Glue, Lambda, and CloudWatch. Collaborate with product owners, architects, and data scientists to deliver robust data workflows. Tune job performance, manage partitioning strategies, and reduce job latency/cost. Contribute to version control, CI/CD processes, and production support.

Preferred candidate profile: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 5+ years of experience in PySpark, Spark SQL, RDDs, UDFs, and Spark optimization. Strong experience in building ETL workflows for large-scale data processing. Solid understanding of the AWS cloud ecosystem, especially S3, EMR, Glue, Lambda, and Athena. Proficiency in Python, SQL, and shell scripting. Experience with data lakes, partitioning strategies, and file formats (e.g., Parquet, ORC). Familiarity with Git, Jenkins, and automated testing frameworks (e.g., PyTest). Experience with Redshift, Snowflake, or other DW platforms. Exposure to data governance, cataloging, or DQ frameworks. Terraform or infrastructure-as-code experience. Understanding of Spark internals, DAGs, and caching strategies.
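To ground the Spark SQL and partitioning items above, here is a small, hedged PySpark sketch combining a window-function transform with a partitioned Parquet write; the input schema, paths, and column names are invented for the example.

```python
# Minimal PySpark sketch: a window-function transform plus a partitioned
# Parquet write. Input schema, paths, and column names are hypothetical.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")

# Keep only the latest event per user via a ranking window.
w = Window.partitionBy("user_id").orderBy(F.col("event_ts").desc())
latest = (
    events
    .withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
    .withColumn("event_date", F.to_date("event_ts"))
)

# Partitioning by date keeps downstream scans cheap.
latest.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/latest_events/"
)
```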

Posted 1 month ago

Apply

6.0 - 8.0 years

12 - 15 Lacs

Pune, Chennai

Work from Office

Required Skills: Minimum 6 years of experience in data engineering / backend data processing. Strong hands-on experience with Python for data processing. Expertise in Apache Spark (PySpark preferred). Advanced proficiency in SQL.

Posted 1 month ago

Apply

3.0 - 6.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-5 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of either AWS Redshift (or other AWS cloud data warehouse services) or Databricks. A problem-solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various business stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of the US healthcare ecosystem will be an added advantage.

Key Responsibilities: Develop, design, and maintain dashboards and reports using Tableau to support business decision-making. Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources. Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions. Work with AWS Redshift and/or Databricks for data extraction, transformation, and loading (ETL) processes. Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements. Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines. Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment. Apply knowledge of US healthcare systems to help build relevant data solutions and insights.

Required Skills & Qualifications: Minimum 3 years of experience in data analysis, business intelligence, or related roles. Strong expertise in SQL for data querying and manipulation. Extensive experience creating dashboards and reports using Tableau and Power BI. Hands-on experience working with AWS Redshift and/or Databricks. Proven problem-solving skills with a focus on providing actionable data solutions. Self-motivated and able to work independently, while being a proactive team player. Experience with or a strong understanding of US healthcare systems and data-related needs will be a plus. Excellent communication skills with the ability to work across different teams and stakeholders.

Posted 1 month ago

Apply

3.0 - 7.0 years

12 - 22 Lacs

Bengaluru

Hybrid

Job Description: We are looking for a talented and motivated Data Analyst / BI Developer with 3-5 years of experience to join our team. The ideal candidate will have a strong background in SQL, experience with dashboard creation using Tableau, and hands-on knowledge of either AWS Redshift (or other AWS cloud data warehouse services) or Databricks. A problem-solver with excellent solution-finding abilities and a proactive, independent work ethic is essential. As a key contributor to the team, you will work with various business stakeholders to deliver actionable insights and drive data-driven decision-making within the organization. A strong understanding of the US healthcare ecosystem will be an added advantage.

Key Responsibilities: Develop, design, and maintain dashboards and reports using Tableau to support business decision-making. Write and optimize complex SQL queries to extract, manipulate, and analyze data from multiple sources. Collaborate with cross-functional teams to understand business needs and translate them into effective data solutions. Work with AWS Redshift and/or Databricks for data extraction, transformation, and loading (ETL) processes. Proactively identify and resolve data issues, acting as a solution finder to overcome challenges and drive improvements. Work independently, taking ownership of tasks and ensuring high-quality deliverables within deadlines. Be a strong team player, contributing to team knowledge sharing and fostering a collaborative environment. Apply knowledge of US healthcare systems to help build relevant data solutions and insights.

Required Skills & Qualifications: Minimum 3 years of experience in data analysis, business intelligence, or related roles. Strong expertise in SQL for data querying and manipulation. Extensive experience creating dashboards and reports using Tableau and Power BI. Hands-on experience working with AWS Redshift and/or Databricks. Proven problem-solving skills with a focus on providing actionable data solutions. Self-motivated and able to work independently, while being a proactive team player. Experience with or a strong understanding of US healthcare systems and data-related needs will be a plus. Excellent communication skills with the ability to work across different teams and stakeholders.

Additional Details: Work Mode: Hybrid. Notice Period: preferably immediate joiners. Job Location: Cessna Business Park, Kadubeesanahalli, Bangalore. Interested candidates can share their updated CV to the mail ID below. Contact Person: Pawan | Contact No: 8951873995 | Mail ID: pawanbehera@infinitiresearch.com

Posted 1 month ago

Apply

3.0 - 8.0 years

9 - 19 Lacs

Kolkata, Gurugram, Bengaluru

Work from Office

Role & responsibilities: Expertise in Amazon Redshift: schema design, performance tuning (distribution keys, sort keys, WLM), query optimization, security. SQL mastery: complex queries, window functions, CTEs. Data modeling: star/snowflake schemas, dimensional modeling. ETL/ELT development: experience with tools like AWS Glue, Apache Airflow, dbt. Semantic layer development: understanding how to map business terms to physical data structures. Scripting (Python preferred). dbt knowledge is good to have.
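For the Redshift physical-design items this posting leads with (distribution keys and sort keys), a minimal sketch follows showing the DDL choices executed through the redshift_connector driver; the cluster endpoint, credentials, schema, and table are placeholder values.

```python
# Minimal sketch of Redshift physical design choices the posting names:
# DISTKEY for co-located joins and a compound SORTKEY for range filters.
# Connection details, schema, and table are hypothetical.
import redshift_connector

DDL = """
CREATE TABLE IF NOT EXISTS analytics.fact_sales (
    sale_id      BIGINT,
    customer_id  BIGINT,
    sale_date    DATE,
    amount       DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)          -- joins on customer_id stay node-local
COMPOUND SORTKEY (sale_date);  -- prunes blocks for date-range scans
"""

conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="analytics",
    user="admin",
    password="***",
)
cur = conn.cursor()
cur.execute(DDL)
conn.commit()
conn.close()
```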

Posted 1 month ago

Apply

15.0 - 24.0 years

45 - 100 Lacs

Chennai

Remote

What can you expect in a Director of Data Engineering role with TaskUs:

Key Responsibilities: Manage a geographically diverse team of Managers/Senior Managers of Data Engineering responsible for the ETL that processes, transforms, and derives attributes for all operational data for reporting and analytics use from various transactional systems. Set and enforce BI standards and architecture, aligning BI architecture with enterprise architecture. Partner with business leaders, technology leaders, and other stakeholders to champion the strategy, design, development, launch, and management of cloud data engineering projects and initiatives that can scale and rapidly meet strategic and business objectives. Define the cloud data engineering strategy, roadmap, and strategic execution steps. Collaborate with business leadership and technology partners to leverage data to support and optimize efficiencies. Define, design, and implement processes for data integration and data management on cloud data platforms, primarily AWS. Accountable for management of project prioritization, progress, and workload across cloud data engineering staff to ensure on-time delivery. Review and manage the ticketing queue to ensure timely assignment and progression of support tickets. Work directly with the IT Application Teams and other IT areas to understand and assess requirements and prioritize a backlog of cloud services to be delivered to enable transformation. Conduct comprehensive needs assessments to create and implement a modernized serverless data architecture plan that supports the business's analytics and reporting needs. Establish IT Data & Analysis standards, practices, and security measures to ensure effective and consistent information processing and consistent data quality/accessibility. Help architect cloud data engineering source-to-target auditing and alerting solutions to ensure data quality. Responsible for data architecture, ETL, backup, and security of a new AWS-based data lake framework. Conduct data quality initiatives to rid the system of old, unused, or duplicate data. Oversee complex data modeling and advanced project metadata development. Ensure that business rules are consistently applied across different user interfaces to limit the possibility of inconsistent results. Manage and architect the migration of an on-premise SQL Server star-schema DW to Redshift. Design specifications and standards for semantic layers and multidimensional models for complex BI projects, across all environments. Consult on training and usage for the business community by selecting appropriate BI tools, features, and techniques.

Required Qualifications: A people leader with strong stakeholder management experience. Strong knowledge of data warehousing concepts, with an understanding of traditional and MPP database designs and star and snowflake schemas, database migration experience, and 10 years of experience in data modeling. At least 8 years of hands-on development experience using ETL tools such as Pentaho, AWS Glue, Talend, or Airflow. Knowledge of the architecture, design, and implementation of MPP databases such as Teradata, Snowflake, or Redshift. 5 years of experience in development using cloud-based analytics solutions, preferably AWS. Knowledge of designing and implementing streaming pipelines using Apache Kafka, Apache Spark, and Fivetran/Segment. At least 5 years' experience using Python in a cloud-based environment is a plus. Knowledge of NoSQL DBs such as MongoDB is not required but preferred. Structured thinker and effective communicator.

Education / Certifications: Bachelor's degree in Computer Science, Information Technology, or related fields (an MBA or MS degree is a plus), or 15 to 20 years of relevant experience in lieu of a degree.

Work Location / Work Schedule / Travel: Remote (Global).

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/.
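The streaming-pipeline qualification above (Kafka plus Spark) could look roughly like this minimal Structured Streaming sketch; the broker addresses, topic, and S3 paths are placeholders invented for the example.

```python
# Minimal sketch of a Spark Structured Streaming job reading from Kafka and
# landing micro-batches in S3. Brokers, topic, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
    .option("subscribe", "operational-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the payload before writing it out.
events = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-lake/streams/operational-events/")
    .option("checkpointLocation", "s3://example-lake/checkpoints/operational-events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```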

Posted 1 month ago

Apply

6.0 - 11.0 years

25 - 37 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Azure expertise: proven experience with Azure cloud services, especially Azure Data Factory, Azure SQL Database, and Azure Databricks. Expert in PySpark data processing and analytics. Strong background in building and optimizing data pipelines and workflows. Required candidate profile: Solid experience with data modeling, ETL processes, and data warehousing. Performance tuning: ability to optimize data pipelines and jobs to ensure scalability and performance, troubleshooting and resolving performance issues.

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune

Hybrid

Job Duties and Responsibilities: We are looking for a self-starter to join our Data Engineering team. You will work in a fast-paced environment where you will get an opportunity to build and contribute to the full lifecycle development and maintenance of the data engineering platform.

With the Data Engineering team you will get an opportunity to: Design and implement data engineering solutions that are scalable, reliable, and secure in the cloud environment. Understand and translate business needs into data engineering solutions. Build large-scale data pipelines that can handle big data sets using distributed data processing techniques, supporting the efforts of the data science and data application teams. Partner with cross-functional stakeholders, including product managers, architects, data quality engineers, and application and quantitative science end users, to deliver engineering solutions. Contribute to defining data governance across the data platform.

Basic Requirements: A minimum of a BS degree in computer science, software engineering, or a related scientific discipline is desired. 3+ years of work experience in building scalable and robust data engineering solutions. Strong understanding of object-oriented programming and proficiency in Python (TDD) and PySpark to build scalable algorithms. 3+ years of experience in distributed computing and big data processing using the Apache Spark framework, including Spark optimization techniques. 2+ years of experience with Databricks, Delta tables, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and incremental data processing. Experience with Delta Lake and Unity Catalog. Advanced SQL coding and query optimization experience, including the ability to write analytical and nested queries. 3+ years of experience in building scalable ETL/ELT data pipelines on Databricks and AWS (EMR). 2+ years of experience orchestrating data pipelines using Apache Airflow / MWAA. Understanding of and experience with AWS services including ADX, EC2, and S3. 3+ years of experience with data modeling techniques for structured/unstructured datasets. Experience with relational/columnar databases (Redshift, RDS) and interactive querying services (Athena / Redshift Spectrum). Passion for healthcare and improving patient outcomes. Analytical thinking with strong problem-solving skills. Stays on top of emerging technologies and possesses a willingness to learn.

Bonus Experience (optional): Experience with an Agile environment. Experience operating in a CI/CD environment. Experience building HTTP/REST APIs using popular frameworks. Healthcare experience.
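Given the emphasis above on Delta tables and incremental processing, here is a small, hedged sketch of a Delta Lake MERGE upsert; it assumes a Delta-enabled Spark session (e.g., Databricks), and the table, path, and column names are invented for illustration.

```python
# Minimal sketch of incremental processing with a Delta Lake MERGE upsert.
# Assumes a Delta-enabled Spark session (e.g., Databricks); all names are
# hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge-sketch").getOrCreate()

# New increment arriving from the ingestion layer.
updates = spark.read.parquet("s3://example-lake/incoming/patients/")

target = DeltaTable.forName(spark, "clinical.patients")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.patient_id = s.patient_id")
    .whenMatchedUpdateAll()      # refresh rows that already exist
    .whenNotMatchedInsertAll()   # insert genuinely new rows
    .execute()
)
```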

Posted 1 month ago

Apply

15.0 - 24.0 years

70 - 90 Lacs

Chennai

Remote

What can you expect in a Director of Data Engineering role with TaskUs:

Key Responsibilities: Manage a geographically diverse team of Managers/Senior Managers of Data Engineering responsible for the ETL that processes, transforms, and derives attributes for all operational data for reporting and analytics use from various transactional systems. Set and enforce BI standards and architecture, aligning BI architecture with enterprise architecture. Partner with business leaders, technology leaders, and other stakeholders to champion the strategy, design, development, launch, and management of cloud data engineering projects and initiatives that can scale and rapidly meet strategic and business objectives. Define the cloud data engineering strategy, roadmap, and strategic execution steps. Collaborate with business leadership and technology partners to leverage data to support and optimize efficiencies. Define, design, and implement processes for data integration and data management on cloud data platforms, primarily AWS. Accountable for management of project prioritization, progress, and workload across cloud data engineering staff to ensure on-time delivery. Review and manage the ticketing queue to ensure timely assignment and progression of support tickets. Work directly with the IT Application Teams and other IT areas to understand and assess requirements and prioritize a backlog of cloud services to be delivered to enable transformation. Conduct comprehensive needs assessments to create and implement a modernized serverless data architecture plan that supports the business's analytics and reporting needs. Establish IT Data & Analysis standards, practices, and security measures to ensure effective and consistent information processing and consistent data quality/accessibility. Help architect cloud data engineering source-to-target auditing and alerting solutions to ensure data quality. Responsible for data architecture, ETL, backup, and security of a new AWS-based data lake framework. Conduct data quality initiatives to rid the system of old, unused, or duplicate data. Oversee complex data modeling and advanced project metadata development. Ensure that business rules are consistently applied across different user interfaces to limit the possibility of inconsistent results. Manage and architect the migration of an on-premise SQL Server star-schema DW to Redshift. Design specifications and standards for semantic layers and multidimensional models for complex BI projects, across all environments. Consult on training and usage for the business community by selecting appropriate BI tools, features, and techniques.

Required Qualifications: A people leader with strong stakeholder management experience. Strong knowledge of data warehousing concepts, with an understanding of traditional and MPP database designs and star and snowflake schemas, database migration experience, and 4-5 years of experience in data modeling. At least 5 years of hands-on development experience using ETL tools such as Pentaho, AWS Glue, Talend, or Airflow. Knowledge of the architecture, design, and implementation of MPP databases such as Teradata, Snowflake, or Redshift. 5 years of experience in development using cloud-based analytics solutions, preferably AWS. Knowledge of designing and implementing streaming pipelines using Apache Kafka, Apache Spark, and Fivetran/Segment. At least 5 years' experience using Python in a cloud-based environment is definitely a plus. Knowledge of NoSQL DBs such as MongoDB is not required but preferred. Structured thinker and effective communicator.

Education / Certifications: Bachelor's degree in Computer Science, Information Technology, or related fields (an MBA or MS degree is a plus), or 15 to 20 years of experience in lieu of a degree.

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/.

Posted 1 month ago

Apply

2.0 - 6.0 years

0 - 1 Lacs

Pune

Work from Office

As Lead Data Engineer, you'll design and manage scalable ETL pipelines and clean, structured data flows for real-time retail analytics. You'll work closely with ML engineers and business teams to deliver high-quality, ML-ready datasets. Responsibilities: Develop and optimize large-scale ETL pipelines. Design schema-aware data flows and dashboard-ready datasets. Manage data pipelines on AWS (S3, Glue, Redshift). Work with transactional and retail data for real-time insights.

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 12 Lacs

Pune

Work from Office

Job Title: DevOps Engineer with Expertise in AWS, Database Management, and Cloud Networking

Job Description: We are seeking a highly skilled DevOps Engineer with a minimum of 7 years of experience in Amazon Web Services (AWS), strong expertise in database management, and a solid understanding of cloud networking. The ideal candidate will play a pivotal role in optimizing, scaling, and maintaining our cloud infrastructure and deployment pipelines. You will work collaboratively with cross-functional teams to ensure seamless integration of cloud solutions and infrastructure with development, operations, and security protocols.

Key Responsibilities:
1. AWS Infrastructure Management: Design, deploy, and manage scalable, reliable, and secure AWS environments. Automate infrastructure provisioning using tools like Pulumi, CloudFormation, Terraform, or the AWS CDK. Optimize AWS services for performance and cost efficiency.
2. CI/CD Pipeline Development: Build and maintain robust CI/CD pipelines using tools like Azure DevOps, Jenkins, GitLab CI, CircleCI, or AWS CodePipeline. Ensure smooth and automated application deployments with minimal downtime.
3. Database Management: Manage, monitor, and optimize databases such as RDS (PostgreSQL, SQL Server), Redshift, or other cloud database solutions. Design and implement disaster recovery plans, database backups, and high-availability strategies. Troubleshoot performance issues and execute data migrations as needed.
4. Cloud Networking: Configure and manage VPCs, subnets, and security groups to ensure secure and efficient network communication. Implement and manage DNS configurations, load balancers (ALB/NLB), and VPNs. Monitor and troubleshoot network performance issues, ensuring uptime and resiliency.
5. Monitoring and Logging: Implement monitoring solutions using CloudWatch, CloudTrail, etc. Set up logging and alerting systems for proactive issue detection.
6. Collaboration and Support: Collaborate with developers, QA engineers, and security teams to ensure infrastructure aligns with business needs. Document processes, configurations, and architecture for team-wide accessibility. Provide on-call support to address critical production issues. SOC 2 compliance monitoring and tracking using Vanta software.

Required Qualifications: 5+ years of hands-on experience with AWS services (EC2, S3, Lambda, RDS, IAM, VPC, CloudFront, etc.). Strong proficiency in database management and query optimization. Deep understanding of cloud networking concepts, including routing, security, and load balancing. Experience with infrastructure-as-code (IaC) tools like Terraform, CloudFormation, or similar. Strong scripting skills in Python, Bash, or similar languages for automation. Proficiency with CI/CD tools and Git-based workflows. Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes. Excellent troubleshooting and problem-solving skills.

Preferred Skills: AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer). Knowledge of compliance and security best practices in cloud environments. Familiarity with observability platforms like Datadog.
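Because the posting lists Pulumi alongside Terraform and CloudFormation for infrastructure-as-code, here is a tiny Pulumi program in Python of the flavor the role describes; the resource names and CIDR ranges are placeholders, not part of any real environment.

```python
# Minimal Pulumi (Python) sketch: a versioned S3 bucket plus a security group
# that only admits HTTPS. Resource names and CIDR ranges are hypothetical.
import pulumi
import pulumi_aws as aws

# Versioned bucket for pipeline artifacts.
artifacts = aws.s3.Bucket(
    "pipeline-artifacts",
    versioning=aws.s3.BucketVersioningArgs(enabled=True),
)

# Security group allowing inbound HTTPS only, all outbound traffic.
web_sg = aws.ec2.SecurityGroup(
    "web-sg",
    description="Allow inbound HTTPS",
    ingress=[
        aws.ec2.SecurityGroupIngressArgs(
            protocol="tcp",
            from_port=443,
            to_port=443,
            cidr_blocks=["0.0.0.0/0"],
        )
    ],
    egress=[
        aws.ec2.SecurityGroupEgressArgs(
            protocol="-1",
            from_port=0,
            to_port=0,
            cidr_blocks=["0.0.0.0/0"],
        )
    ],
)

pulumi.export("artifacts_bucket", artifacts.bucket)
```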

Posted 1 month ago

Apply