
1802 Redshift Jobs - Page 32

JobPe aggregates results for easy access to listings, but applications are submitted directly on the original job portal.

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Source: Naukri

We are seeking a detail-oriented and highly skilled Data Engineering Test Automation Engineer to ensure the quality, reliability, and performance of our data pipelines and platforms. The ideal candidate will have a strong background in data testing, ETL validation, and test automation frameworks. You will work closely with data engineers, analysts, and DevOps teams to build robust test suites for large-scale data solutions. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, data accuracy, completeness, and consistency, and on ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities
- Design, develop, and maintain automated test scripts for data pipelines, ETL jobs, and data integrations.
- Validate data accuracy, completeness, transformations, and integrity across multiple systems.
- Collaborate with data engineers to define test cases and establish data quality metrics.
- Develop reusable test automation frameworks and CI/CD integrations (e.g., Jenkins, GitHub Actions).
- Perform performance and load testing for data systems.
- Maintain test data management and data mocking strategies.
- Identify and track data quality issues, ensuring timely resolution.
- Perform root cause analysis and drive corrective actions.
- Contribute to QA ceremonies (standups, planning, retrospectives) and drive continuous improvement in QA processes and culture.

Must-Have Skills
- Experience in QA roles, with strong exposure to data pipeline validation and ETL testing.
- Domain knowledge of the life sciences R&D domain.
- Validate data accuracy, transformations, schema compliance, and completeness across systems using PySpark and SQL.
- Strong hands-on experience with Python, and optionally PySpark, for developing automated data validation scripts.
- Proven experience in validating ETL workflows, with a solid understanding of data transformation logic, schema comparison, and source-to-target mapping.
- Experience working with data integration and processing platforms such as Databricks, Snowflake, AWS EMR, Redshift, etc.
- Experience in manual and automated testing of data pipeline executions for both batch and real-time pipelines.
- Performance testing of large-scale, complex data engineering pipelines.
- Ability to troubleshoot data issues independently and collaborate with engineering teams on root cause analysis.
- Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management.
- Hands-on experience with API testing using Postman, pytest, or custom automation scripts.
- Experience integrating automated tests into CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar.
- Knowledge of cloud platforms such as AWS, Azure, GCP.

Good-to-Have Skills
- Certifications in Databricks, AWS, Azure, or data QA (e.g., ISTQB).
- Understanding of data privacy, compliance, and governance frameworks.
- Knowledge of UI test automation frameworks such as Selenium, JUnit, TestNG.
- Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.

Education and Professional Certifications
- Master's degree and 3 to 7 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 4 to 9 years of Computer Science, IT, or related field experience.

Soft Skills
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
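For illustration only, a minimal sketch of the kind of automated source-to-target validation this listing describes, written with PySpark and pytest. The table paths, column names, and tolerance are hypothetical placeholders, not details from the posting.

```python
# Hypothetical source-to-target validation for an ETL job, run with pytest.
# Assumes a local Spark session and placeholder Parquet paths.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[2]").appName("etl-validation").getOrCreate()


def test_row_counts_match(spark):
    # Completeness check: every source row should land in the target.
    source = spark.read.parquet("/data/raw/orders")       # placeholder path
    target = spark.read.parquet("/data/curated/orders")   # placeholder path
    assert source.count() == target.count()


def test_keys_and_amounts_reconcile(spark):
    target = spark.read.parquet("/data/curated/orders")
    # Integrity check: the primary key must never be null after transformation.
    assert target.filter(F.col("order_id").isNull()).count() == 0
    # Accuracy check: aggregated amounts should reconcile with the source.
    source_total = (spark.read.parquet("/data/raw/orders")
                    .agg(F.sum("amount").alias("t")).collect()[0]["t"])
    target_total = target.agg(F.sum("amount").alias("t")).collect()[0]["t"]
    assert abs(source_total - target_total) < 0.01
```

Tests of this shape are typically wired into a CI/CD job (Jenkins or GitHub Actions) so that pipeline changes are validated automatically.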

Posted 2 weeks ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Pune

Hybrid

Source: Naukri

Role Overview: The Senior Tech Lead - AWS Data Engineering leads the design, development, and optimization of data solutions on the AWS platform. The jobholder has a strong background in data engineering, cloud architecture, and team leadership, with a proven ability to deliver scalable and secure data systems.

Responsibilities:
- Lead the design and implementation of AWS-based data architectures and pipelines.
- Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and ensure alignment with business goals.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in AWS data environments.
- Stay updated on the latest AWS technologies and industry trends.

Key Technical Skills & Responsibilities
- Overall 10+ years of experience in IT, with a minimum of 5-7 years in the design and development of cloud data platforms using AWS services.
- Must have experience designing and developing data lake / data warehouse / data analytics solutions using AWS services such as S3, Lake Formation, Glue, Athena, EMR, Lambda, and Redshift.
- Must be aware of AWS access control and data security features such as VPC, IAM, Security Groups, KMS, etc.
- Must be good with Python and PySpark for data pipeline building.
- Must have data modeling experience, including S3 data organization.
- Must understand Hadoop components, NoSQL databases, graph databases, and time-series databases, and the AWS services available for those technologies.
- Must have experience working with structured, semi-structured, and unstructured data.
- Must have experience with streaming data collection and processing; Kafka experience is preferred.
- Experience migrating data warehouse / big data applications to AWS is preferred.
- Must be able to use Gen AI services (like Amazon Q) for productivity gains.

Eligibility Criteria:
- Bachelor's degree in Computer Science, Data Engineering, or a related field.
- Extensive experience with AWS data services and tools.
- AWS certification (e.g., AWS Certified Data Analytics - Specialty).
- Experience with machine learning and AI integration in AWS environments.
- Strong understanding of data modeling, ETL/ELT processes, and cloud integration.
- Proven leadership experience in managing technical teams.
- Excellent problem-solving and communication skills.

Our Offering
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
- Wellbeing programs and work-life balance - integration and passion-sharing events.
- Attractive salary and company initiative benefits.
- Courses and conferences.
- Hybrid work culture.
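Purely as an illustrative companion to the listing above, a minimal PySpark sketch of the S3-based data lake pattern it describes (raw zone to curated zone). The bucket names, columns, and deduplication key are hypothetical assumptions, not part of the posting.

```python
# Hypothetical batch pipeline: read raw CSV from S3, apply basic cleansing,
# and write partitioned Parquet back to a curated data lake zone.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .csv("s3://example-raw-bucket/orders/"))           # placeholder bucket

curated = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .dropDuplicates(["order_id"])                   # hypothetical key
           .filter(F.col("amount").isNotNull()))

(curated.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3://example-curated-bucket/orders/"))          # placeholder bucket
```

A job like this would usually run on Glue or EMR, with the curated Parquet registered in the Glue Data Catalog and queried through Athena or loaded into Redshift.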

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Gurugram

Work from Office

Source: Naukri

About the Role: Grade Level (for internal use): 09

The Role: Data Intelligence Engineer

The Team: The team is responsible for building, maintaining, and evolving the data intelligence architecture, data pipelines, and visualizations. It collaborates with business partners and management, working within multi-functional agile teams to ensure data integrity, lineage, and security. The team values self-service, automation, and leveraging data to drive insights and improvements.

The Impact: This role is pivotal in transforming raw data into actionable insights that improve productivity, reduce operational risks, and identify business opportunities. By designing and implementing robust data solutions and visualizations, the Data Intelligence Engineer directly supports data-driven decision-making across various levels of the organization. The position contributes to extracting tangible value from data assets, ultimately enhancing overall service performance and business outcomes.

What's in it for you:
- Opportunity to design, build, and maintain a scalable, flexible, and robust data intelligence architecture, staying current with evolving technology trends.
- Engage in creative data science and analysis to provide actionable insights that directly influence business productivity and risk reduction strategies.
- Work in a dynamic environment focused on self-service and automation, with opportunities to utilize and expand knowledge in cloud environments (AWS, Azure, GCP).
- Collaborate within multi-functional agile teams, contributing to data-driven development and enhancing your skills in a supportive setting.

Responsibilities:
- Build and maintain the data intelligence architecture, ensuring it is scalable, flexible, robust, and cost-conscious.
- Design, build, and maintain efficient data pipelines, focusing on loose coupling, data integrity, and lineage.
- Develop data visualizations with a focus on data security, self-service capabilities, and intelligible temporal metrics that highlight risks and opportunities.
- Conduct creative data science and analysis to provide actionable insights aimed at improving productivity and reducing risk.
- Work with business partners to identify how value can be extracted from data, emphasizing self-service and automation.
- Define, measure, and maintain key performance metrics, statistics for management, customer stats, business trend analysis, and overall service statistics.

What We're Looking For:

Key Qualifications:
- Bachelor's degree required, with overall experience of 4-8 years, including 3-4 years in Data Intelligence and 2-3 years in Development & Support.
- Strong experience in Python or other scripting languages (e.g., Shell, PowerShell) and strong SQL skills, with experience in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, DynamoDB, Redshift).
- Minimum 3+ years of experience in development/automation areas, including automating data ingestion, transformation, and aggregation, and working knowledge of cloud technologies like AWS, Azure, or GCP (including Blob/flat file processing).
- Experience with Power BI or Tableau, including designing dashboards with trending visuals. Good to have: knowledge of DAX, the Power BI service, dataset refreshes, and performance optimization tools.

Soft Skills:
- Strong communication skills to effectively interact with both technical and non-technical teammates and stakeholders.
- Proven ability to work independently and collaborate effectively in multi-functional agile teams.
- Strong problem-solving and analytical skills with an understanding of agile software development processes and data-driven development. A thorough understanding of the software development life cycle and agile techniques is beneficial.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Responsibilities

Job Description
- Collaborate with business users, analysts, and other stakeholders to understand their data and reporting requirements.
- Translate business needs into clear and concise functional and technical specifications for data warehouse development.
- Analyze source system data to understand its structure, quality, and potential for integration into the data warehouse.
- Work closely with developers to contribute to the design and implementation of logical and physical data models.
- Integrate closely with Quality Analysts to identify and troubleshoot data quality issues within the data warehouse.
- Assist in performance tuning efforts for SQL queries and data retrieval processes for business stakeholders.
- Participate in data quality initiatives, including defining data quality rules and monitoring data accuracy.
- Adhere to data governance/security policies and procedures.
- Assist in the creation and maintenance of data dictionaries and documentation.
- Effectively communicate technical concepts to both technical and non-technical audiences.
- Collaborate with data engineers, BI developers, and other team members to deliver data solutions.
- Stay up to date with the latest trends and technologies in data warehousing and business intelligence.

Qualifications
- Bachelor's degree in Computer Science, Information Systems, Business Analytics, or a related field.
- 6+ years of data experience, with 3-5 years of experience as a Data Warehouse Analyst or in a similar role.
- Strong proficiency in SQL and experience querying large datasets across various database platforms (e.g., GCP, Snowflake, Redshift).
- Solid understanding of data warehousing concepts, principles, and methodologies (e.g., dimensional modeling, star schema).
- Good understanding of affiliate marketing data (GA4, paid marketing channels like Google Ads, Facebook Ads, etc. - the more the better).
- Experience working with ETL/ELT processes and tools.
- Hands-on experience with at least one major business intelligence and data visualization tool (e.g., Tableau, Power BI, Looker).
- Excellent analytical, problem-solving, and data interpretation skills.
- Strong communication (written and verbal) and presentation skills.
- Ability to work independently and as part of a collaborative team.
- Detail-oriented with a strong focus on data accuracy and quality.
- Experience with cloud-based data warehousing platforms (e.g., AWS, GCP).

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Kozhikode, Kerala, India

On-site

Source: LinkedIn

Overview: 3+ years' experience in Core Java and Enterprise Java technologies with the following skills:
- Core Java (version 8 or above), Spring, Spring Boot
- Front-end technologies such as HTML, CSS, TypeScript, and popular JavaScript frameworks (Angular 9 or above), NodeJS, RxJS
- Proficiency in working with RDBMS (SQL)
- Good knowledge of REST APIs and microservice architectures
- Awareness of DevOps (CI/CD) processes, Jenkins, Docker, Kubernetes
- Knowledge of cloud services (AWS Lambdas, SQS, EKS, DynamoDB, Redshift, etc.)
- Experience with the following tools: IntelliJ, Maven, DB tools, Bitbucket, Confluence
- Should be hands-on with SQL
- Knowledge of the data lifecycle, ETL, and semantic data processing

Responsibilities:
- Design and develop Java applications for data ingestion.
- Understand the existing system, optimise the code, and develop new capabilities.
- Build UI components on Angular and NodeJS for admin, audit, monitoring, and self-service data ingestion.
- Take ownership of individual tasks end to end.

Qualifications:
- Bachelor's or Master's degree in Engineering in Computer Science/Information Technology or a similar stream.
- Good overall academic background.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize value and build creative solutions.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
- Design and develop data solutions: design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift.
- Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems.
- Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services.
- Data integration: integrate data from multiple sources, including relational databases, third-party APIs, and internal systems, to create a unified data ecosystem.
- Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance.
- Automation and optimization: automate data pipeline processes to ensure efficiency.

Preferred Technical And Professional Experience
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products and platform, and customer-facing experience.
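As a hedged illustration only (not taken from the posting), one common pattern behind "load data into Amazon Redshift from S3" is issuing a COPY command through the Redshift Data API. The cluster, database, bucket, table, and IAM role below are hypothetical placeholders.

```python
# Hypothetical load step: stage a file in S3, then COPY it into Redshift
# via the Redshift Data API. All identifiers are placeholders.
import time
import boto3

s3 = boto3.client("s3")
rsd = boto3.client("redshift-data")

# Stage the extract in S3 (placeholder bucket/key).
s3.upload_file("orders.csv", "example-staging-bucket", "landing/orders.csv")

# Ask Redshift to ingest it; the IAM role must allow read access to the bucket.
resp = rsd.execute_statement(
    ClusterIdentifier="example-cluster",          # placeholder
    Database="analytics",                         # placeholder
    DbUser="etl_user",                            # placeholder
    Sql=(
        "COPY staging.orders "
        "FROM 's3://example-staging-bucket/landing/orders.csv' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy' "
        "CSV IGNOREHEADER 1;"
    ),
)

# Poll until the statement finishes (simplified; no error handling shown).
while rsd.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(2)
print(rsd.describe_statement(Id=resp["Id"])["Status"])
```

In a Glue- or Lambda-based workflow, the same COPY step would typically be one task in a larger orchestrated pipeline rather than a standalone script.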

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Description

Job Purpose
ICE Mortgage Technology is driving value to every customer through our effort to automate everything that can be automated in the residential mortgage industry. Our integrated solutions touch each aspect of the loan lifecycle, from the borrower's "point of thought" through e-Close and secondary solutions. Drive real automation that reduces manual workflows, increases productivity, and decreases risk. You will be working in a dynamic product development team while collaborating with other developers, management, and customer support teams. You will have an opportunity to participate in designing and developing services utilized across product lines. The ideal candidate should possess a product mentality, have a strong sense of ownership, and strive to be a good steward of his or her software. More than any concrete experience with specific technology, it is critical for the candidate to have a strong sense of what constitutes good software, be thoughtful and deliberate in picking the right technology stack, and be always open-minded to learn (from others and from failures).

Responsibilities
- Develop high-quality data processing infrastructure and scalable services capable of ingesting and transforming data at huge scale from many different sources on schedule.
- Turn ideas and concepts into carefully designed and well-authored quality code.
- Articulate the interdependencies and the impact of design choices.
- Develop APIs to power data-driven products, and external APIs consumed by internal and external customers of the data platform.
- Collaborate with QA, product management, engineering, and UX to achieve well-groomed, predictable results.
- Improve and develop new engineering processes and tools.

Knowledge And Experience
- 3+ years of building enterprise software products.
- Experience in object-oriented design and development with languages such as Java, J2EE, and related frameworks.
- Experience building REST-based microservices in a distributed architecture along with any cloud technologies (AWS preferred).
- Knowledge of Java/J2EE frameworks such as Spring Boot, microservices, JPA, JDBC, and related frameworks is a must.
- Built high-throughput real-time and batch data processing pipelines using Kafka on AWS, with services like S3, Kinesis, Lambda, RDS, DynamoDB, or Redshift (should know the basics at least).
- Experience with a variety of data stores for unstructured and columnar data as well as traditional database systems, for example MySQL and Postgres.
- Proven ability to deliver working solutions on time.
- Strong analytical thinking to tackle challenging engineering problems.
- Great energy and enthusiasm with a positive, collaborative working style, and clear communication and writing skills.
- Experience working in a DevOps environment - "you build it, you run it".
- Demonstrated ability to set priorities and work in a fast-paced, dynamic team environment within a start-up culture.
- Experience with big data technologies and exposure to Hadoop, Spark, AWS Glue, AWS EMR, etc. (nice to have).
- Experience handling large data sets using technologies like HDFS, S3, Avro, and Parquet (nice to have).

Posted 2 weeks ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Source: Naukri

Data Engineer
Location: Bangalore - Onsite
Experience: 8 - 15 years
Type: Full-time

Role Overview
We are seeking an experienced Data Engineer to build and maintain scalable, high-performance data pipelines and infrastructure for our next-generation data platform. The platform ingests and processes real-time and historical data from diverse industrial sources such as airport systems, sensors, cameras, and APIs. You will work closely with AI/ML engineers, data scientists, and DevOps to enable reliable analytics, forecasting, and anomaly detection use cases.

Key Responsibilities
- Design and implement real-time (Kafka, Spark/Flink) and batch (Airflow, Spark) pipelines for high-throughput data ingestion, processing, and transformation.
- Develop data models and manage data lakes and warehouses (Delta Lake, Iceberg, etc.) to support both analytical and ML workloads.
- Integrate data from diverse sources: IoT sensors, databases (SQL/NoSQL), REST APIs, and flat files.
- Ensure pipeline scalability, observability, and data quality through monitoring, alerting, validation, and lineage tracking.
- Collaborate with AI/ML teams to provision clean, ML-ready datasets for training and inference.
- Deploy, optimize, and manage pipelines and data infrastructure across on-premise and hybrid environments.
- Participate in architectural decisions to ensure resilient, cost-effective, and secure data flows.
- Contribute to infrastructure-as-code and automation for data deployment using Terraform, Ansible, or similar tools.

Qualifications & Required Skills
- Bachelor's or Master's in Computer Science, Engineering, or a related field.
- 6+ years in data engineering roles, with at least 2 years handling real-time or streaming pipelines.
- Strong programming skills in Python/Java and SQL.
- Experience with Apache Kafka, Apache Spark, or Apache Flink for real-time and batch processing.
- Hands-on with Airflow, dbt, or other orchestration tools.
- Familiarity with data modeling (OLAP/OLTP), schema evolution, and format handling (Parquet, Avro, ORC).
- Experience with hybrid/on-prem and cloud platform (AWS/GCP/Azure) deployments.
- Proficient in working with data lakes/warehouses like Snowflake, BigQuery, Redshift, or Delta Lake.
- Knowledge of DevOps practices, Docker/Kubernetes, Terraform or Ansible.
- Exposure to data observability, data cataloging, and quality tools (e.g., Great Expectations, OpenMetadata).

Good-to-Have
- Experience with time-series databases (e.g., InfluxDB, TimescaleDB) and sensor data.
- Prior experience in domains such as aviation, manufacturing, or logistics is a plus.
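Purely as an illustration of the real-time ingestion pattern this role describes (Kafka into a Spark pipeline), a minimal Structured Streaming sketch. The broker address, topic, schema, and sink paths are hypothetical assumptions.

```python
# Hypothetical streaming ingestion: read sensor events from Kafka,
# parse the JSON payload, and append them to a Parquet sink.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("reading", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
          .option("subscribe", "sensor-events")                # placeholder topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "/data/lake/sensor_events")           # placeholder path
         .option("checkpointLocation", "/data/chk/sensor_events")
         .outputMode("append")
         .start())

query.awaitTermination()
```

Running this requires the Spark Kafka connector package on the classpath; in practice the sink would often be Delta Lake or Iceberg rather than plain Parquet.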

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Responsibilities

Job Description
- Lead the design, development, and maintenance of data models optimized for reporting and analysis.
- Ensure data quality, integrity, and consistency throughout the data warehousing process to enable reliable and timely ingestion of data from source systems.
- Troubleshoot and resolve issues related to data pipelines and data integrity.
- Work closely with business analysts and other stakeholders to understand their data needs and provide solutions.
- Communicate technical concepts to non-technical audiences effectively.
- Ensure the data warehouse is scalable to accommodate growing data volumes and user demands.
- Ensure adherence to data governance and privacy policies and procedures.
- Implement and monitor data quality metrics and processes.
- Lead and mentor a team of data warehouse developers, providing technical guidance and support.
- Stay up to date with the latest trends and technologies in data warehousing and business intelligence.
- Foster a collaborative and high-performing team environment.

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 9+ years of progressive experience in data warehousing, with at least 3 years in a lead or senior role.
- Deep understanding of data warehousing concepts, principles, and methodologies.
- Strong proficiency in SQL and experience with various database platforms (e.g., BigQuery, Redshift, Snowflake).
- Good understanding of affiliate marketing data (GA4, paid marketing channels like Google Ads, Facebook Ads, etc. - the more the better).
- Hands-on experience with dbt and other ETL/ELT tools and technologies.
- Experience with data modeling techniques (e.g., dimensional modeling, star schema, snowflake schema).
- Experience with cloud-based data warehousing solutions (e.g., AWS, Azure, GCP) - GCP is highly preferred.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication, presentation, and interpersonal skills.
- Ability to thrive in a fast-paced and dynamic environment.
- Familiarity with business intelligence and reporting tools (e.g., Tableau, Power BI, Looker).
- Experience with data governance and data quality frameworks is a plus.

Perks
- Day off on the 3rd Friday of every month (one long weekend each month).
- Monthly Wellness Reimbursement Program to promote health and well-being.
- Monthly Office Commutation Reimbursement Program.
- Paid paternity and maternity leaves.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: LinkedIn

Role Overview
As a Data Science Analyst at Jai Kisan, you will play a critical role in transforming data into actionable insights to inform strategic business decisions. You'll work cross-functionally with product, engineering, operations, and leadership teams to unlock the full potential of data through advanced analytics, automation, and AI-driven insights. This role requires a solid foundation in data handling, modern analytics tooling, and a deep curiosity for leveraging emerging technologies like LLMs, vector databases, and cloud-native platforms.

Key Responsibilities
- Collect, clean, preprocess, and validate datasets from diverse structured and unstructured sources, including APIs, data lakes, and real-time streams.
- Conduct exploratory data analysis (EDA) to identify trends, correlations, and business opportunities using statistical and machine learning techniques.
- Build, maintain, and optimize scalable data pipelines using Airflow, dbt, or Dagster to support both batch and real-time analytics.
- Develop and deploy AI/ML models, including LLM-based applications, for predictive analytics, recommendation systems, and automation use cases.
- Work with vector databases (e.g., Pinecone, Weaviate, Chroma) for semantic search and embedding-based applications.
- Design and manage dashboards and self-serve analytics tools using Power BI, Looker Studio, or Tableau to enable data-driven decisions.
- Collaborate with backend and data engineers to integrate data solutions into microservices and APIs.
- Interpret and clearly communicate complex analytical findings to stakeholders, including non-technical teams.
- Stay ahead of industry trends, including AI advancements, data governance, MLOps, vector search, and cloud-native services.

Required Skills & Technologies
- Databases: proficient in SQL, PostgreSQL, MongoDB, with working knowledge of vector databases (e.g., Pinecone, FAISS, Weaviate).
- Languages & tools: strong programming experience in Python (pandas, NumPy, scikit-learn, LangChain, PyTorch/TensorFlow), SQL, and optionally R.
- Data workflow tools: experience with Apache Airflow, dbt, Dagster, or similar tools.
- BI & visualization: proficiency in Power BI, Tableau, Looker Studio, Plotly, or Matplotlib.
- AI/ML: exposure to LLMs (GPT, BERT, etc.), embedding models, and AI prompt engineering for analytics augmentation.
- Data APIs & embeddings: familiarity with OpenAI, Cohere, and Hugging Face APIs for vector search and semantic understanding.
- Cloud platforms: hands-on experience with AWS, GCP, or Azure, especially with services like S3, BigQuery, Redshift, Athena, or Azure Synapse.
- Version control & DevOps: experience using Git, CI/CD pipelines, and Docker is a plus.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Economics, or a related field.
- 1-3 years of hands-on experience in a data analysis or applied machine learning role.
- Strong problem-solving and storytelling abilities with a deep sense of ownership.
- Excellent communication and collaboration skills; ability to translate technical findings into business impact.

Good To Have
- Experience with MCP (Multi-Cloud Platforms) and cloud-agnostic data pipelines.
- Understanding of data mesh, data fabric, or modern data stack architectures.
- Contributions to open-source analytics tools or AI projects.
- Knowledge of data privacy, compliance standards (GDPR, SOC2), and data security best practices.

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Role Overview: We are looking for a highly skilled and experienced Senior ETL & Data Streaming Engineer with over 10 years of experience to play a pivotal role in designing, developing, and maintaining our robust data pipelines. The ideal candidate will have deep expertise in both batch ETL processes and real-time data streaming technologies, coupled with extensive hands-on experience with AWS data services. A proven track record of working with Data Lake architectures and traditional Data Warehousing environments is essential.

Key Responsibilities:
- Design, develop, and implement highly scalable, fault-tolerant, and performant ETL processes using industry-leading ETL tools to extract, transform, and load data from various source systems into our Data Lake and Data Warehouse.
- Architect and build batch and real-time data streaming solutions using technologies like Talend, Informatica, Apache Kafka, or AWS Kinesis to support immediate data ingestion and processing requirements.
- Utilize and optimize a wide array of AWS data services, including but not limited to AWS S3, AWS Glue, AWS Redshift, AWS Lake Formation, and AWS EMR, to build and manage data pipelines.
- Collaborate with data architects, data scientists, and business stakeholders to understand data requirements and translate them into efficient data pipeline solutions.
- Ensure data quality, integrity, and security across all data pipelines and storage solutions.
- Monitor, troubleshoot, and optimize existing data pipelines for performance, cost-efficiency, and reliability.
- Develop and maintain comprehensive documentation for all ETL and streaming processes, data flows, and architectural designs.
- Implement data governance policies and best practices within the Data Lake and Data Warehouse environments.
- Mentor junior engineers and contribute to fostering a culture of technical excellence and continuous improvement.
- Stay abreast of emerging technologies and industry best practices in data engineering, ETL, and streaming.

Required Qualifications:
- 10+ years of progressive experience in data engineering, with a strong focus on ETL, ELT, and data pipeline development.
- Deep expertise in ETL tools: extensive hands-on experience with commercial or open-source ETL tools (Talend).
- Strong proficiency in data streaming technologies: proven experience with real-time data ingestion and processing using platforms such as AWS Glue, Apache Kafka, AWS Kinesis, or similar.
- Extensive AWS data services experience: proficiency with AWS S3 for data storage and management; hands-on experience with AWS Glue for ETL orchestration and data cataloging; strong knowledge of AWS Redshift for data warehousing and analytics; familiarity with AWS Lake Formation for building secure data lakes; experience with AWS EMR for big data processing is good to have.
- Data Warehouse (DWH) knowledge: strong background in traditional data warehousing concepts, dimensional modeling (star schema, snowflake schema), and DWH design principles.
- Programming languages: proficient in SQL and at least one scripting language (e.g., Python, Scala) for data manipulation and automation.
- Database skills: strong understanding of relational and NoSQL databases.
- Version control: experience with version control systems (e.g., Git).
- Problem-solving: excellent analytical and problem-solving skills with a keen eye for detail.
- Communication: strong verbal and written communication skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.

Preferred Qualifications:
- Certifications in AWS Data Analytics or other relevant areas.
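Not part of the posting, but as a hedged sketch of the batch orchestration such roles typically own, a minimal Apache Airflow DAG that runs an extract step followed by a load step. The DAG name, schedule, and task callables are illustrative stubs, not a description of this employer's pipelines.

```python
# Hypothetical daily ETL DAG: extract from a source system, then load to the warehouse.
# Uses only core Airflow operators; the callables are placeholder stubs.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull the day's orders from the source system to a staging area.
    print("extracting orders for", context["ds"])


def load_orders(**context):
    # Placeholder: run the warehouse load (e.g., a COPY or MERGE) for the same day.
    print("loading orders for", context["ds"])


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_orders", python_callable=load_orders)
    extract >> load
```

Real pipelines would replace the stubs with Glue jobs, Kafka/Kinesis consumers, or Redshift load steps, and add retries, alerting, and data quality checks between tasks.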

Posted 2 weeks ago

Apply

26.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Greetings from Live Connections! 😊

🌟 Live Connections Placements Pvt. Ltd., or LiveC as we are popularly known, is a 26+ year-old search and recruitment organization that specializes in finding and placing professionals across several sectors around the globe. We bring to the table cumulative recruitment experience built over two decades.
🔗 Follow for more: https://in.linkedin.com/company/live-connections

🏢 About Client: We are hiring for the largest global IT and business consulting services company, founded in 1976 and headquartered in Montreal, Canada. With over 90,000 professionals across 40+ countries, they deliver end-to-end IT services, including consulting, systems integration, application development, infrastructure management, and business process services. In India, it has a strong presence in cities like Hyderabad, Bangalore, and Chennai, offering a collaborative work environment, a focus on innovation, and opportunities to work on global projects across industries like finance, healthcare, telecom, and government.

🌟 Employment: Full-time
💼 Title: Data Engineer
📍 Work Location: Hyderabad, Bangalore, and Chennai
✨ Experience: 4-6 years
📅 Mode: Hybrid
⏳ Notice Period: ONLY Immediate to 30 Days (Last Working Day or Official Notice)
💸 Budget: 22 LPA (on a case-to-case basis)

Must-Have Skills:
- Python and PySpark - strong scripting and data processing experience.
- SQL - advanced proficiency in writing complex queries and data manipulation.
- AWS - hands-on experience with core AWS services like S3, Lambda, Glue, Redshift, EMR, etc.

👉 Apply now or tag someone who'd be an ideal fit! Share your latest CV to prashanth@livecjobs.com ✉
Do share references, and please share our contact with friends/colleagues who are looking for a change; maybe we can help them find one. Wish you all the best! 👍

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Andhra Pradesh, India

On-site

Source: LinkedIn

At PwC, our people in infrastructure focus on designing and implementing robust, secure IT systems that support business operations. They enable the smooth functioning of networks, servers, and data centres to optimise performance and minimise downtime. In infrastructure engineering at PwC, you will focus on designing and implementing robust and scalable technology infrastructure solutions for clients. Your work will involve network architecture, server management, and cloud computing.

Data Modeler Job Description:
Looking for candidates with a strong background in data modeling, metadata management, and data system optimization. You will be responsible for analyzing business needs, developing long-term data models, and ensuring the efficiency and consistency of our data systems.

Key areas of expertise include:
- Analyze and translate business needs into long-term solution data models.
- Evaluate existing data systems and recommend improvements.
- Define rules to translate and transform data across data models.
- Work with the development team to create conceptual data models and data flows.
- Develop best practices for data coding to ensure consistency within the system.
- Review modifications of existing systems for cross-compatibility.
- Implement data strategies and develop physical data models.
- Update and optimize local and metadata models.
- Utilize canonical data modeling techniques to enhance data system efficiency.
- Evaluate implemented data systems for variances, discrepancies, and efficiency.
- Troubleshoot and optimize data systems to ensure optimal performance.
- Strong expertise in relational and dimensional modeling (OLTP, OLAP).
- Experience with data modeling tools (Erwin, ER/Studio, Visio, PowerDesigner).
- Proficiency in SQL and database management systems (Oracle, SQL Server, MySQL, PostgreSQL).
- Knowledge of NoSQL databases (MongoDB, Cassandra) and their data structures.
- Experience working with data warehouses and BI tools (Snowflake, Redshift, BigQuery, Tableau, Power BI).
- Familiarity with ETL processes, data integration, and data governance frameworks.
- Strong analytical, problem-solving, and communication skills.

Qualifications:
- Bachelor's degree in Engineering or a related field.
- 3 to 5 years of experience in data modeling or a related field.
- 4+ years of hands-on experience with dimensional and relational data modeling.
- Expert knowledge of metadata management and related tools.
- Proficiency with data modeling tools such as Erwin, PowerDesigner, or Lucid.
- Knowledge of transactional databases and data warehouses.

Preferred Skills:
- Experience in cloud-based data solutions (AWS, Azure, GCP).
- Knowledge of big data technologies (Hadoop, Spark, Kafka).
- Understanding of graph databases and real-time data processing.
- Certifications in data management, modeling, or cloud data engineering.
- Excellent communication and presentation skills.
- Strong interpersonal skills to collaborate effectively with various teams.
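As a small, hedged illustration of the dimensional modeling this role centers on (not taken from the posting), a star schema with one fact table and two dimensions, created here in an in-memory SQLite database with placeholder names.

```python
# Hypothetical star schema: a sales fact table referencing date and product dimensions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key      INTEGER PRIMARY KEY,   -- surrogate key, e.g. 20240115
    full_date     TEXT NOT NULL,
    month         INTEGER NOT NULL,
    year          INTEGER NOT NULL
);

CREATE TABLE dim_product (
    product_key   INTEGER PRIMARY KEY,   -- surrogate key
    product_name  TEXT NOT NULL,
    category      TEXT NOT NULL
);

-- The fact table holds measures at the grain of one product sold on one date.
CREATE TABLE fact_sales (
    date_key      INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key   INTEGER NOT NULL REFERENCES dim_product(product_key),
    quantity      INTEGER NOT NULL,
    revenue       REAL NOT NULL
);
""")

# A typical analytical query joins the fact to its dimensions and aggregates.
rows = conn.execute("""
    SELECT d.year, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.category
""").fetchall()
print(rows)
```

The same schema shape carries over to Snowflake, Redshift, or BigQuery; what changes is mainly the surrogate key generation and the physical tuning (distribution, clustering, partitioning).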

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

What You'll Do
The Global Analytics and Insights (GAI) team is seeking an experienced Data Visualization Manager to lead our data-driven decision-making initiatives. The ideal candidate will have a strong background in Power BI and expert-level SQL proficiency to drive actionable insights, demonstrated leadership and mentoring experience, and an ability to drive innovation and manage complex projects. You will become an expert in Avalara's financial, marketing, sales, and operations data. This position reports to a Senior Manager.

What Your Responsibilities Will Be
- Define and execute the organization's BI strategy, ensuring alignment with business goals.
- Lead, mentor, and manage a team of BI developers and analysts, fostering continuous learning.
- Develop and implement robust data visualization and reporting solutions using Power BI.
- Optimize data models, dashboards, and reports to provide meaningful insights and support decision-making.
- Collaborate with business leaders, analysts, and cross-functional teams to gather and translate requirements into actionable BI solutions.
- Be a trusted advisor to business teams, identifying opportunities where BI can drive efficiencies and improvements.
- Ensure data accuracy, consistency, and integrity across multiple data sources.
- Stay updated with the latest advancements in BI tools, SQL performance tuning, and data visualization best practices.
- Define and enforce BI development standards, governance, and documentation best practices.
- Work closely with Data Engineering teams to define and maintain scalable data pipelines.
- Drive automation and optimization of reporting processes to improve efficiency.

What You'll Need To Be Successful
- 8+ years of experience in Business Intelligence, Data Analytics, or related fields.
- 5+ years of expert proficiency in Power BI, including DAX, Power Query, data modeling, and dashboard creation.
- 5+ years of strong SQL skills, with experience in writing complex queries, performance tuning, and working with large datasets.
- Familiarity with cloud-based BI solutions (e.g., Azure Synapse, AWS Redshift, Snowflake) is a plus.
- Understanding of ETL processes and data warehousing concepts.
- Strong problem-solving, analytical thinking, and decision-making skills.

How We'll Take Care Of You
Total Rewards: In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses.
Health & Wellness: Benefits vary by location but generally include private medical, life, and disability insurance.
Inclusive culture and diversity: Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship.
Learn more about our benefits by region here: Avalara North America

What You Need To Know About Avalara
We're Avalara. We're defining the relationship between tax and tech. We've already built an industry-leading cloud compliance platform, processing nearly 40 billion customer API calls and over 5 million tax returns a year, and this year we became a billion-dollar business. Our growth is real, and we're not slowing down until we've achieved our mission - to be part of every transaction in the world. We're bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we've designed, one that empowers our people to win. Ownership and achievement go hand in hand here. We instill passion in our people through the trust we place in them. We've been different from day one. Join us, and your career will be too.

We're An Equal Opportunity Employer
Supporting diversity and inclusion is a cornerstone of our company - we don't want people to fit into our culture, but to enrich it. All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national orientation, disability, sexual orientation, US Veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Presidio, Where Teamwork and Innovation Shape the Future

At Presidio, we're at the forefront of a global technology revolution, transforming industries through cutting-edge digital solutions and next-generation AI. We empower businesses - and their customers - to achieve more through innovation, automation, and intelligent insights.

The Role
The Presidio Senior Engineer will be responsible for driving the development of reliable, scalable, and high-performance data systems. This role requires a strong foundation in cloud platforms, data engineering best practices, and data warehousing. The ideal candidate has hands-on experience building robust ETL/ELT pipelines.

Responsibilities Include
- Design, develop, and maintain scalable ETL/ELT data pipelines for batch and real-time data processing.
- Build and optimise cloud-native data platforms and data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Design and implement data models, including normalised and dimensional models (star/snowflake schema).
- Collaborate with cross-functional teams to gather requirements and deliver reliable data solutions.
- Ensure data quality, consistency, governance, and security across data platforms.
- Optimise and tune SQL queries and data workflows for performance and cost efficiency.
- Lead or mentor junior data engineers and contribute to team-level planning and design.

Must-Have Qualifications
- Cloud expertise: strong experience with at least one cloud platform (AWS, Azure, or GCP).
- Programming: proficiency in Python, SQL, and shell scripting.
- Data warehousing and modeling: deep understanding of warehousing concepts and best practices.
- ETL/ELT pipelines: proven experience building pipelines using orchestration tools like Airflow or dbt.
- Experience with CI/CD tools and version control (Git).
- Familiarity with distributed data processing and performance optimisation.

Good-to-Have Skills
- Hands-on experience with UI-based ETL tools like Talend, Informatica, or Azure Data Factory.
- Exposure to visualisation and BI tools such as Power BI, Tableau, or Looker.
- Knowledge of data governance frameworks and metadata management tools (e.g., Collibra, Alation).
- Experience in leading data engineering teams or mentoring team members.
- Understanding of data security, access control, and compliance standards (e.g., GDPR, HIPAA).

Your future at Presidio
Joining Presidio means stepping into a culture of trailblazers - thinkers, builders, and collaborators - who push the boundaries of what's possible. With our expertise in AI-driven analytics, cloud solutions, cybersecurity, and next-gen infrastructure, we enable businesses to stay ahead in an ever-evolving digital world. Here, your impact is real. Whether you're harnessing the power of Generative AI, architecting resilient digital ecosystems, or driving data-driven transformation, you'll be part of a team that is shaping the future. Ready to innovate? Let's redefine what's next - together.

About Presidio
At Presidio, speed and quality meet technology and innovation. Presidio is a trusted ally for organizations across industries with a decades-long history of building traditional IT foundations and deep expertise in AI and automation, security, networking, digital transformation, and cloud computing. Presidio fills gaps, removes hurdles, optimizes costs, and reduces risk. Presidio's expert technical team develops custom applications, provides managed services, enables actionable data insights, and builds forward-thinking solutions that drive strategic outcomes for clients globally. For more information, visit www.presidio.com.

Presidio is committed to hiring the most qualified candidates to join our amazing culture. We aim to attract and hire top talent from all backgrounds, including underrepresented and marginalized communities. We encourage women, people of color, people with disabilities, and veterans to apply for open roles at Presidio. Diversity of skills and thought is a key component of our business success.

Recruitment Agencies, Please Note: Presidio does not accept unsolicited agency resumes/CVs. Do not forward resumes/CVs to our careers email address, to Presidio employees, or by any other means. Presidio is not responsible for any fees related to unsolicited resumes/CVs.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Description
The Data Engineer will help build and maintain the cloud Data Lake platform leveraging Databricks. Candidates will be expected to contribute to all stages of the data lifecycle, including data ingestion, data modeling, data profiling, data quality, data transformation, data movement, and data curation.
- Architect data systems that are resilient to disruptions and failures.
- Ensure high uptime for all data services.
- Bring modern technologies and practices into the system to improve reliability and support rapid scaling of the business's data needs.
- Scale up our data infrastructure to meet business needs.
- Develop production data pipeline patterns.
- Provide subject matter expertise and hands-on delivery of data acquisition, curation, and consumption pipelines on Azure.
- Maintain current and emerging state-of-the-art computer and cloud-based solutions and technologies.
- Build effective relationships with internal stakeholders.
- Familiarity with the technology stack available in the industry for metadata management: Data Governance, Data Quality, MDM, Lineage, Data Catalog, etc.
- Hands-on experience implementing analytics solutions leveraging Python, Spark SQL, Databricks Lakehouse Architecture, Kubernetes, and Docker.
- All other duties as assigned.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, Management Information Systems (MIS), Data Science, or a related field. Applicable years of experience may be substituted for the degree requirement.
- Up to 8 years of experience in software engineering.
- Experience with large and complex data projects, preferred.
- Experience with large-scale data warehousing architecture and data modeling, preferred.
- Worked with cloud-based architecture such as Azure Cloud, preferred.
- Experience working with big data technologies, e.g. Snowflake, Redshift, Synapse, Postgres, Airflow, Kafka, Spark, dbt, preferred.
- Experience implementing pub/sub and streaming use cases, preferred.
- Experience in design reviews, preferred.
- Experience influencing a team's technical and business strategy by making insightful contributions to team priorities and approaches, preferred.
- Working knowledge of relational databases, preferred.
- Expert in SQL and high-level languages such as Python, Java, or Scala, preferred.
- Demonstrated ability to analyze large data sets to identify gaps and inconsistencies in ETL pipelines and provide solutions for pipeline reliability and data quality, preferred.
- Experience in an infrastructure-as-code / CI/CD development environment, preferred.
- Proven ability to build, manage, and foster a team-oriented environment.
- Excellent communication (written and oral) and interpersonal skills.
- Excellent organizational, multi-tasking, and time-management skills.

Job: Engineering
Primary Location: India-Maharashtra-Mumbai
Schedule: Full-time
Travel: No
Req ID: 244483
Job Hire Type: Experienced

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Responsibilities
- 8+ years of experience in data engineering, with a minimum of 5 years in a leadership role.
- Proficiency in ETL/ELT processes and experience with ETL tools (Talend, Informatica, etc.).
- Expertise in Snowflake or similar cloud-based data platforms (e.g., Redshift, BigQuery).
- Strong SQL skills and experience with database tuning, data modeling, and schema design.
- Familiarity with programming languages like Python or Java for data processing.
- Knowledge of data governance and compliance standards.
- Excellent communication and project management skills, with a proven ability to prioritize and manage multiple projects simultaneously.

Location: Gurgaon, 3-4 days work from office; meals and transport provided free.

Qualifications
- Bachelor's or Master's degree in IT or equivalent.
- Excellent verbal and written communication skills.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Responsibilities

Job Description
- Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
- Work with databases of varying scales, including small-scale databases and databases involving big data processing.
- Work on data security and compliance by implementing access controls, encryption, and compliance standards.
- Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
- Migrate data from spreadsheets or other sources to a relational database system (e.g., PostgreSQL, MySQL) or cloud-based solutions like Google BigQuery.
- Develop import workflows and scripts to automate the data import process and ensure data accuracy and consistency.
- Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
- Work with the team to ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
- Monitor database health and identify and resolve issues.
- Collaborate with the full-stack web developer on the team to support the implementation of efficient data access and retrieval mechanisms.
- Implement data security measures to protect sensitive information and comply with relevant regulations.
- Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows.
- Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices.
- Explore third-party technologies as alternatives to legacy approaches for efficient data pipelines.
- Familiarize yourself with tools and technologies used in the team's workflow, such as Knime for data integration and analysis.
- Use Python for tasks such as data manipulation, automation, and scripting.
- Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines. Assume accountability for achieving development milestones.
- Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities.
- Collaborate with and assist fellow members of the Data Research Engineering Team as required.
- Perform tasks with precision and build reliable systems.
- Leverage online resources effectively, such as StackOverflow, ChatGPT, Bard, etc., while considering their capabilities and limitations.

Skills And Experience
- Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
- Knowledge of SQL and understanding of database design principles, normalization, and indexing.
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts to automate data import processes.
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Ability to work collaboratively in a team environment.
- Good and effective communication skills.
- Comfortable with autonomy and able to work independently.

Qualifications
5+ years of experience in database engineering.

Additional Information

Perks
- Day off on the 3rd Friday of every month (one long weekend each month).
- Monthly Wellness Reimbursement Program to promote health and well-being.
- Monthly Office Commutation Reimbursement Program.
- Paid paternity and maternity leaves.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Description AWS Fintech team is looking for a Data Engineering Manager to transform and optimize high-scale, world class financial systems that power the global AWS business. The success of these systems will fundamentally impact the profitability and financial reporting for AWS and Amazon. This position will play an integral role in leading programs that impact multiple AWS cost optimization initiatives. These programs will involve multiple development teams across diverse organizations to build sophisticated, highly reliable financial systems. These systems enable routine finance operations as well as machine learning, analytics, and GenAI reporting that enable AWS Finance to optimize profitability and free cash flow. This position requires a proactive, highly organized individual with an aptitude for data-driven decision making, a deep curiosity for learning new systems, and collaborative skills to work with both technical and financial teams. Key job responsibilities Build and lead a team of data engineers, application development engineers, and systems development engineers Drive execution of data engineering programs and projects Help our leadership team make challenging decisions by presenting well-reasoned and data-driven solution proposals and prioritizing recommendations. Identify and execute on opportunities for our organization to move faster in delivering innovations to our customers. This role has oncall responsibilities. A day in the life The successful candidate will build and grow a high-performing data engineering team to transform financial processes at Amazon. The candidate will be curious and interested in the capabilities of Large Language Model-based development tools like Amazon Q to help teams accelerate transformation of systems. The successful candidate will begin with execution to familiarize themselves with the space and then construct a strategic roadmap for the team to innovate. You thrive and succeed in an entrepreneurial environment, and are not hindered by ambiguity or competing priorities. You thrive driving strategic initiatives and also dig in deep to get the job done. About The Team The AWS FinTech team enables the growth of earth’s largest cloud provider by building world-class finance technology solutions for effective decision making. We build scalable long-term solutions that provide transparency into financial business insights while ensuring the highest standards of data quality, consistency, and security. We encourage a culture of experimentation and invest in big ideas and emerging technologies. We are a globally distributed team with software development engineers, data engineers, application developers, technical program managers, and product managers. We invest in providing a safe and welcoming environment where inclusion, acceptance, and individual values are honored. 
Basic Qualifications
- Experience managing a data or BI team
- 2+ years of experience processing data with a massively parallel technology (such as Redshift, Teradata, Netezza, Spark, or a Hadoop-based big data solution)
- 2+ years of experience with relational database technology (such as Redshift, Oracle, MySQL, or MS SQL)
- 2+ years of experience developing and operating large-scale data structures for business intelligence analytics (using ETL/ELT processes)
- 5+ years of data engineering experience
- Experience communicating to senior management and customers verbally and in writing
- Experience leading and influencing the data or BI strategy of your team or organization
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS

Preferred Qualifications
- Knowledge of the software development life cycle or agile development environments, with an emphasis on BI practices
- Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
- Experience with AWS tools and technologies (Redshift, S3, EC2)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - Amazon Dev Center India - Hyderabad
Job ID: A2961772

Posted 2 weeks ago


7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team—comprising solution architects, data scientists, engineers, product managers, and designers—collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Our ecosystem empowers rapid execution with plug-and-play tools, enabling scalable, AI-powered strategies that fast-track your digital transformation. With a focus on automation and seamless integration, we help you stay ahead—letting you focus on your core, while we accelerate your growth.

Responsibilities & Qualifications
- Data Architecture Design: Develop and implement scalable and efficient data architectures for batch and real-time data processing. Design and optimize data lakes, warehouses, and marts to support analytical and operational use cases.
- ETL/ELT Pipelines: Build and maintain robust ETL/ELT pipelines to extract, transform, and load data from diverse sources. Ensure pipelines are highly performant, secure, and resilient to handle large volumes of structured and semi-structured data.
- Data Quality and Governance: Establish data quality checks, monitoring systems, and governance practices to ensure the integrity, consistency, and security of data assets. Implement data cataloging and lineage tracking for enterprise-wide data transparency.
- Collaboration with Teams: Work closely with data scientists and analysts to provide accessible, well-structured datasets for model development and reporting. Partner with software engineering teams to integrate data pipelines into applications and services.
- Cloud Data Solutions: Architect and deploy cloud-based data solutions using platforms like AWS, Azure, or Google Cloud, leveraging services such as S3, BigQuery, Redshift, or Snowflake. Optimize cloud infrastructure costs while maintaining high performance.
- Data Automation and Workflow Orchestration: Utilize tools like Apache Airflow, n8n, or similar platforms to automate workflows and schedule recurring data jobs (see the sketch after this listing). Develop monitoring systems to proactively detect and resolve pipeline failures.
- Innovation and Leadership: Research and implement emerging data technologies and methodologies to improve team productivity and system efficiency. Mentor junior engineers, fostering a culture of excellence and innovation.

Required Skills
- Experience: 7+ years of overall experience in data engineering roles, with at least 2+ years in a leadership capacity. Proven expertise in designing and deploying large-scale data systems and pipelines.
- Technical Skills: Proficiency in Python, Java, or Scala for data engineering tasks. Strong SQL skills for querying and optimizing large datasets. Experience with data processing frameworks like Apache Spark, Beam, or Flink. Hands-on experience with ETL tools like Apache NiFi, dbt, or Talend. Experience with pub/sub and stream processing using Kafka, Kinesis, or similar.
- Cloud Platforms: Expertise in one or more cloud platforms (AWS, Azure, GCP) with a focus on data-related services.
- Data Modeling: Strong understanding of data modeling techniques (dimensional modeling, star/snowflake schemas).
- Collaboration: Proven ability to work with cross-functional teams and translate business requirements into technical solutions.

Preferred Skills
- Familiarity with data visualization tools like Tableau or Power BI to support reporting teams.
- Knowledge of MLOps pipelines and collaboration with data scientists.
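As referenced above, a minimal sketch of the kind of workflow orchestration this listing describes, assuming Apache Airflow 2.x; the DAG id, task names, and extract/load callables are illustrative placeholders, not part of the listing:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull data from a source system (API, database, S3 object, ...).
    print("extracting source data")


def load(**context):
    # Placeholder: write transformed data to the warehouse (Redshift, BigQuery, ...).
    print("loading into the warehouse")


# A daily pipeline with two dependent tasks; names are illustrative only.
with DAG(
    dag_id="daily_ingest_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```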

Posted 2 weeks ago


2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Description
Amazon is seeking a Business Intelligence Engineer (BIE) to support Vendor Investigation and Transaction Accuracy (VITA). VITA’s mission is to detect and prevent theft, fraud, abuse, and waste globally. We use advanced techniques to prevent erroneous payments and to protect Amazon shareholders. We are seeking a BIE to support our automation and data needs. The ideal candidate thrives in a fast-paced environment, relishes working with large transactional volumes and big datasets, and is passionate about data and analytics.

BIE L4 is focused on developing and maintaining small to mid-size BI solutions or components of larger solutions. Their primary sphere of influence extends within their immediate team, where they deliver data-driven solutions that directly inform team business decisions. The core technical requirements demand proficiency in SQL and data manipulation, along with the ability to create and maintain ETL processes. They should be capable of building data structures using schema definition languages and demonstrate competence with analytics visualization tools such as Excel or QuickSight. The role requires practical application of descriptive statistics and basic inferential statistics in data analysis.

BIE L4 builds and maintains various data artifacts including ETL processes, data models, and queries. They create reports that accurately answer business questions while ensuring their code is secure, stable, testable, and maintainable. A key aspect of their role involves automating manual processes where possible and thoroughly validating outputs against source data and business logic. BIE L4 must submit their analyses and code for review, conduct thorough testing, and handle data in accordance with Amazon policies. They are responsible for investigating data anomalies, troubleshooting issues, and either resolving them or ensuring proper handoff to the appropriate owner.

Key job responsibilities
- Translate business risks and needs into the development of analytics for fraud and risk detection
- Coordinate with technical teams as appropriate to develop and implement analytics and reporting needs
- Partner with stakeholders to gather requirements and integrate necessary data sources to support business analysis and reporting
- Has a quantitative or engineering background
- Understands how to use one or more industry analytics and metrics visualization tools (e.g. QuickSight); proficient in SQL; knowledgeable in a variety of methods for querying, processing, persisting, analyzing, and presenting data
- Understands one or more schema definition languages (e.g. DDL, SDL, XSD, RDF)
- Has a good understanding of data lineage, including sources of data, how metrics are aggregated, and how the resulting business intelligence is consumed, interpreted, and acted upon by the business
- Proficient in descriptive statistics (i.e. measures of distribution); familiar with inferential statistics (e.g. hypothesis testing, confidence intervals) and knows when such methods are appropriate
- Knowledgeable in methods for identifying trends in metrics; able to segment metrics along suitable dimensions to reveal deeper dynamics
- Able to dive deeply into technical and operational details of the business (e.g., key dependencies, business drivers/KPIs, developing actionable business insights) and contribute to a constructive technical discussion
- Coordinate with global team members to conduct deep dives, walk-throughs, and quality reviews of evidence to resolve complex problems
Basic Qualifications
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, Chi-squared)
- Experience with a scripting language (e.g., Python, Java, or R)

Preferred Qualifications
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI HYD 13 SEZ
Job ID: A2923272

Posted 2 weeks ago


4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are seeking a skilled and motivated Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with AWS storage solutions, particularly Redshift and S3, and a solid background in ETL development using SQL and Python. Proficiency in version control using GitHub is also essential.

Key Responsibilities
- Design, develop, and maintain scalable ETL pipelines to support data integration and analytics initiatives
- Build and manage data models in Amazon Redshift, ensuring high performance and efficient storage
- Ingest and manage structured and unstructured data from various sources into AWS S3 and Redshift (see the sketch after this listing)
- Optimize SQL queries and Python scripts to improve data processing efficiency and reliability
- Collaborate with data analysts, data scientists, and product teams to understand data needs and deliver robust solutions
- Maintain code versioning and documentation using GitHub, following best practices in code management and CI/CD
- Monitor data workflows and troubleshoot issues to ensure data integrity and availability

Required Skills & Qualifications
- 4 to 6 years of experience in data engineering or ETL development
- Strong experience with AWS Redshift and Amazon S3
- Proficient in SQL for data manipulation and query optimization
- Strong programming skills in Python for ETL scripting and automation
- Experience with version control systems, particularly GitHub
- Familiarity with data warehousing concepts and cloud data architecture
- Ability to work independently and in a collaborative team environment
- Strong analytical and problem-solving skills
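As referenced above, a minimal sketch of the kind of S3-to-Redshift ingestion this listing describes, assuming the standard Redshift COPY command issued from Python via psycopg2; the cluster endpoint, bucket, table, and IAM role are hypothetical placeholders:

```python
import psycopg2

# All connection details, object paths, and the IAM role below are placeholders.
COPY_SQL = """
    COPY analytics.staging_events
    FROM 's3://example-bucket/events/2024/06/01/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy-role'
    FORMAT AS PARQUET;
"""


def load_partition() -> None:
    """Issue a COPY so Redshift pulls the files directly from S3 in parallel."""
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="...",  # use Secrets Manager or IAM auth in practice
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(COPY_SQL)  # COPY runs inside a single transaction
    finally:
        conn.close()


if __name__ == "__main__":
    load_partition()
```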

Posted 2 weeks ago


4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Location: PAN India
Work Mode: Hybrid
Work Timing: 2 PM to 11 PM
Primary Skill: Data Engineer

- Experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases is required.
- Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Experience in Redshift is also required, along with other SQL DB experience.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture.
- Understanding of building an end-to-end data pipeline.
- Strong understanding of Kinesis, Kafka, CDK. Experience with Kafka and ECS is also required.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling is required.
- Experience in Node.js and CDK.

Responsibilities
- Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark (see the sketch after this listing).
- Design and implement a customizable data processing framework using Python/PySpark. This framework should be capable of handling diverse scenarios and evolving data processing requirements.
- Implement data pipelines for data ingestion, transformation, and extraction, leveraging AWS Cloud services.
- Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL), and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline.
- Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
- Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
- Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.

Qualifications

Must Have
- Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
- Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
- 4+ years of experience working with both relational and non-relational/NoSQL databases is required.
- Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Strong working experience in Redshift is required, along with other SQL DB experience.
- Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture.
- Complete understanding of building an end-to-end data pipeline.

Nice to Have
- Strong understanding of Kinesis, Kafka, CDK.
- A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
- Experience in Node.js and CDK.
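As referenced above, a minimal sketch of a metadata-driven Glue ingestion step in PySpark; the job arguments, S3 paths, and formats are hypothetical placeholders, not values from the listing:

```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# The job is driven by parameters rather than hard-coded paths, so one script
# can ingest many sources; the argument names below are illustrative.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "source_path", "target_path", "source_format"])

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the raw files described by the job arguments into a DynamicFrame.
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": [args["source_path"]]},
    format=args["source_format"],  # e.g. "json" or "parquet"
)

# Example transformation step: switch to a Spark DataFrame for standard operations.
df = source.toDF().dropDuplicates()

# Write curated output as Parquet for downstream Redshift/Athena consumption.
df.write.mode("append").parquet(args["target_path"])
```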

Posted 2 weeks ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: SSIS Data Engineer
Work Mode: Hybrid
Work Timings: 2 PM to 11 PM
Location: Chennai & Hyderabad

Primary Skills
- AWS Glue
- Hands-on AWS Cloud knowledge
- ETL concepts
- SSIS knowledge
- Scripting and programming

Secondary Skills
- AWS Redshift
- SQL proficiency

Detailed JD
This role centres on migrating SSIS packages to AWS Glue and suits a data engineer or architect with experience in ETL processes and cloud computing. It involves automating the migration of SSIS packages to AWS Glue using tools like the AWS Schema Conversion Tool (AWS SCT) and potentially developing custom connectors.

Responsibilities
- Migration planning and analysis
- AWS Glue job creation (see the sketch after this listing)
- Custom connector development
- Data transformation and validation
- Monitoring and maintenance
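A minimal sketch of what a simple SSIS data flow (SQL Server source to Redshift destination) might look like after migration to a Glue PySpark job; the JDBC URL, table names, Glue connection name, and temp directory are hypothetical placeholders:

```python
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Source: the table an SSIS OLE DB source used to read, now pulled over JDBC.
# All connection options below are placeholders.
orders = glue_context.create_dynamic_frame.from_options(
    connection_type="sqlserver",
    connection_options={
        "url": "jdbc:sqlserver://example-host:1433;databaseName=sales",
        "dbtable": "dbo.orders",
        "user": "etl_user",
        "password": "...",
    },
)

# Destination: write into Redshift through a pre-defined Glue connection,
# staging files in S3 as the Redshift connector requires.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=orders,
    catalog_connection="example-redshift-connection",  # hypothetical Glue connection name
    connection_options={"dbtable": "staging.orders", "database": "analytics"},
    redshift_tmp_dir=args["TempDir"],
)
```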

Posted 2 weeks ago


2.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

About the role
This role is part of a team that develops software to process data captured every day from over a quarter of a million computer and mobile devices worldwide, measuring panelists' activities as they surf the Internet via browsers or use mobile apps downloaded from Apple's and Google's stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gathers many biometric data points that the backend system can use to identify who is using the device and detect fraudulent behavior.

The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including development and testing, and is expected to coordinate, support, and work with multiple delocalized project teams in multiple regions. As a Software Engineer with our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day across 3 different AWS regions. Your role will involve implementing and maintaining robust, scalable solutions that leverage a Java-based system running in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members.

Responsibilities
- System Deployment: Build new features in the existing backend processing pipelines
- CI/CD Implementation: Leverage CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes
- Code Quality and Best Practices: Adhere to coding standards, best practices, and design principles. Participate in code reviews and provide constructive feedback to maintain high code quality
- Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores
- Team Collaboration: Follow best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development
- Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security

Key Skills
- Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field
- Proven experience (minimum 2 years) in Java development and scripting languages such as Python in an AWS Cloud environment
- Good experience with SQL and a database system such as Postgres
- Good understanding of CI/CD principles and tools; GitLab a plus
- Good problem-solving and debugging skills
- Good communication and collaboration skills, with the ability to communicate complex technical concepts and align the organization on decisions
- Utilizes team collaboration to contribute to innovative solutions efficiently

Other desirable skills
- Knowledge of networking principles and security best practices
- AWS certifications
- Experience with data warehouses, ETL, and/or data lakes very helpful
- Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, and OpsGenie a bonus

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.

Posted 2 weeks ago


Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:
- Junior Developer
- Data Engineer
- Senior Data Engineer
- Tech Lead
- Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:
- SQL
- ETL Tools
- Data Modeling
- Cloud Computing (AWS)
- Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift (see the sketch after this list). (medium)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
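As referenced above, a minimal sketch touching the DISTKEY/SORTKEY, COPY, and vacuuming questions, assuming psycopg2 against a Redshift cluster; every identifier, the bucket, and the IAM role are hypothetical placeholders:

```python
import psycopg2

# Placeholders throughout: swap in real cluster, table, bucket, and role values.
DDL = """
CREATE TABLE IF NOT EXISTS analytics.page_views (
    view_date   DATE,
    user_id     BIGINT,
    url         VARCHAR(2048)
)
DISTKEY (user_id)        -- co-locate a user's rows on one slice to reduce join shuffling
SORTKEY (view_date);     -- sort by date so range filters can skip blocks
"""

LOAD = """
COPY analytics.page_views
FROM 's3://example-bucket/page_views/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-copy-role'
FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="admin", password="...",
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute(DDL)
    cur.execute(LOAD)
    cur.execute("VACUUM analytics.page_views;")   # re-sort rows and reclaim space
    cur.execute("ANALYZE analytics.page_views;")  # refresh planner statistics
conn.close()
```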

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!
