
95 Redshift AWS Jobs - Page 4

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

18 - 25 Lacs

Noida, Gurugram, Bengaluru

Work from Office

Job Description

Role Expectations:
- Design, develop, and maintain robust, scalable, and efficient data pipelines (a minimal load sketch follows below)
- Monitor data workflows and systems to ensure reliability and performance
- Identify and troubleshoot issues related to data flow and database performance
- Collaborate with cross-functional teams to understand business requirements and translate them into data solutions
- Continuously optimize existing data processes and architectures

Qualifications:
- Programming languages: proficient in Python and SQL
- Databases: strong experience with Amazon Redshift, Aurora, and MySQL
- Data engineering: solid understanding of data warehousing concepts, ETL/ELT processes, and building scalable data pipelines
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
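For context on the Redshift loading pattern this listing centers on, here is a minimal, illustrative sketch using psycopg2 and Redshift's COPY command; the cluster endpoint, credentials, table, bucket, and IAM role are all hypothetical placeholders:

```python
import psycopg2

# Hypothetical connection details for illustration only.
conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",
)

# COPY is Redshift's bulk-load path: it ingests files from S3 in parallel,
# which is far faster than row-by-row INSERTs for pipeline loads.
copy_sql = """
    COPY sales_staging
    FROM 's3://my-bucket/incoming/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
    FORMAT AS PARQUET;
"""

with conn, conn.cursor() as cur:   # the context manager commits on success
    cur.execute(copy_sql)
conn.close()
```

Pipelines like those described here typically stage files in S3 and issue a COPY per batch, rather than streaming rows through the SQL connection.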

Posted 1 month ago

Apply

4.0 - 9.0 years

15 - 25 Lacs

Hyderabad, Chennai

Work from Office

Interested candidates can also apply via sanjeevan.natarajan@careernet.in

Role & responsibilities:
- Technical Leadership: lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms
- End-to-End Solution Ownership: architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks
- Data Pipeline Strategy: oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets
- Data Governance & Quality: enforce data validation, lineage, and quality checks across the data lifecycle; define standards for metadata, cataloging, and governance
- Orchestration & Automation: design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations (a minimal Airflow sketch follows below)
- Cloud Cost & Performance Optimization: implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks
- Security & Compliance: define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks
- Collaboration & Stakeholder Engagement: work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions
- Migration Leadership: drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime
- Mentorship & Growth: mentor junior engineers, contribute to talent development, and ensure continuous learning within the team

Preferred candidate profile:
- Python, SQL, PySpark, Databricks, AWS (mandatory)
- Leadership experience in data engineering/architecture
- Added advantage: experience in Life Sciences / Pharma
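As a rough illustration of the orchestration work mentioned above, here is a minimal Airflow DAG sketch (assuming Airflow 2.4+; the DAG name and task bodies are hypothetical stand-ins for real Databricks or Glue calls):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task callables; a real pipeline would trigger Databricks
# jobs or Glue/EMR steps here instead of printing.
def ingest():
    print("pull raw files into the landing zone")

def transform():
    print("run PySpark transformations")

with DAG(
    dag_id="daily_ingest",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    ingest_task >> transform_task     # ordering: ingest before transform
```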

Posted 1 month ago

Apply

7.0 - 10.0 years

15 - 20 Lacs

Chennai

Work from Office

Job Title: Data Architect / Engagement Lead
Location: Chennai
Reports To: CEO

About the Company: Ignitho Inc. is a leading AI and data engineering company with a global presence, including offices in the US, UK, India, and Costa Rica. Visit our website to learn more about our work and culture: www.ignitho.com. Ignitho is a portfolio company of Nuivio Ventures Inc., a venture builder dedicated to developing Enterprise AI product companies across various domains, including AI, Data Engineering, and IoT. Learn more about Nuivio at: www.nuivio.com.

Job Summary: As the Data Architect and Engagement Lead, you will define the data architecture strategy and lead client engagements, ensuring alignment between data solutions and business goals. This dual role blends technical leadership with client-facing responsibilities.

Key Responsibilities:
- Design scalable data architectures, including storage, processing, and integration layers
- Lead technical discovery and requirements-gathering sessions with clients
- Provide architectural oversight for data and AI solutions
- Act as a liaison between technical teams and business stakeholders
- Define data governance, security, and compliance standards

Required Qualifications:
- Bachelor's or Master's in Computer Science, Information Systems, or similar
- 7+ years of experience in data architecture, with client-facing experience
- Deep knowledge of data modelling, cloud data platforms (Snowflake/BigQuery/Redshift/Azure), and orchestration tools
- Excellent communication, stakeholder management, and technical leadership skills
- Familiarity with AI/ML systems and their data requirements is a strong plus

Posted 2 months ago

Apply

3.0 - 6.0 years

9 - 18 Lacs

Bengaluru

Work from Office

- 3+ years of experience as a Data Engineer
- Experience in AWS cloud services: EC2, S3, IAM
- Experience with AWS Glue, DMS, RDBMS, and MPP databases like Snowflake and Redshift
- Knowledge of data modelling and ETL processes

This role is 5 days work-from-office. Please apply only if you are open to working from the office. Only immediate joiners required.

Posted 2 months ago

Apply

4.0 - 8.0 years

9 - 11 Lacs

Hyderabad

Remote

Role: Data Engineer (ETL Processes, SSIS, AWS)
Duration: Full-time
Location: Remote
Working hours: 4:30 AM to 10:30 AM IST shift

Note: We need an ETL engineer for MS SQL Server Integration Services, working the 4:30 AM to 10:30 AM IST shift.

Roles & Responsibilities:
- Design, develop, and maintain ETL processes using SQL Server Integration Services (SSIS)
- Create and optimize complex SQL queries, stored procedures, and data transformation logic on Oracle and SQL Server databases
- Build scalable and reliable data pipelines using AWS services (e.g., S3, Glue, Lambda, RDS, Redshift); a minimal sketch follows below
- Develop and maintain Linux shell scripts to automate data workflows and perform system-level tasks
- Schedule, monitor, and troubleshoot batch jobs using tools like Control-M, AutoSys, or cron
- Collaborate with stakeholders to understand data requirements and deliver high-quality integration solutions
- Ensure data quality, consistency, and security across systems
- Maintain detailed documentation of ETL processes, job flows, and technical specifications
- Experience with job scheduling tools such as Control-M and/or AutoSys
- Exposure to version control tools (e.g., Git) and CI/CD pipelines
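To illustrate the AWS side of such a pipeline, here is a minimal sketch of an S3-triggered Lambda handler using boto3; the processed/ prefix and the copy-then-delete step are hypothetical stand-ins for a real transformation:

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by an S3 put event; moves each new object under a
    processed/ prefix as a stand-in for a real transformation step."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Copy into the (hypothetical) processed/ prefix, then remove
        # the original so the landing prefix stays clean.
        s3.copy_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
        s3.delete_object(Bucket=bucket, Key=key)

    return {"statusCode": 200, "body": json.dumps("ok")}
```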

Posted 2 months ago

Apply

5.0 - 6.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

AI/ML and AWS-based solutions: Amazon SageMaker, Python and ML libraries, data engineering on AWS, AI/ML algorithms and model deployment strategies, and DevOps tooling (CI/CD, CloudFormation, Terraform). AWS Certified Machine Learning certification preferred. Exposure to generative AI, real-time inference, and edge deployment.

Posted 2 months ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Bengaluru

Hybrid

Greetings from tsworks Technologies India Pvt. We are hiring a Sr. Data Engineer - Snowflake with AWS. If you are interested, please share your CV with mohan.kumar@tsworks.io

Position: Senior Data Engineer
Experience: 9+ years
Location: Bengaluru, India (Hybrid)

Mandatory Required Qualifications:
- Strong proficiency in AWS data services such as S3 buckets, Glue and the Glue Catalog, EMR, Athena, Redshift, DynamoDB, QuickSight, etc.
- Strong hands-on experience building data lakehouse solutions on Snowflake, using features such as streams, tasks, dynamic tables, data masking, and data exchange
- Hands-on experience with scheduling tools such as Apache Airflow, dbt, and AWS Step Functions, and with data governance products such as Collibra
- Expertise in DevOps and CI/CD implementation
- Excellent communication skills

In this role, you will:
- Design, implement, and manage scalable and efficient data architecture on the AWS cloud platform
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL)
- Perform complex data transformations and processing using PySpark (AWS Glue, EMR, or Databricks), Snowflake's data processing capabilities, or other relevant tools (a minimal Glue sketch follows below)
- Work hands-on with data lake table formats such as Apache Hudi, Delta Lake, or Iceberg
- Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs
- Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions
- Integrate data from various sources, both internal and external, ensuring data quality and consistency

Skills & Knowledge:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 9+ years of experience in Information Technology, designing, developing, and executing solutions
- 4+ years of hands-on experience designing and executing data solutions on AWS and Snowflake cloud platforms as a Data Engineer
- Strong proficiency in AWS services such as Glue, EMR, and Athena, plus Databricks, with file formats such as Parquet and Avro
- Hands-on experience in data modelling and in batch and real-time pipelines, using Python, Java, or JavaScript, and experience working with RESTful APIs
- Hands-on experience handling real-time data streams from Kafka or Kinesis
- Expertise in DevOps and CI/CD implementation
- Hands-on experience with SQL and NoSQL databases
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems
- Knowledge of data quality, governance, and security best practices
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows
- Hands-on experience working in an Agile setting
- Self-driven, naturally curious, and able to adapt to a fast-paced work environment
- Able to articulate, create, and maintain technical and non-technical documentation
- AWS and Snowflake certifications are preferred
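By way of illustration, a minimal AWS Glue job sketch for the PySpark transformation work described above; the catalog database, table, and bucket names are hypothetical:

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table names),
# drop obviously bad rows, and write curated Parquet back to S3.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)
clean = orders.toDF().where("order_id IS NOT NULL")

clean.write.mode("overwrite").parquet("s3://my-bucket/curated/orders/")
job.commit()
```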

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Position: Experienced Data Engineer

We are seeking a skilled and experienced Data Engineer to join our fast-paced and innovative Data Science team. This role involves building and maintaining data pipelines across multiple cloud-based data platforms.

Requirements: A minimum of 5 years of total experience, with at least 3-4 years specifically in Data Engineering on a cloud platform.

Key Skills & Experience:
- Proficiency with AWS services such as Glue, Redshift, S3, Lambda, RDS, Amazon Aurora, DynamoDB, EMR, Athena, Data Pipeline, and Batch
- Strong expertise in:
  - SQL and Python
  - DBT and Snowflake
  - OpenSearch, Apache NiFi, and Apache Kafka
- In-depth knowledge of ETL data patterns and Spark-based ETL pipelines
- Advanced skills in infrastructure provisioning using Terraform and other Infrastructure-as-Code (IaC) tools
- Hands-on experience with cloud-native delivery models, including PaaS, IaaS, and SaaS
- Proficiency in Kubernetes, container orchestration, and CI/CD pipelines
- Familiarity with GitHub Actions, GitLab, and other leading DevOps and CI/CD solutions
- Experience with orchestration tools such as Apache Airflow and serverless/FaaS services
- Exposure to NoSQL databases is a plus

Posted 2 months ago

Apply

7.0 - 12.0 years

10 - 20 Lacs

Hyderabad

Remote

Job Title: Senior Data Engineer
Location: Remote
Job Type: Full-time
Experience Level: 7+ years

About the Role: We are seeking a highly skilled Senior Data Engineer to join our team in building a modern data platform on AWS. You will play a key role in transitioning from legacy systems to a scalable, cloud-native architecture using technologies like Apache Iceberg, AWS Glue, Redshift, and Atlan for governance. This role requires hands-on experience across both legacy (e.g., Siebel, Talend, Informatica) and modern data stacks.

Responsibilities:
- Design, develop, and optimize data pipelines and ETL/ELT workflows on AWS
- Migrate legacy data solutions (Siebel, Talend, Informatica) to modern AWS-native services
- Implement and manage a data lake architecture using Apache Iceberg and AWS Glue (a minimal sketch follows below)
- Work with Redshift for data warehousing solutions, including performance tuning and modelling
- Apply data quality and observability practices using Soda or similar tools
- Ensure data governance and metadata management using Atlan (or other tools like Collibra or Alation)
- Collaborate with data architects, analysts, and business stakeholders to deliver robust data solutions
- Build scalable, secure, and high-performing data platforms supporting both batch and real-time use cases
- Participate in defining and enforcing data engineering best practices

Required Qualifications:
- 7+ years of experience in data engineering and data pipeline development
- Strong expertise with AWS services, especially Redshift, Glue, S3, and Athena
- Proven experience with Apache Iceberg or similar open table formats (like Delta Lake or Hudi)
- Experience with legacy tools like Siebel, Talend, and Informatica
- Knowledge of data governance tools like Atlan, Collibra, or Alation
- Experience implementing data quality checks using Soda or equivalent
- Strong SQL and Python skills; familiarity with Spark is a plus
- Solid understanding of data modeling, data warehousing, and big data architectures
- Strong problem-solving skills and the ability to work in an Agile environment
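For context, a minimal PySpark sketch of writing an Iceberg table registered in the AWS Glue Data Catalog (Spark 3 with the Iceberg runtime jar on the classpath is assumed; catalog, namespace, bucket, and table names are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("iceberg-demo")
    # Register a Spark catalog named "glue" backed by the Glue Data Catalog.
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://my-bucket/warehouse/")
    .getOrCreate()
)

# Iceberg tables support schema evolution and time travel; here we just
# create one and append a couple of rows.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue.analytics.events (
        event_id BIGINT, event_type STRING, ts TIMESTAMP
    ) USING iceberg
""")

df = (
    spark.createDataFrame([(1, "click"), (2, "view")], ["event_id", "event_type"])
    .withColumn("ts", F.current_timestamp())
)
df.writeTo("glue.analytics.events").append()
```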

Posted 2 months ago

Apply

3.0 - 5.0 years

12 - 14 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Key Responsibilities:
- Design, develop, and maintain data pipelines and ETL workflows on the AWS platform
- Work with AWS services like S3, Glue, Lambda, Redshift, EMR, and Athena for data ingestion, transformation, and analytics
- Collaborate with Data Scientists, Analysts, and Business teams to understand data requirements
- Optimize data workflows for performance, scalability, and reliability
- Troubleshoot data issues, monitor jobs, and ensure data quality and integrity
- Write efficient SQL queries and automate data processing tasks
- Implement data security and compliance best practices
- Maintain technical documentation and data pipeline monitoring dashboards

Required Skills:
- 3 to 5 years of hands-on experience as a Data Engineer on AWS Cloud
- Strong expertise with AWS data services: S3, Glue, Redshift, Athena, EMR, Lambda
- Proficiency in SQL and Python or Scala for data processing and scripting
- Experience with ETL tools and frameworks on AWS
- Understanding of data warehousing concepts and architecture
- Familiarity with CI/CD for data pipelines is a plus
- Strong problem-solving and communication skills
- Ability to work in an Agile environment and handle multiple priorities

Posted 2 months ago

Apply

5.0 - 9.0 years

0 Lacs

Nagpur

Work from Office

Role & responsibilities Job Role- AWS Data Engineer(L3) Experience-7+ years Location-Nagpur 5+ years of microservices development experience in two of these: Python, Java, Scala 5+ years of experience building data pipelines, CICD pipelines, and fit for purpose data stores 5+ years of experience with Big Data Technologies: Apache Spark, Hadoop, or Kafka 3+ years of experience with Relational & Non-relational Databases: Postgres, MySQL, NoSQL (DynamoDB or MongoDB) 3+ years of experience working with data consumption patterns 3+ years of experience working with automated build and continuous integration systems 3+ years of experience working with data consumption patterns 2+ years of experience with search and analytics platforms: OpenSearch or ElasticSearch 2+ years of experience in Cloud technologies: AWS (Terraform, S3, EMR, EKS, EC2, Glue, Athena) Exposure to data-warehousing products: Snowflake or Redshift Exposure to Relation Data Modelling, Dimensional Data Modeling & NoSQL Data Modelling concepts.

Posted 2 months ago

Apply

12.0 - 22.0 years

25 - 40 Lacs

Bangalore Rural, Bengaluru

Work from Office

Role & responsibilities Requirements: Data Modeling (Conceptual, Logical, Physical)- Minimum 5 years Database Technologies (SQL Server, Oracle, PostgreSQL, NoSQL)- Minimum 5 years Cloud Platforms (AWS, Azure, GCP) - Minimum 3 Years ETL Tools (Informatica, Talend, Apache Nifi) - Minimum 3 Years Big Data Technologies (Hadoop, Spark, Kafka) - Minimum 5 Years Data Governance & Compliance (GDPR, HIPAA) - Minimum 3 years Master Data Management (MDM) - Minimum 3 years Data Warehousing (Snowflake, Redshift, BigQuery)- Minimum 3 years API Integration & Data Pipelines - Good to have. Performance Tuning & Optimization - Minimum 3 years business Intelligence (Power BI, Tableau)- Minimum 3 years Job Description: We are seeking experienced Data Architects to design and implement enterprise data solutions, ensuring data governance, quality, and advanced analytics capabilities. The ideal candidate will have expertise in defining data policies, managing metadata, and leading data migrations from legacy systems to Microsoft Fabric/DataBricks/ . Experience and deep knowledge about at least one of these 3 platforms is critical. Additionally, they will play a key role in identifying use cases for advanced analytics and developing machine learning models to drive business insights. Key Responsibilities: 1. Data Governance & Management Establish and maintain a Data Usage Hierarchy to ensure structured data access. Define data policies, standards, and governance frameworks to ensure consistency and compliance. Implement Data Quality Management practices to improve accuracy, completeness, and reliability. Oversee Metadata and Master Data Management (MDM) to enable seamless data integration across platforms. 2. Data Architecture & Migration Lead the migration of data systems from legacy infrastructure to Microsoft Fabric. Design scalable, high-performance data architectures that support business intelligence and analytics. Collaborate with IT and engineering teams to ensure efficient data pipeline development. 3. Advanced Analytics & Machine Learning Identify and define use cases for advanced analytics that align with business objectives. Design and develop machine learning models to drive data-driven decision-making. Work with data scientists to operationalize ML models and ensure real-world applicability. Required Qualifications: Proven experience as a Data Architect or similar role in data management and analytics. Strong knowledge of data governance frameworks, data quality management, and metadata management. Hands-on experience with Microsoft Fabric and data migration from legacy systems. Expertise in advanced analytics, machine learning models, and AI-driven insights. Familiarity with data modelling, ETL processes, and cloud-based data solutions (Azure, AWS, or GCP). Strong communication skills with the ability to translate complex data concepts into business insights. Preferred candidate profile Immediate Joiner

Posted 2 months ago

Apply

5 - 10 years

15 - 20 Lacs

Hyderabad

Work from Office

DBA Role:
- Expertise in writing and optimizing queries for performance, including but not limited to Redshift/Postgres/SQL/BigQuery, and using query plans (a minimal sketch follows below)
- Expertise in database permissions, including but not limited to Redshift/BigQuery/Postgres/SQL/Windows AD
- Knowledge of database design; ability to work with data architects and other IT specialists to set up, maintain, and monitor data networks, storage, and metrics
- Expertise in backup and recovery, including AWS Redshift snapshot restores
- Redshift (provisioned and serverless) configuration and creation
- Redshift Workload Management and Redshift table statistics
- Experience working with third-party vendors; able to coordinate with third parties and internal stakeholders to troubleshoot issues
- Experience working with internal stakeholders and business partners on both long- and short-term projects related to efficiency, optimization, and cost reduction
- Expertise in database management best practices and IT security best practices

Experience with the following is a plus:
- Harness
- Git
- CloudWatch
- Cloudability
- Other monitoring dashboard configurations

Also required:
- Experience with a variety of computer information systems
- Excellent attention to detail
- Problem-solving and critical thinking
- Ability to explain complex ideas in simple terms
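As a rough sketch of the query-plan and table-statistics work described above, here is how one might pull both from Redshift with psycopg2; connection details, the example query, and table names are hypothetical:

```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="dba_user", password="...",
)

with conn.cursor() as cur:
    # EXPLAIN shows the plan Redshift will use: join order, data
    # distribution steps (DS_DIST_*), and estimated costs.
    cur.execute(
        "EXPLAIN SELECT c.region, SUM(o.amount) "
        "FROM orders o JOIN customers c USING (customer_id) "
        "GROUP BY c.region;"
    )
    for (line,) in cur.fetchall():
        print(line)

    # SVV_TABLE_INFO flags tables needing VACUUM/ANALYZE: a high
    # unsorted percentage or stale statistics hurt query performance.
    cur.execute(
        'SELECT "table", unsorted, stats_off '
        "FROM svv_table_info ORDER BY stats_off DESC LIMIT 10;"
    )
    print(cur.fetchall())

conn.close()
```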

Posted 2 months ago

Apply

5 - 9 years

12 - 22 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

AWS Data Engineer

To apply, use the link below: https://career.infosys.com/jobdesc?jobReferenceCode=INFSYS-EXTERNAL-210775&rc=0

Job Profile:
- 5 to 9 years of significant experience designing and implementing scalable data engineering solutions on AWS
- Strong proficiency in the Python programming language
- Expertise in serverless architecture and AWS services such as Lambda, Glue, Redshift, Kinesis, SNS, SQS, and CloudFormation
- Experience with Infrastructure as Code (IaC) using the AWS CDK for defining and provisioning AWS resources (a minimal sketch follows below)
- Proven leadership skills with the ability to mentor and guide junior team members
- Excellent understanding of data modeling concepts and experience with tools like ER/Studio
- Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment
- Experience with Apache Airflow for orchestrating data pipelines is a plus
- Knowledge of Data Lakehouse, dbt, or the Apache Hudi data format is a plus

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Redshift
- Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs

Desired Candidate Profile:
- 5-9 years of experience in an IT industry setting with expertise in Python (PySpark)
- Strong understanding of the AWS ecosystem, including S3, Glue, Lambda, and Redshift
- Bachelor's degree in any specialization (B.Tech/B.E.)
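To illustrate the CDK-based IaC work mentioned above, here is a minimal AWS CDK (v2, Python) sketch that provisions an S3 bucket and a Glue database; the stack, bucket, and database names are hypothetical:

```python
from aws_cdk import App, RemovalPolicy, Stack, aws_glue as glue, aws_s3 as s3
from constructs import Construct

class DataPlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Raw landing bucket; RemovalPolicy.RETAIN would be safer in production.
        s3.Bucket(
            self, "RawBucket",
            versioned=True,
            removal_policy=RemovalPolicy.DESTROY,
        )

        # Glue Data Catalog database for crawled/curated tables.
        glue.CfnDatabase(
            self, "AnalyticsDb",
            catalog_id=self.account,
            database_input=glue.CfnDatabase.DatabaseInputProperty(
                name="analytics",
            ),
        )

app = App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```

Running `cdk deploy` against an app like this synthesizes a CloudFormation template and provisions both resources in one stack.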

Posted 2 months ago

Apply

5 - 10 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

To complement the existing cross-functional team, Zensar is looking for a Data Engineer who will assist in designing and implementing scalable and robust processes to support the data engineering capability. This role will be responsible for implementing and supporting large-scale data ecosystems across the Group. The incumbent will use best practices in cloud engineering, data management, and data storage to continue our drive to optimize the way data is stored, consumed, and ultimately democratized, and will engage with stakeholders across the organization, applying data engineering practices to improve the way data is stored and consumed.

Role & responsibilities:
- Assist in designing and implementing scalable and robust processes for ingesting and transforming complex datasets
- Design, develop, construct, maintain, and support data pipelines for ETL from a multitude of sources
- Create blueprints for data management systems to centralize, protect, and maintain data sources
- Focus on data stewardship and curation, enabling data scientists to run their models and analyses to achieve the desired business outcomes
- Ingest large, complex data sets that meet functional and non-functional requirements
- Enable the business to work with large volumes of data in diverse formats, and in doing so enable innovative solutions
- Design and build bulk and delta data lift patterns for optimal extraction, transformation, and loading of data
- Support the organisation's cloud strategy and align to the data architecture and governance, including the implementation of data governance practices
- Engineer data in the appropriate formats for downstream customers, risk and product analytics, or enterprise applications
- Assist in identifying, designing, and implementing robust process-improvement activities to drive efficiency and automation for greater scalability, including evaluating new solutions and new ways of working and staying at the forefront of emerging technologies
- Work with various stakeholders across the organization to understand data requirements and apply technical knowledge of data management to solve key business problems
- Provide support in the operational environment with all relevant support teams for data services
- Provide input into the management of demand across the various data streams and use cases
- Create and maintain functional requirements and system specifications in support of data architecture and detailed design specifications for current and future designs
- Support testing and deployment of new services and features
- Provide technical leadership to junior data engineers in the team

Preferred candidate profile:
- A degree in Computer Science, Business Informatics, Mathematics, Statistics, Physics, or Engineering
- 3+ years of data engineering experience
- 3+ years of experience with data warehouse technical architectures, ETL/ELT, and reporting/analytics tools, including but not limited to any of the following combinations: (1) SSIS, SSRS, or similar; (2) ETL frameworks; (3) Spark; (4) AWS data builds
- Proficiency in at least one of Python or Java
- Some experience with R, AWS, XML, JSON, and cron is beneficial
- Experience designing and implementing cloud (AWS) solutions, including use of the available APIs
- Knowledge of engineering and operational excellence using standard methodologies
- Best practices in software engineering, data management, data storage, data computing, and distributed systems to solve business problems with data

Posted 2 months ago

Apply

6 - 10 years

15 - 27 Lacs

Noida, Hyderabad, Bengaluru

Work from Office

Job Description:
1. Candidate should have good experience in all the functionalities of Dataiku.
2. Should have previous exposure to handling large datasets using Dataiku, and to preparing and calculating data.
3. Should be able to write queries to extract data from and connect to RDBMS, data lakes, and any other manual datasets.
4. Most importantly, should be able to understand existing developments and take over with minimal handover.
5. Must also be an expert in Excel, given that most of the information produced is furnished in Excel at the right level of detail to stakeholders for validation and discussion.
6. Must have an eye for accuracy, ensuring the flows are robust.
7. Banking process knowledge is good to have.

Note: Kindly go through the JD and apply accordingly; this is for PAN-India hiring.

Posted 2 months ago

Apply

6 - 10 years

15 - 22 Lacs

Noida, Hyderabad, Bengaluru

Work from Office

AWS Data Engineer with hands-on experience in Amazon Redshift and EMR, responsible for building scalable data pipelines and managing big data processing workloads. The role requires strong skills in Spark, Hive, and S3 on AWS cloud infrastructure.

Posted 2 months ago

Apply

3 - 8 years

3 - 8 Lacs

Hyderabad

Work from Office

Name of Organization: Jarus Technologies (India) Pvt. Ltd.
Organization Website: www.jarustech.com
Position: Senior Software Engineer - Data Warehouse
Domain Knowledge: Insurance (mandatory)
Job Type: Permanent
Location: Hyderabad - IDA Cherlapally, ECIL and Divyasree Trinity, Hi-Tech City
Experience: 3+ years
Education: B.E. / B.Tech. / M.C.A.
Resource Availability: Immediately, or within a maximum period of 30 days

Technical Skills:
• Strong knowledge of data warehousing concepts and technologies.
• Proficiency in SQL and other database languages.
• Experience with ETL tools (e.g., Informatica, Talend, SSIS).
• Familiarity with data modelling techniques.
• Experience in building dimensional data modelling objects, dimensions, and facts.
• Experience with cloud-based data warehouse platforms (e.g., AWS Redshift, Azure Synapse, Google BigQuery).
• Familiarity with optimizing SQL queries and improving ETL processes for better performance.
• Knowledge of data transformation, cleansing, and validation techniques.
• Experience with incremental loads, change data capture (CDC), and data scheduling (a minimal upsert sketch follows below).
• Comfortable with version control systems like Git.
• Familiarity with BI tools like Power BI for visualization and reporting.

Responsibilities:
• Design, develop, and maintain data warehouse systems and ETL (Extract, Transform, Load) processes.
• Develop and optimize data models and schemas to support business needs.
• Design and implement data warehouse architectures, including physical and logical designs.
• Design and develop dimensions, facts, and bridges.
• Ensure data quality and integrity throughout the ETL process.
• Design and implement relational and multidimensional database structures.
• Understand data structures and fundamental design principles of data warehouses.
• Analyze and modify data structures to adapt them to business needs.
• Identify and resolve data quality issues and data warehouse problems.
• Debug ETL processes and data warehouse queries.

Communication Skills:
• Good communication skills to interact with customers.
• Ability to understand requirements for implementing an insurance warehouse system.
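For context on the incremental-load/CDC item above, here is a minimal sketch of the common staged-upsert pattern for a warehouse (shown against Redshift via psycopg2; table, bucket, and role names are hypothetical):

```python
import psycopg2

conn = psycopg2.connect(host="...", port=5439, dbname="dw",
                        user="etl_user", password="...")

upsert_sql = """
-- 1. Load changed rows (e.g., from a CDC feed) into a staging table.
CREATE TEMP TABLE policy_stage (LIKE dim_policy);
COPY policy_stage FROM 's3://my-bucket/cdc/policy/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/LoadRole' FORMAT AS PARQUET;

-- 2. Delete target rows being replaced, then insert the new versions.
--    This delete-plus-insert pair is a widely used upsert pattern for
--    incremental warehouse loads.
DELETE FROM dim_policy
USING policy_stage
WHERE dim_policy.policy_id = policy_stage.policy_id;

INSERT INTO dim_policy SELECT * FROM policy_stage;
"""

with conn.cursor() as cur:
    cur.execute(upsert_sql)   # runs as one transaction
conn.commit()
conn.close()
```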

Posted 2 months ago

Apply

6 - 10 years

15 - 20 Lacs

Gurugram

Remote

Title: Looker Developer
Team: Data Engineering
Work Mode: Remote
Shift Time: 3:00 PM - 12:00 AM IST
Contract: 12 months

Key Responsibilities:
- Collaborate closely with engineers, architects, business analysts, product owners, and other team members to understand requirements and develop test strategies
- LookML proficiency: LookML is Looker's proprietary language for defining data models. Looker developers need to be able to write, debug, and maintain LookML code to create and manage data models, explores, and dashboards
- Data modeling expertise: understanding how to structure and organize data within Looker is essential. This involves mapping database schemas to LookML, creating views, and defining measures and dimensions
- SQL knowledge: Looker generates SQL queries under the hood. Developers need to be able to write SQL to understand the data, debug queries, and potentially extend LookML with custom SQL
- Looker environment: familiarity with the Looker interface, including the IDE, the LookML Validator, and SQL Runner, is necessary for efficient development

Education and/or Experience:
- Bachelor's degree in MIS, Computer Science, Information Technology, or equivalent required
- 6+ years of IT industry experience in the data management field

Posted 2 months ago

Apply

8 - 13 years

30 - 40 Lacs

Bengaluru

Hybrid

Key Responsibilities:
- Develop & optimize data pipelines: architect, build, and enhance scalable data pipelines for high-performance processing
- Troubleshoot & sustain: identify, diagnose, and resolve data pipeline issues to ensure operational efficiency
- Data architecture & storage: design efficient data storage and retrieval strategies using Postgres, Redshift, and other databases
- CI/CD pipeline management: implement and maintain continuous integration and deployment strategies for smooth workflow automation
- Scalability & performance tuning: ensure the robustness of data solutions while optimizing performance at scale
- Collaboration & leadership: work closely with cross-functional teams to ensure seamless data flow and lead engineering best practices
- Security & reliability: establish governance protocols and ensure data integrity across all pipelines

Technical Skills Required:
- Programming: expert in Python and Scala
- Big data technologies: proficient in Spark and Kafka (a minimal consumer sketch follows below)
- DevOps & cloud infrastructure: strong understanding of Kubernetes
- SQL & database management: skilled in SQL administration, Postgres, and Redshift
- CI/CD implementation: experience automating deployment processes for efficient workflows

Job Location: Bangalore
Notice Period: Immediate to 15 days
Interested candidates can share their profiles with marygracy.antony@ilink-systems.com
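As an illustration of the Kafka-to-database pipeline work this role describes, here is a minimal sketch of a Kafka consumer sinking messages into Postgres using kafka-python and psycopg2; the topic, table, and connection details are hypothetical:

```python
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="orders-sink",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = psycopg2.connect(host="localhost", dbname="dw",
                        user="etl_user", password="...")

# Each message is upserted by primary key so replays stay idempotent.
with conn.cursor() as cur:
    for msg in consumer:
        order = msg.value
        cur.execute(
            """
            INSERT INTO orders (order_id, amount)
            VALUES (%s, %s)
            ON CONFLICT (order_id) DO UPDATE SET amount = EXCLUDED.amount;
            """,
            (order["order_id"], order["amount"]),
        )
        conn.commit()
```

A production sink would batch commits and handle poison messages, but the consume-transform-upsert loop above is the core shape.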

Posted 2 months ago

Apply