Home
Jobs

1759 Redshift Jobs - Page 39

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
We are looking for a Senior Data Engineer with strong hands-on experience in PySpark, AWS cloud services, and SQL. The ideal candidate should have a passion for working with large-scale data pipelines and modern cloud data architectures, and possess excellent problem-solving skills.

Key Responsibilities:
- Design, develop, and optimize big data processing pipelines using PySpark.
- Build and maintain scalable data solutions on AWS (e.g., S3, Glue, Lambda, EMR, Redshift).
- Write efficient, complex SQL queries for data extraction, transformation, and reporting.
- Collaborate with data scientists, business analysts, and application teams to ensure seamless data flow.
- Implement best practices in data quality, security, and governance.
- Troubleshoot and resolve performance issues in Spark jobs and SQL queries.
- Document system architecture, data workflows, and operational procedures.
- Stay up to date with emerging technologies in data engineering and cloud.

Technical Skills Required:
- Strong proficiency in PySpark (at least 3 years of hands-on development experience)
- Solid experience working with AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena)
- Advanced SQL skills: writing, optimizing, and troubleshooting queries
- Experience with version control tools like Git
- Knowledge of data modeling and schema design for structured and semi-structured data
- Familiarity with CI/CD pipelines and automation tools is a plus

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8–12 years of total experience in data engineering or a related field.
- Minimum of 3 years of relevant experience in PySpark.

Key Attributes:
- Strong analytical and problem-solving skills
- Ability to work independently and collaboratively across teams
- Excellent communication (written and verbal) and interpersonal skills
- Flexible and adaptable in a fast-paced environment

Other Information:
Role: Senior Data Engineer - PySpark + AWS + SQL
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate
Employment Type: Full Time, Permanent
Key Skills: PySpark, AWS, SQL
Job Code: GO/JC/21438/2025
Recruiter Name: SPriya
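
As an illustration of the PySpark pipeline work this role describes, here is a minimal sketch that reads raw JSON from S3, cleans it, aggregates, and writes partitioned Parquet. The bucket paths and column names are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark ETL sketch: raw S3 JSON -> cleaned rows -> daily aggregate.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/")  # hypothetical path

cleaned = (
    raw.filter(F.col("order_id").isNotNull())           # drop malformed rows
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

daily = cleaned.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

# Partitioning by date keeps downstream Redshift/Athena scans cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/daily_orders/"
)
```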

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
We are looking for a Senior Data Engineer with strong hands-on experience in PySpark, AWS cloud services, and SQL. The ideal candidate should have a passion for working with large-scale data pipelines and modern cloud data architectures, and possess excellent problem-solving skills.

Key Responsibilities:
- Design, develop, and optimize big data processing pipelines using PySpark.
- Build and maintain scalable data solutions on AWS (e.g., S3, Glue, Lambda, EMR, Redshift).
- Write efficient, complex SQL queries for data extraction, transformation, and reporting.
- Collaborate with data scientists, business analysts, and application teams to ensure seamless data flow.
- Implement best practices in data quality, security, and governance.
- Troubleshoot and resolve performance issues in Spark jobs and SQL queries.
- Document system architecture, data workflows, and operational procedures.
- Stay up to date with emerging technologies in data engineering and cloud.

Technical Skills Required:
- Strong proficiency in PySpark (at least 3 years of hands-on development experience)
- Solid experience working with AWS services (such as S3, Glue, Lambda, EMR, Redshift, Athena)
- Advanced SQL skills: writing, optimizing, and troubleshooting queries
- Experience with version control tools like Git
- Knowledge of data modeling and schema design for structured and semi-structured data
- Familiarity with CI/CD pipelines and automation tools is a plus

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8–12 years of total experience in data engineering or a related field.
- Minimum of 3 years of relevant experience in PySpark.

Key Attributes:
- Strong analytical and problem-solving skills
- Ability to work independently and collaboratively across teams
- Excellent communication (written and verbal) and interpersonal skills
- Flexible and adaptable in a fast-paced environment

Other Information:
Role: Senior Data Engineer - PySpark + AWS + SQL
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Any Graduate
Employment Type: Full Time, Permanent
Key Skills: PySpark, AWS, SQL
Job Code: GO/JC/21438/2025
Recruiter Name: SPriya
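
This Bengaluru posting repeats the same requirements; to complement the pipeline sketch above, here is a hedged sketch of the "efficient, complex SQL" side run through Spark SQL. The table path and columns are illustrative assumptions.

```python
# Spark SQL sketch: rank days by revenue with a window function.
# The Parquet path and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

spark.read.parquet("s3://example-curated-bucket/daily_orders/") \
     .createOrReplaceTempView("daily_orders")

top_days = spark.sql("""
    SELECT order_date, revenue, revenue_rank
    FROM (
        SELECT order_date,
               revenue,
               RANK() OVER (ORDER BY revenue DESC) AS revenue_rank
        FROM daily_orders
    ) AS ranked
    WHERE revenue_rank <= 10
""")
top_days.show()
```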

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary:
We are looking for an experienced Backend Engineer with strong expertise in data engineering to join our team. In this role, you will be responsible for designing and developing scalable data delivery solutions within our AWS-based data warehouse ecosystem. Your work will support business intelligence initiatives by powering dashboards and analytics tools that provide key insights to support strategic decision-making, including enhancing market performance and optimizing songwriter partnerships.

Key Responsibilities:
- Design, develop, and maintain end-to-end ETL workflows and data integration pipelines using AWS tools.
- Collaborate closely with product managers, software developers, and infrastructure teams to deliver high-quality backend data solutions.
- Develop and refine stored procedures to ensure efficient data retrieval and transformation.
- Implement API integrations for seamless data exchange between systems.
- Continuously identify opportunities to automate and improve backend processes.
- Participate actively in Agile/Scrum teams, contributing to sprint planning, reviews, and retrospectives.
- Apply industry best practices to ensure clean, reliable, and scalable data operations.
- Communicate effectively with both technical stakeholders and cross-functional teams.
- Rapidly learn and implement new tools and technologies to meet evolving business needs.

Required Skills & Experience:
- 8–10 years of experience in backend or data engineering roles.
- Strong background in data architecture, including modeling, ingestion, and mining.
- Hands-on experience with AWS services, including S3, Glue, Data Pipeline, DMS, RDS, Redshift, and Lambda.
- Proficient in scripting and development using Python, Node.js, and SQL.
- Solid experience in data warehousing and big data environments.
- Familiarity with SQL Server and other relational database systems.
- Proven ability to work effectively in an Agile/Scrum environment.
- Strong problem-solving skills with a focus on delivering practical, scalable solutions.

Nice to Have:
- AWS certifications (e.g., AWS Certified Data Engineer or Solutions Architect).
- Exposure to CI/CD practices and DevOps tools.
- Understanding of data visualization platforms such as Tableau or Power BI.
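
Where the posting mentions stored procedures and backend data retrieval, one hedged way to drive that from Python is the Redshift Data API in boto3; the cluster, database, and procedure names below are illustrative assumptions.

```python
# Sketch: invoke a Redshift stored procedure asynchronously via the
# Redshift Data API, then poll for completion. Identifiers are hypothetical.
import time
import boto3

client = boto3.client("redshift-data", region_name="us-east-1")

run = client.execute_statement(
    ClusterIdentifier="example-cluster",    # hypothetical cluster
    Database="analytics",                   # hypothetical database
    DbUser="etl_user",                      # hypothetical user
    Sql="CALL refresh_daily_metrics();",    # hypothetical procedure
)

while True:
    status = client.describe_statement(Id=run["Id"])
    if status["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

print("statement ended with status:", status["Status"])
```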

Posted 2 weeks ago

Apply

4.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Description:
- Strong grip on both Python and JavaScript.
- Hands-on with FastAPI, Apache Spark, React JS, Node JS, and server-side JS.
- 4–5 years developing software using a wide range of Amazon Web Services and cloud technologies.
- Professional hands-on development experience and expertise in JavaScript frameworks like Dojo, jQuery, Ember, React, Angular, or equivalent.
- Hands-on with the JavaScript object-based model and programming, ECMAScript 6, TypeScript 4, and NPM.
- Expertise with Python and Python frameworks like FastAPI, Django, Flask, Celery, and SQLAlchemy.
- Hands-on with S3, Lambda, AWS Glue, Fargate, SQS, SNS, and EventBridge.
- Hands-on with containers: Docker, Kubernetes, ECS, EKS, EC2.
- Hands-on with PostgreSQL, AWS Redshift, Oracle, NoSQL, and MongoDB.
- Build automation with GitHub Actions, Jenkins, and shell scripts.
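
A minimal FastAPI sketch of the kind of Python service work this stack implies; the route and model are illustrative assumptions, not the client's actual API.

```python
# Minimal FastAPI service sketch; run with: uvicorn main:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Job(BaseModel):
    job_id: int
    title: str

# In-memory stand-in for a PostgreSQL/Redshift-backed store.
JOBS = {1: Job(job_id=1, title="Data Engineer")}

@app.get("/jobs/{job_id}", response_model=Job)
def read_job(job_id: int) -> Job:
    if job_id not in JOBS:
        raise HTTPException(status_code=404, detail="job not found")
    return JOBS[job_id]
```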

Posted 2 weeks ago

Apply

6.0 - 11.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary
We are looking for a Senior Analytics Engineer to drive data excellence and innovation in our organization. As a thought leader in data engineering and analytics principles, you will be responsible for designing, building, and optimizing our data infrastructure while ensuring cost efficiency, security, and scalability. You will play a crucial role in managing Databricks and AWS usage, ensuring budget adherence, and taking proactive measures to optimize costs. This role also requires expertise in ETL processes, large-scale data processing, analytics, and data-driven decision-making, along with strong analytical and leadership skills.

Responsibilities
- Act as a thought leader in data engineering and analytics, driving best practices and standards.
- Oversee cost management of Databricks and AWS, ensuring resource usage stays within allocated budgets and taking corrective actions when necessary.
- Design, implement, and optimize ETL pipelines for incremental data loading, ensuring seamless data ingestion, transformation, and performance tuning.
- Lead migration activities, ensuring smooth transitions while maintaining data integrity and availability.
- Handle massive data loads efficiently, optimizing storage, compute usage, and query performance.
- Adhere to Git principles for version control, following best practices for collaboration and deployment.
- Implement and manage DSR (Airflow) workflows to automate and schedule data pipelines efficiently.
- Ensure data security and compliance, especially when handling PII data, aligning with regulations like GDPR and HIPAA.
- Optimize query performance and data storage strategies to improve cost efficiency and speed of analytics.
- Collaborate with data analysts and business stakeholders to enhance analytics capabilities, enabling data-driven decision-making.
- Develop and maintain dashboards, reports, and analytical models to provide actionable insights for business and engineering teams.

Required Skills & Qualifications
- Four-year or graduate degree in Computer Science, Information Systems, or another related discipline, or commensurate work experience or demonstrated competence.
- 6-11 years of experience in Data Engineering, Analytics, Big Data, or related domains.
- Strong expertise in Databricks, AWS (S3, EC2, Lambda, RDS, Redshift, Glue, etc.), and cost optimization strategies.
- Hands-on experience with ETL pipelines, incremental data loads, and large-scale data processing.
- Proven experience in analyzing large datasets, deriving insights, and optimizing data workflows.
- Strong knowledge of SQL, Python, PySpark, and other data engineering and analytics tools.
- Strong problem-solving, analytical, and leadership skills.
- Experience with BI tools like Tableau, Looker, or Power BI for data visualization and reporting.

Preferred Certifications
- Certified Software Systems Engineer (CSSE)
- Certified Systems Engineering Professional (CSEP)

Cross-Org Skills
- Effective Communication
- Results Orientation
- Learning Agility
- Digital Fluency
- Customer Centricity

Impact & Scope
Impacts function and leads and/or provides expertise to functional project teams and may participate in cross-functional initiatives.

Complexity
Works on complex problems where analysis of situations or data requires an in-depth evaluation of multiple factors.

Disclaimer
This job description describes the general nature and level of work performed in this role. It is not intended to be an exhaustive list of all duties, skills, responsibilities, or knowledge. These may be subject to change, and additional functions may be assigned as needed by management.
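
For the Airflow scheduling responsibility above, a minimal sketch of a daily incremental-load DAG; the DAG id and load logic are illustrative assumptions, not this employer's actual workflow.

```python
# Airflow sketch: a daily DAG that loads only the run's date partition.
# DAG id, task body, and source/target names are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def load_increment(ds: str, **_):
    # 'ds' is the logical run date (YYYY-MM-DD); loading just that slice
    # keeps reruns idempotent and cheap.
    print(f"Loading partition {ds} from S3 into the warehouse")

with DAG(
    dag_id="incremental_load",        # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="load_increment",
        python_callable=load_increment,
        retries=2,
    )
```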

Posted 2 weeks ago

Apply

8.0 - 18.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Greetings from TCS! TCS is hiring a Data Architect.

Interview Mode: Virtual
Required Experience: 8-18 years
Work Location: PAN India

Data Architect:
- Technical Architect with experience in designing data platforms; experience in one of the major platforms such as Snowflake, Databricks, Azure ML, AWS data platforms, etc.
- Hands-on experience in ADF, HDInsight, Azure SQL, PySpark, Python, MS Fabric, and data mesh
- Good to have: Spark SQL, Spark Streaming, Kafka
- Hands-on experience in Databricks on AWS, Apache Spark, AWS S3 (Data Lake), AWS Glue, AWS Redshift / Athena
- Good to have: AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch

If interested, kindly send your updated CV and the details below by e-mail to srishti.g2@tcs.com:
Name:
E-mail ID:
Contact Number:
Highest qualification:
Preferred location:
Highest qualification university:
Current organization:
Total years of experience:
Relevant years of experience:
Any gap (career/education): mention number of months/years:
If any, reason for gap:
Is it a rebegin:
Previous organization name:
Current CTC:
Expected CTC:
Notice period:
Have you worked with TCS before (Permanent/Contract):

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

Remote

Position: Senior Database Administrator

Position Overview
As a Senior Database Administrator (DBA) at Intelex Technologies, you will play a critical role in managing and optimizing our MS SQL Server, Oracle, and PostgreSQL database environments. You will be responsible for the design, implementation, performance tuning, high availability, and security of our database infrastructure across cloud and on-premises deployments. Working within the DevOps & DataOps team, you will collaborate with developers, cloud engineers, and SREs to ensure seamless database operations supporting our mission-critical applications.

Responsibilities and Deliverables

Database Administration & Architecture
- Design, implement, and optimize databases across MS SQL Server, Oracle, and PostgreSQL environments.
- Participate in architecture/design reviews, ensuring database structures align with application needs and performance goals.
- Define and maintain best practices for schema design, indexing strategies, and query optimization.

Performance Tuning & Scalability
- Conduct proactive query tuning, execution plan analysis, and indexing strategies to optimize database performance.
- Monitor, troubleshoot, and resolve performance bottlenecks across MS SQL Server, Oracle, and PostgreSQL.
- Implement partitioning, replication, and caching to improve data access and efficiency.

High Availability, Replication & Disaster Recovery
- Design and implement HA/DR solutions for all supported databases, including MS Clustering, Oracle Data Guard, PostgreSQL Streaming Replication, and Always On Availability Groups.
- Perform capacity planning and ensure proper backup and recovery strategies are in place.
- Automate and test failover and recovery processes to minimize downtime.

Security & Compliance
- Implement role-based access control (RBAC), encryption, auditing, and compliance policies across all database environments.
- Ensure adherence to SOC 2, ISO 27001, GDPR, and HIPAA security standards.
- Collaborate with security teams to identify and mitigate vulnerabilities.

DevOps, CI/CD & Automation
- Integrate database changes into CI/CD pipelines, ensuring automated schema migrations and rollbacks.
- Use Terraform or other IaC tools for database provisioning and configuration management.
- Automate routine maintenance tasks, monitoring, and alerting using New Relic, PagerDuty, or similar tools.

Cloud & Data Technologies
- Manage cloud-based database solutions such as Azure SQL, Amazon RDS, Aurora, Oracle Cloud, and PostgreSQL on AWS/Azure.
- Work with NoSQL solutions like MongoDB when needed.
- Support data warehousing and analytics solutions (e.g., Snowflake, Redshift, SSAS).

Incident Response & On-Call Support
- Provide on-call support for database-related production incidents on a rotational basis.
- Conduct root cause analysis and implement long-term fixes for database-related issues.

Organizational Alignment
This is a highly collaborative role requiring close interactions with:
- DevOps & SRE teams to improve database scalability and monitoring.
- Developers to ensure efficient database designs and optimized queries.
- Cloud & Security teams to maintain compliance and security best practices.

Qualifications & Skills

Required
- 8+ years of experience managing MS SQL Server, Oracle, and PostgreSQL in enterprise environments.
- Expertise in database performance tuning, query optimization, and execution plan analysis.
- Strong experience with replication, clustering, and high-availability configurations.
- Hands-on experience with cloud databases in AWS or Azure (RDS, Azure SQL, Oracle Cloud, etc.).
- Solid experience with backup strategies, disaster recovery planning, and failover testing.
- Proficiency in T-SQL, PL/SQL, and PostgreSQL SQL scripting.
- Experience automating database tasks using PowerShell, Python, or Bash.

Preferred
- Experience with containerized database deployments such as Docker or Kubernetes.
- Knowledge of Kafka, AMQP, or event-driven architectures for handling high-volume transactions.
- Familiarity with Oracle Data Guard, GoldenGate, PostgreSQL Logical Replication, and Always On Availability Groups.
- Experience working in DevOps/SRE environments with CI/CD for database deployments.
- Exposure to big data technologies and analytical platforms.
- Certifications such as Oracle DBA Certified Professional, Microsoft Certified: Azure Database Administrator Associate, or AWS Certified Database – Specialty.

Education & Other Requirements
- Bachelor's or Master's degree in Computer Science, Data Engineering, or equivalent experience.
- This role requires a satisfactory Criminal Background Check and Public Safety Verification.

Why Join Intelex Technologies?
- Work with cutting-edge database technologies in a fast-paced, DevOps-driven environment.
- Make an impact by supporting critical EHS applications that improve workplace safety.
- Flexible remote work options and opportunities for professional growth.
- Collaborate with top-tier cloud, DevOps, and security experts to drive innovation.

Fortive Corporation Overview
Fortive’s essential technology makes the world stronger, safer, and smarter. We accelerate transformation across a broad range of applications including environmental, health and safety compliance, industrial condition monitoring, next-generation product design, and healthcare safety solutions. We are a global industrial technology innovator with a startup spirit. Our forward-looking companies lead the way in software-powered workflow solutions, data-driven intelligence, AI-powered automation, and other disruptive technologies. We’re a force for progress, working alongside our customers and partners to solve challenges on a global scale, from workplace safety in the most demanding conditions to groundbreaking sustainability solutions. We are a diverse team 17,000 strong, united by a dynamic, inclusive culture and energized by limitless learning and growth. We use the proven Fortive Business System (FBS) to accelerate our positive impact. At Fortive, we believe in you. We believe in your potential—your ability to learn, grow, and make a difference. At Fortive, we believe in us. We believe in the power of people working together to solve problems no one could solve alone. At Fortive, we believe in growth. We’re honest about what’s working and what isn’t, and we never stop improving and innovating. Fortive: For you, for us, for growth.

About Intelex
Since 1992, Intelex Technologies, ULC. has been a global leader in the development and support of software solutions for Environment, Health, Safety and Quality (EHSQ) programs. Our scalable, web-based software provides clients with unprecedented flexibility in managing, tracking and reporting on essential corporate information. Intelex software easily integrates with common ERP systems like SAP and PeopleSoft, creating a seamless solution for enterprise-wide information management.
Intelex’s friendly, knowledgeable staff ensures our almost 1,400 clients and over 3.5 million users from companies across the globe get the most out of our groundbreaking, user-friendly software solutions. Visit www.intelex.com to learn more.

We Are an Equal Opportunity Employer. Fortive Corporation and all Fortive Companies are proud to be equal opportunity employers. We value and encourage diversity and solicit applications from all qualified applicants without regard to race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity or expression, or other characteristics protected by law. Fortive and all Fortive Companies are also committed to providing reasonable accommodations for applicants with disabilities. Individuals who need a reasonable accommodation because of a disability for any part of the employment application process, please contact us at applyassistance@fortive.com.
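
As a small, hedged illustration of the query-plan analysis this DBA role calls for, here is a psycopg2 snippet that prints a PostgreSQL execution plan; the DSN and query are assumptions, and note that EXPLAIN ANALYZE actually executes the statement.

```python
# Sketch: inspect a PostgreSQL query plan with psycopg2.
# The DSN and the query under inspection are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=dba host=localhost")
with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE runs the query, so use it on reads, not writes.
    cur.execute(
        "EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM incidents WHERE site_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```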

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

Remote

Role: Python Data Analyst
Location: Ahmedabad, Noida, Pune, Bangalore, Hyderabad
Type: Permanent
Work Mode: Hybrid (2 days office, 3 days remote)

Job Description:
We are looking for a highly motivated Data Analyst with strong expertise in AWS cloud services and Python to join our analytics team. The ideal candidate will be responsible for extracting, transforming, and analyzing data to generate actionable business insights. You will work closely with data engineers, business stakeholders, and product teams to support data-driven decision-making.

Responsibilities:
- Analyze and interpret complex datasets to identify trends, patterns, and insights.
- Design and implement data pipelines and workflows using AWS services such as S3, Glue, Lambda, Athena, Redshift, and CloudWatch.
- Write efficient and reusable Python scripts for data wrangling, automation, and analytics.
- Collaborate with business stakeholders to gather requirements and develop analytical solutions.
- Ensure data accuracy, consistency, and quality through regular validation and monitoring.
- Document processes, analysis findings, and data workflows for transparency and future reference.

Requirements:
- Bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field.
- 4+ years of hands-on experience in data analysis or a related field.
- Strong proficiency in Python for data processing, analysis, and automation.
- Solid experience working with AWS cloud services, especially S3, Glue, Lambda, Redshift, and Athena.
- Proficiency in writing SQL queries for large datasets.
- Strong understanding of data structures, ETL pipelines, and data governance.
- Good communication and problem-solving skills.

Preferred Skills:
- Experience with Pandas, NumPy, Boto3, PySpark, or other Python libraries for data analysis.
- Familiarity with version control systems (Git) and CI/CD pipelines.
- Exposure to machine learning concepts or data science is a plus.
- Knowledge of data visualization tools like Tableau, Amazon QuickSight, or Power BI.
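
Since the posting pairs Python with Athena and S3, here is a hedged sketch of the typical analysis loop: submit an Athena query from boto3, poll for completion, then fetch results. Database, table, and bucket names are illustrative assumptions.

```python
# Sketch: run an Athena query from Python and poll for completion.
# Database, table, and bucket names are hypothetical.
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM orders GROUP BY status",
    QueryExecutionContext={"Database": "analytics_db"},             # hypothetical
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

while True:
    state = athena.get_query_execution(QueryExecutionId=qid)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(f"{len(rows) - 1} result rows")  # first row is the header
```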

Posted 2 weeks ago

Apply

4.0 - 8.0 years

5 - 9 Lacs

Hyderabad, Bengaluru

Work from Office

What's in it for you?
- Pay above market standards
- The role is contract-based, with project timelines from 2-12 months, or freelancing
- Be part of an elite community of professionals who can solve complex AI challenges

Work location could be:
- Remote (highly likely)
- Onsite at a client location
- Deccan AI's office: Hyderabad or Bangalore

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools
- Develop real-time and batch data pipelines to support analytics and machine learning
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP)
- Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA)
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana)

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions
- Contributions to open-source data engineering communities

What are the next steps?
Register on our Soul AI website.
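
For the real-time streaming work named above, a minimal kafka-python consumer sketch; the topic, broker address, and event shape are illustrative assumptions.

```python
# Sketch: consume JSON events from Kafka and hand them to a pipeline.
# Topic and broker names are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                                  # hypothetical topic
    bootstrap_servers=["broker-1:9092"],       # hypothetical broker
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="pipeline-workers",
)

for message in consumer:
    event = message.value
    # Route each event into the batch/stream pipeline here.
    print(event.get("type"), message.offset)
```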

Posted 2 weeks ago

Apply

4.0 - 8.0 years

13 - 17 Lacs

Hyderabad, Bengaluru

Work from Office

Responsibilities:
- Design and architect enterprise-scale data platforms, integrating diverse data sources and tools
- Develop real-time and batch data pipelines to support analytics and machine learning
- Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments
- Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices

Required Skills:
- Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP)
- Proficient in data modeling, governance, warehousing (Snowflake, Redshift, BigQuery), and security/compliance standards (GDPR, HIPAA)
- Hands-on experience with CI/CD (Terraform, CloudFormation, Airflow, Kubernetes) and data infrastructure optimization (Prometheus, Grafana)

Nice to Have:
- Experience with graph databases, machine learning pipeline integration, real-time analytics, and IoT solutions
- Contributions to open-source data engineering communities

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Fluent Health:
Fluent Health is a dynamic healthcare startup revolutionizing how you manage your healthcare and that of your family. The company will provide customers with high-quality, personalized options, credible information through trustworthy content, and absolute privacy. To assist us in our growth journey, we are seeking a highly motivated and experienced Senior Data Engineer to play a pivotal role in our future success.
Website: https://fluentinhealth.com/

Job Description:
We're looking for a Senior Data Engineer to lead the design, implementation, and optimization of our analytical and real-time data platform. In this hybrid role, you'll combine hands-on data engineering with high-level architectural thinking to build scalable data infrastructure, with ClickHouse as the cornerstone of our analytics and data warehousing strategy. You'll work closely with engineering, product, analytics, and compliance teams to establish data best practices, ensure data governance, and unlock insights for internal teams and future data monetization initiatives.

Responsibilities:

Architecture & Strategy:
- Own and evolve the target data architecture, with a focus on ClickHouse for large-scale analytical and real-time querying workloads.
- Define and maintain a scalable and secure data platform architecture that supports various use cases, including real-time analytics, reporting, and ML applications.
- Set data governance and modeling standards, and ensure data lineage, integrity, and security practices are followed.
- Evaluate and integrate complementary technologies into the data stack (e.g., message queues, data lakes, orchestration frameworks).

Data Engineering:
- Design, develop, and maintain robust ETL/ELT pipelines to ingest and transform data from diverse sources into our data warehouse.
- Optimize ClickHouse schema and query performance for real-time and historical analytics workloads.
- Build data APIs and interfaces for product and analytics teams to interact with the data platform.
- Implement monitoring and observability tools to ensure pipeline reliability and data quality.

Collaboration & Leadership:
- Collaborate with data consumers (e.g., product managers, data analysts, ML engineers) to understand data needs and translate them into scalable solutions.
- Work with security and compliance teams to implement data privacy, classification, retention, and access control policies.
- Mentor junior data engineers and contribute to hiring efforts as we scale the team.

Requirements:
- 5+ years of experience in data engineering, with at least 2+ years in a senior or architectural role.
- Expert-level proficiency in ClickHouse or similar columnar databases (e.g., BigQuery, Druid, Redshift).
- Proven experience designing and operating scalable data warehouse and data lake architectures.
- Deep understanding of data modeling, partitioning, indexing, and query optimization techniques.
- Strong experience building ETL/ELT pipelines using tools like Airflow, dbt, or custom frameworks.
- Familiarity with stream processing and event-driven architectures (e.g., Kafka, Pub/Sub).
- Proficiency with SQL and at least one programming language like Python, Scala, or Java.
- Experience with data governance, compliance frameworks (e.g., HIPAA, GDPR), and data cataloging tools.
- Knowledge of real-time analytics use cases and streaming architectures.
- Familiarity with machine learning pipelines and integrating data platforms with ML workflows.
- Experience working in regulated or high-security domains like healthtech, fintech, or product-based companies.
- Strategic thinking with the ability to translate business needs into technical architecture.
- Strong communication skills to align cross-functional stakeholders and explain complex topics clearly.
- Self-motivated and comfortable operating in a fast-paced, startup-like environment.
- Passion for clean, reliable, and future-proof data systems.
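
Given the role's ClickHouse focus, here is a hedged sketch of an analytical query from Python using the clickhouse-connect driver; the host, table, and columns are illustrative assumptions, not Fluent Health's actual schema.

```python
# Sketch: a time-bucketed analytical query against ClickHouse.
# Host, table, and column names are hypothetical.
import clickhouse_connect

client = clickhouse_connect.get_client(host="localhost", username="default")

result = client.query(
    """
    SELECT toStartOfDay(event_time) AS day, count() AS events
    FROM app_events
    WHERE event_time >= now() - INTERVAL 7 DAY
    GROUP BY day
    ORDER BY day
    """
)
for day, events in result.result_rows:
    print(day, events)
```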

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Greater Bengaluru Area

On-site

Role Overview:
As our Business Reporting Leader, you'll drive this philosophy forward by spearheading enterprise-wide reporting initiatives and delivering high-quality, actionable insights to business users across the organization. In this strategic, hands-on leadership role, you will lead a global team of reporting analysts and BI developers, collaborate with cross-functional stakeholders, and architect data solutions that drive impact across Sales, Marketing, Product, and Operations. You'll be responsible for designing scalable reporting frameworks, optimizing performance, and promoting a culture of data-driven decision-making. If you are passionate about data storytelling, thrive in a fast-paced environment, and have deep expertise in SQL, Tableau, and modern BI best practices, this is your opportunity to define the future of business intelligence at scale.

Key Responsibilities
- Translate business priorities into effective reporting strategies, partnering closely with stakeholders to deliver meaningful data visualizations and dashboards.
- Lead, mentor, and scale a high-performing team of BI analysts and developers, fostering a culture of innovation and excellence.
- Design and maintain interactive, self-service Tableau dashboards and reporting solutions that provide real-time insights.
- Champion SQL as a core reporting tool: develop complex queries, optimize performance, and ensure data integrity across large datasets.
- Define and track key business KPIs, aligning metrics across functions and surfacing insights to support strategic and operational decisions.
- Drive standardization and governance in reporting processes, ensuring consistency, accuracy, and trust in data.
- Serve as the Tableau and SQL subject matter expert, overseeing dashboard design, performance tuning, UX optimization, and advanced analytics features.
- Oversee Tableau Server/Cloud administration, including user roles, publishing workflows, access controls, and usage monitoring.
- Collaborate with Data Engineering to ensure scalable data pipelines, robust semantic layers, and a reliable BI infrastructure.
- Deliver clear, concise, and compelling presentations of data-driven insights to senior leadership and executive stakeholders.
- Stay current with industry trends in BI, making recommendations on tools, processes, and frameworks for continuous improvement.

Qualifications
- Bachelor's degree in Computer Science, Information Systems, Business Analytics, or a related field; Master's degree is a plus.
- 6+ years of experience in business intelligence or data analytics, with demonstrated expertise in SQL and Tableau.
- Proven success in leading and growing BI/reporting teams in dynamic, cross-functional environments.
- Expert-level proficiency in Tableau, including LOD expressions, calculated fields, interactivity, performance tuning, and storytelling.
- Deep experience with SQL: writing, optimizing, and troubleshooting complex queries across large and diverse datasets.
- Strong understanding of data modeling, ETL concepts, and modern cloud-based data platforms (e.g., Snowflake, Redshift).
- Experience ensuring data quality, governance, and integrity in enterprise reporting systems.
- Excellent communication skills with the ability to explain complex data concepts to both technical and non-technical audiences.
- Strong stakeholder management and ability to influence decisions through data.
- Experience in SaaS or high-growth tech environments is highly desirable.
- Tableau certifications (e.g., Desktop Specialist, Certified Associate) are a strong plus.

Working Hours: 2 PM - 11 PM (IST)
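
As a hedged sketch of the SQL-driven KPI work described above, here is a pandas-plus-SQLAlchemy snippet that pulls a weekly metric from a warehouse for a dashboard feed; the connection URL, table, and columns are illustrative assumptions.

```python
# Sketch: pull a weekly KPI from the warehouse into pandas.
# The connection URL and table are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql://bi_user:secret@warehouse:5439/analytics")

kpis = pd.read_sql(
    """
    SELECT date_trunc('week', signup_date) AS week,
           COUNT(*) AS new_accounts
    FROM accounts
    GROUP BY 1
    ORDER BY 1
    """,
    engine,
)
print(kpis.tail())
```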

Posted 2 weeks ago

Apply

0 years

0 Lacs

Khairatabad, Telangana, India

On-site

Location: IN - Hyderabad, Telangana
Goodyear Talent Acquisition Representative: Maria Monica Canding
Sponsorship Available: No
Relocation Assistance Available: No

Job Responsibilities
- You are responsible for designing and building data products, legal data layers, data streams, algorithms, and reporting systems (e.g., dashboards, front ends).
- You ensure the correct design of solutions, performance, and scalability while considering appropriate cost control.
- You link data product design with DevOps and infrastructure.
- You act as a reference within and outside the Analytics team.
- You serve as a technical partner to Data Engineers regarding digital product implementation.

Qualifications
- You have a Bachelor's degree in Computer Science, Engineering, Management Information Systems, or a related discipline, or you have 10 or more years of experience in Information Technology in lieu of a degree.
- You have 5 or more years of experience in Information Technology.
- You have an in-depth understanding of database structure principles.
- You have experience gathering and analyzing system requirements.
- You have knowledge of data mining and segmentation techniques.
- You have expertise in SQL and Oracle.
- You are familiar with data visualization tools (e.g., Tableau, Cognos, SAP Analytics Cloud).
- You possess proven analytical skills and a problem-solving attitude.
- You have a proven ability to work with distributed systems.
- You are able to develop creative solutions to problems.
- You have knowledge and strong skills with SQL and NoSQL databases and applications, such as Teradata, Redshift, MongoDB, or equivalent.

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law.

Goodyear is one of the world's largest tire companies. It employs about 74,000 people and manufactures its products in 57 facilities in 23 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate

Posted 2 weeks ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

About KeyValue
KeyValue is a trusted product engineering partner for start-ups and scale-ups, unlocking their passion, developing ideas, and creating abundant value for all stakeholders in the ecosystem. We have ideated, conceived, strategized, and built some of the globe's most innovative Fintech, Payments, Financial Services, Digital Commerce, Madtech, Edtech, Socialtech, Logistics, High Technology, Blockchain, Crypto, NFT, and Healthcare companies, helping them conceive, scale, pivot, and enhance their businesses.

KeyValue's mission is to be the world's most trusted product development hub, delivering high-value outcomes for start-ups and scale-ups with a talented, skilled team in a thriving and inclusive culture. Our inclusive culture is engaging and experiential, creating an environment to learn and collaborate with the freedom to think, create, explore, grow, and thrive. An ownership mindset with growth orientation forms the bedrock of exceptional client success!

We are looking for an experienced and resilient person to join our growing team of analytics experts who is willing to learn through bigger challenges.

What you will do:
- Work closely with the product and engineering teams to understand domains, features, and metrics
- Design and build scalable data pipelines to handle data from different sources
- Extract data using ETL tools and load it into a data warehouse (AWS Redshift / Google BigQuery)
- Implement batch processing for structured and unstructured data
- Analyse data and create visualizations using tools like Tableau, Metabase, or Google Data Studio to help implement business decisions
- Work on the core data team that designs and maintains the data warehouse
- Troubleshoot and resolve issues in data processing and pipelines; anticipate problems and build processes to avoid them
- Learn new technology in a short span of time
- Set up CI/CD

What makes you a great fit:
- Proficiency in database design and writing SQL queries
- Experience in any of the data warehouse solutions: AWS Redshift / Google BigQuery / Snowflake
- Knowledge of platforms such as Segment, Hevo Data, Stitch, Amplitude, or CleverTap
- Hands-on experience in Apache Spark / Python / R / Hadoop / Kafka
- Knowledge of working with connectors (REST / SOAP, etc.)
- Experience in BI platforms like Metabase, Power BI, Tableau, or Google Data Studio
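
For the extract-and-load step into AWS Redshift that this role describes, a hedged sketch issuing a COPY from S3 through psycopg2; the cluster endpoint, table, bucket, and IAM role ARN are illustrative assumptions.

```python
# Sketch: bulk-load S3 data into a Redshift staging table with COPY.
# All identifiers (host, table, bucket, role ARN) are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="loader", password="secret",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY staging.events
        FROM 's3://example-bucket/events/2024/06/01/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
        FORMAT AS JSON 'auto';
    """)
conn.close()
```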

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Who We Are
BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest-value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world's 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients, no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines, from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law.

What You'll Do
BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components as well as single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally.

This will include:
- Serving as a leader within BCG X, and specifically the KEY Impact Management by BCG X Tribe (transformation and post-merger-integration software and data products), overseeing the delivery of high-quality software: driving the technical roadmap, making architectural decisions, and mentoring engineers
- Influencing and serving as a key decision maker in BCG X technology selection and strategy
- Taking an active, hands-on role: building intelligent analytical products to solve problems, writing elegant code, and iterating quickly
- Holding overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe
- Owning the technology roadmap of existing and new components delivered
- Architecting and implementing backend and frontend solutions, primarily using .NET, C#, MS SQL Server, and Angular, along with other technologies best suited to the goals, including open source (Node, Django, Flask, Python) where needed

What You'll Bring
- 10+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally agile), with exposure to a variety of technologies and solutions, including at least 5 years' experience in an architect role.
- Experience with a wide range of application and data architectures, platforms, and tools, including: service-oriented architecture, clean architecture, software as a service, web services, object-oriented languages (like C# or Java), SQL databases (like Oracle or SQL Server), relational and non-relational databases, hands-on experience with analytics and reporting tools, data science experience, etc.
- Thoroughly up to date in technology:
  - Modern cloud architectures, including AWS, Azure, GCP, and Kubernetes
  - Very strong in .NET, C#, MS SQL Server, and Angular technologies in particular
  - Open-source stacks including NodeJS, React, Angular, and Flask are good to have
  - CI/CD, DevSecOps, and GitOps toolchains and development approaches
  - Knowledge of machine learning and AI frameworks
  - Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow
- At least a Bachelor's degree; Master's degree and/or MBA preferred
- Team player with excellent work habits and interpersonal skills
- Deep care for product quality, reliability, and scalability
- Passion for the people and culture side of engineering teams
- Outstanding written and oral communication skills
- The ability to travel, depending on project requirements

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer.

Posted 2 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You'll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams. This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.

Responsibilities
- Design and optimize complex SQL queries, stored procedures, and indexes.
- Perform performance tuning and query plan analysis.
- Contribute to schema design and data normalization.
- Migrate data from multiple sources to cloud or ODS platforms.
- Design schema mapping and implement transformation logic.
- Ensure consistency, integrity, and accuracy in migrated data.
- Build automation scripts for data ingestion, cleansing, and transformation.
- Handle file formats (JSON, CSV, XML), REST APIs, and cloud SDKs (e.g., Boto3).
- Maintain reusable script modules for operational pipelines.
- Develop and manage DAGs for batch/stream workflows.
- Implement retries, task dependencies, notifications, and failure handling.
- Integrate Airflow with cloud services, data lakes, and data warehouses.
- Manage data storage (S3, GCS, Blob), compute services, and data pipelines.
- Set up permissions, IAM roles, encryption, and logging for security.
- Monitor and optimize the cost and performance of cloud-based data operations.
- Design and manage data marts using dimensional models.
- Build star/snowflake schemas to support BI and self-serve analytics.
- Enable incremental load strategies and partitioning.
- Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka.
- Support modular pipeline design and metadata-driven frameworks.
- Ensure high availability and scalability of the stack.
- Collaborate with BI teams to design datasets and optimize queries.
- Support the development of dashboards and reporting layers.
- Manage access, data refreshes, and performance for BI tools.

Requirements
- 6-8 years of hands-on experience in data engineering roles.
- Strong SQL skills in PostgreSQL (tuning, complex joins, procedures).
- Advanced Python scripting skills for automation and ETL.
- Proven experience with Apache Airflow (custom DAGs, error handling).
- Solid understanding of cloud architecture (especially AWS).
- Experience with data marts and dimensional data modeling.
- Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.).
- Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI.
- Version control (Git) and CI/CD pipeline knowledge are a plus.
- Excellent problem-solving and communication skills.

This job was posted by Suryansh Singh Karchuli from ShepHertz Technologies. Interested candidates can apply directly at Talent.acquisition@shephertz.com
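
For the file-format handling and boto3 scripting listed above, a short hedged sketch that pulls a JSON export from S3, flattens it, and writes CSV back; bucket and key names are illustrative assumptions.

```python
# Sketch: S3 JSON -> flattened CSV -> S3, using boto3 and the stdlib.
# Bucket and key names are hypothetical.
import csv
import io
import json
import boto3

s3 = boto3.client("s3")

obj = s3.get_object(Bucket="example-raw", Key="export/users.json")
records = json.loads(obj["Body"].read())

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "email", "created_at"])
writer.writeheader()
for rec in records:
    writer.writerow({k: rec.get(k) for k in writer.fieldnames})

s3.put_object(Bucket="example-clean", Key="export/users.csv",
              Body=buf.getvalue())
```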

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

Jaipur, Rajasthan

Remote

Senior Data Engineer
Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.

Role: Senior Data Engineer
Experience: 4-6 Yrs
Location: Udaipur, Jaipur, Kolkata

Job Description:
We are looking for a highly skilled and experienced Data Engineer with 4–6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering. A sketch of this pipeline pattern follows the qualifications below.

Key Responsibilities:
· Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python
· Collaborate with data analysts, data scientists, and product teams to understand data needs
· Optimize queries and data models for performance and reliability
· Integrate data from various sources, including APIs, internal databases, and third-party systems
· Monitor and troubleshoot data pipelines to ensure data quality and integrity
· Document processes, data flows, and system architecture
· Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
· 4–6 years of experience in data engineering, data architecture, or backend development with a focus on data
· Strong command of SQL for data transformation and performance tuning
· Experience with Python (e.g., pandas, Spark, ADF)
· Solid understanding of ETL/ELT processes and data pipeline orchestration
· Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
· Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
· Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
· Basic programming skills
· Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
· Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
· Exposure to enterprise solutions (e.g., Databricks, Synapse)
· Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
· Background in real-time data streaming and event-driven architectures
· Understanding of data governance, security, and compliance best practices
· Prior experience working in an agile development environment

Educational Qualifications:
· Bachelor's degree in Computer Science, Information Technology, or a related field.
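
As referenced above, a compact hedged sketch of the ETL/ELT pattern this role describes: extract from an API, transform with pandas, load into PostgreSQL. The endpoint, credentials, and table names are illustrative assumptions.

```python
# Sketch: API -> pandas transform -> PostgreSQL load.
# The endpoint URL, connection string, and tables are hypothetical.
import pandas as pd
import requests
from sqlalchemy import create_engine

resp = requests.get("https://api.example.com/v1/orders", timeout=30)
df = pd.json_normalize(resp.json())

df["order_date"] = pd.to_datetime(df["created_at"]).dt.date
daily = df.groupby("order_date", as_index=False)["amount"].sum()

engine = create_engine("postgresql://etl:secret@localhost:5432/warehouse")
daily.to_sql("daily_orders", engine, if_exists="append", index=False)
```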
Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm

Job Types: Full-time, Permanent
Pay: ₹826,249.60 - ₹1,516,502.66 per year
Benefits: Flexible schedule, health insurance, leave encashment, paid time off, Provident Fund, work from home
Schedule: Day shift, Monday to Friday
Supplemental Pay: Overtime pay, performance bonus, quarterly bonus, yearly bonus
Ability to commute/relocate: Jaipur, Rajasthan: Reliably commute or planning to relocate before starting work (Required)
Experience: Data Engineer: 4 years (Required)
Location: Jaipur, Rajasthan (Required)
Work Location: In person

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Level Up Your Career with Zynga!

At Zynga, we bring people together through the power of play. As a global leader in interactive entertainment and a proud label of Take-Two Interactive, our games have been downloaded over 6 billion times, connecting players in 175+ countries through fun, strategy, and a little friendly competition. From thrilling casino spins to epic strategy battles, mind-bending puzzles, and social word challenges, our diverse game portfolio has something for everyone. Fan favorites and latest hits include FarmVille™, Words With Friends™, Zynga Poker™, Game of Thrones Slots Casino™, Wizard of Oz Slots™, Hit it Rich! Slots™, Wonka Slots™, Top Eleven™, Toon Blast™, Empires & Puzzles™, Merge Dragons!™, CSR Racing™, Harry Potter: Puzzles & Spells™, Match Factory™, and Color Block Jam™, plus many more! Founded in 2007 and headquartered in California, our teams span North America, Europe, and Asia, working together to craft unforgettable gaming experiences. Whether you're spinning, strategizing, matching, or competing, Zynga is where fun meets innovation, and where you can take your career to the next level. Join us and be part of the play!

We are seeking experienced and passionate engineers to join our collaborative and innovative team. Zynga's mission is to "Connect the World through Games" by building a truly social experience that makes the world a better place. The ideal candidate needs to have a strong focus on building high-quality, maintainable software that has global impact.

The Analytics Engineering team is responsible for all things data at Zynga. We own the full game and player data pipeline, from ingestion to storage to driving insights and analytics. As a Data Engineer, you will be responsible for the software design and development of quality services and products to support the Analytics needs of our games. In this role, you will be part of our Analytics Engineering group, focusing on advanced technology developments for building scalable data infrastructure and end-to-end services that can be leveraged by the various games. We are a 120+ person organization servicing 1,500 others across 13 global locations.

Your responsibilities will include:
- Build and operate a multi-PB-scale data platform.
- Design, code, and develop new features, fix bugs, and deliver enhancements to systems and data pipelines (ETLs) while adhering to the SLA.
- Identify anomalies and inconsistencies in data sets and algorithms, flag them to the relevant team, and/or fix the bugs in the data workflows where applicable.
- Follow the best engineering methodologies toward ensuring performance, reliability, scalability, and measurability.
- Collaborate effectively with teammates, contributing to an innovative environment of technical excellence.

You will be a perfect fit if you have:
- Bachelor's degree in Computer Science or a related technical discipline (or equivalent).
- 3+ years of strong data engineering design/development experience in building large-scale, distributed data platforms/products.
- Advanced coding expertise in SQL and Python or a JVM-based language.
- Exposure to heterogeneous data storage systems: relational, NoSQL, in-memory, etc.
- Knowledge of data modeling, lineage, data access, and its governance.
- Proficiency in AWS services like Redshift, Kinesis, Lambda, RDS, EKS/ECS, etc.
- Exposure to open-source software, frameworks, and broader powerful technologies (Airflow, Kafka, DataHub, etc.).
- Proven ability to deliver work on time with attention to quality.
- Excellent written and spoken communication skills and the ability to work optimally in a geographically distributed team environment.

We encourage you to apply even if you don't meet every single requirement. Your unique perspective and experience could be exactly what we're looking for. We are proud to be an equal opportunity employer, which means we are committed to creating and celebrating diverse thoughts, cultures, and backgrounds throughout our organization. Employment with us is based on substantive ability, objective qualifications, and work ethic, not an individual's race, creed, color, religion, sex or gender, gender identity or expression, sexual orientation, national origin or ancestry, alienage or citizenship status, physical or mental disability, pregnancy, age, genetic information, veteran status, marital status, status as a victim of domestic violence or sex offenses, reproductive health decision, or any other characteristics protected by applicable law. As an equal opportunity employer, we are committed to providing the necessary support and accommodation to qualified individuals with disabilities, health conditions, or impairments (subject to any local qualifying requirements) to ensure their full participation in the job application or interview process. Please contact us at accommodationrequest@zynga.com to request any accommodations or for support related to your application for an open position.

Please be aware that Zynga does not conduct job interviews or make job offers over third-party messaging apps such as Telegram, WhatsApp, or others. Zynga also does not engage in any financial exchanges during the recruitment or onboarding process, and will never ask a candidate for their personal or financial information over an app or other unofficial chat channel. Any attempt to do so may be the result of a scam or phishing attack, and you should not engage. Zynga's in-house recruitment team will only contact individuals through their official Company email addresses (i.e., via a zynga.com, naturalmotion.com, smallgiantgames.com, themavens.com, gram.gs email domain).
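
As a hedged illustration of the event-ingestion side of such a platform (Kinesis is among the AWS services this posting lists), here is a minimal boto3 producer sketch; the stream name and payload are illustrative assumptions.

```python
# Sketch: push a game event into an AWS Kinesis stream with boto3.
# Stream name and event shape are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")

event = {"player_id": "p-123", "event": "level_complete", "level": 7}

kinesis.put_record(
    StreamName="game-events",          # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["player_id"],   # keeps one player's events ordered per shard
)
```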

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About Appen

Appen is a leader in AI enablement for critical tasks such as model improvement, supervision, and evaluation. To do this, we leverage our global crowd of over one million skilled contractors, speaking over 180 languages and dialects and representing 130 countries. In addition, we utilize the industry's most advanced AI-assisted data annotation platform to collect and label various types of data, such as images, text, speech, audio, and video. Our data is crucial for building and continuously improving the world's most innovative artificial intelligence systems, and Appen is already trusted by the world's largest technology companies. Now, with the explosion of interest in generative AI, Appen is helping leaders in automotive, financial services, retail, healthcare, and government gain the confidence to deploy world-class AI products.

At Appen, we are purpose driven. Our fundamental role in AI is to ensure all models are helpful, honest, and harmless, so we firmly believe in unlocking the power of AI to build a better world. We have a learn-it-all culture that values perspective, growth, and innovation. We are customer-obsessed, action-oriented, and celebrate winning together.

At Appen, we are committed to creating an inclusive and diverse workplace. We are an equal opportunity employer that does not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

We are looking for a Platform Engineer to design, build, and maintain scalable infrastructure platforms that empower development teams. This role focuses on automation, reliability, and developer experience, leveraging Platform Engineering and Site Reliability Engineering (SRE) principles. You should have expertise in cloud technologies, Infrastructure as Code (IaC), CI/CD, and observability. If you enjoy building scalable, self-service, and resilient platforms, we encourage you to apply.

Key Responsibilities

Design and build internal platforms that enable development teams to deploy and manage applications efficiently.
Develop and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation to ensure scalable and repeatable infrastructure management.
Implement and optimize CI/CD pipelines for fast and reliable software delivery.
Manage Kubernetes (EKS, ECS) and cloud-native services to ensure platform scalability and reliability.
Automate operational tasks to improve developer experience and reduce toil.
Enhance observability using Prometheus, Grafana, New Relic, ELK/EFK, or similar tools for monitoring and logging.
Enforce security best practices and assist in compliance and disaster recovery planning.
Participate in incident response and the on-call rotation, ensuring platform stability.
Collaborate with developers to optimize infrastructure performance and remove bottlenecks.
Mentor and guide junior engineers in best practices for cloud infrastructure, automation, and reliability.

Required Skills & Qualifications

3-5 years of experience in Platform Engineering, DevOps, or SRE roles.
Strong expertise in cloud platforms (AWS preferred), including services like EC2, S3, RDS, Aurora, Redshift, EKS, ECS, Lambda, API Gateway, CloudFront, Route53, and CloudWatch.
Proficiency in Infrastructure as Code (IaC) with Terraform or CloudFormation.
Hands-on experience with Kubernetes, container orchestration, and cloud-native architectures.
Experience with CI/CD tools such as GitHub Actions, ArgoCD, Jenkins, or GitLab CI/CD.
Strong automation and scripting skills (Python, Go, or Bash).
Knowledge of monitoring, logging, and observability tools (Prometheus, Grafana, New Relic, ELK/EFK, etc.).
Experience in incident management, troubleshooting, and performance tuning.
Strong problem-solving skills with a focus on scalability and reliability.
Excellent communication and collaboration skills.
AWS certification is a plus.

Appen is the global leader in data for the AI Lifecycle, with more than 25 years’ experience in data sourcing, annotation, and model evaluation. Through our expertise, platform, and global crowd, we enable organizations to launch the world’s most innovative artificial intelligence products with speed and at scale. Appen maintains the industry’s most advanced AI-assisted data annotation platform and boasts a global crowd of more than 1 million contributors worldwide, speaking more than 235 languages. Our products and services make Appen a trusted partner to leaders in technology, automotive, finance, retail, healthcare, and government. Appen has customers and offices globally.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

About BeGig

BeGig is the leading tech freelancing marketplace. We empower innovative, early-stage, non-tech founders to bring their visions to life by connecting them with top-tier freelance talent. By joining BeGig, you’re not just taking on one role—you’re signing up for a platform that will continuously match you with high-impact opportunities tailored to your expertise.

Your Opportunity

Join our network as a Data Engineer and work directly with visionary startups to design, build, and optimize data pipelines and systems. You’ll help transform raw data into actionable insights, ensuring that data flows seamlessly across the organization to support informed decision-making. Enjoy the freedom to structure your engagement on an hourly or project basis—all while working remotely.

Role Overview

As a Data Engineer, you will:
Design & Develop Data Pipelines: Build and maintain scalable, robust data pipelines that power analytics and machine learning initiatives.
Optimize Data Infrastructure: Ensure data is processed efficiently, securely, and in a timely manner.
Collaborate & Innovate: Work closely with data scientists, analysts, and other stakeholders to streamline data ingestion, transformation, and storage.

What You’ll Do

Data Pipeline Development:
Design, develop, and maintain end-to-end data pipelines using modern data engineering tools and frameworks.
Automate data ingestion, transformation, and loading processes across various data sources.
Implement data quality and validation measures to ensure accuracy and reliability.

Infrastructure & Optimization:
Optimize data workflows for performance and scalability in cloud environments (AWS, GCP, or Azure).
Leverage tools such as Apache Spark, Kafka, or Airflow for data processing and orchestration.
Monitor and troubleshoot pipeline issues, ensuring smooth data operations.

Technical Requirements & Skills

Experience: 3+ years in data engineering or a related field.
Programming: Proficiency in Python and SQL; familiarity with Scala or Java is a plus.
Data Platforms: Experience with big data technologies like Hadoop, Spark, or similar.
Cloud: Working knowledge of cloud-based data solutions (e.g., AWS Redshift, BigQuery, or Azure Data Lake).
ETL & Data Warehousing: Hands-on experience with ETL processes and data warehousing solutions.
Tools: Familiarity with data orchestration tools such as Apache Airflow or similar.
Database Systems: Experience with both relational (PostgreSQL, MySQL) and NoSQL databases.

What We’re Looking For

A detail-oriented data engineer with a passion for building efficient, scalable data systems.
A proactive problem-solver who thrives in a fast-paced, dynamic environment.
A freelancer with excellent communication skills and the ability to collaborate with cross-functional teams.

Why Join Us?

Immediate Impact: Tackle challenging data problems that drive real business outcomes.
Remote & Flexible: Work from anywhere with engagements structured to suit your schedule.
Future Opportunities: Leverage BeGig’s platform to secure additional data-focused roles as your expertise grows.
Innovative Work: Collaborate with startups at the forefront of data innovation and technology.

Ready to Transform Data?

Apply now to become a key Data Engineer for our client and a valued member of the BeGig network!

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Title: Data Engineer
Experience: 3+ years
Location: Gurgaon

Job Details and Requirements:

Skills:
Primary: Python / SQL / AWS / API Development / AWS Neptune
Good to have: Redshift / S3 / Lambda / VPC / Subnet / EC2 / ECS / Load balancer / SCM domain / Data modelling / Ontologies

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job Title: Data Architect
Reports To Title: Head of Technology
Business Function/Sub Function: IT
Location: Noida, India

Data Architecture Design:
Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems.
Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management:
Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders:
Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership:
Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security:
Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems.
Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership:
Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Required Skills & Experience:
Extensive Data Architecture Expertise: Over 7 years of experience in data architecture, data modeling, and database management. Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions. Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.
Advanced Knowledge of Data Platforms: Expertise in the Azure cloud data platform is a must; other platforms such as AWS (Redshift, S3), Azure (Data Lake, Synapse), and/or Google Cloud Platform (BigQuery, Dataproc) are a bonus. Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing. Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).
Data Governance & Compliance: Strong understanding of data governance principles, data lineage, and data stewardship. Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.
Technical Leadership: Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise. Strong programming skills in languages such as Python, SQL, R, or Scala.
Certification: Azure Certified Solution Architect, Data Engineer, or Data Scientist certifications are mandatory.

Pre-Sales Responsibilities:
Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

Additional Responsibilities:
Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.

Qualifications:
Education: Bachelor’s or master’s degree in computer science, Information Technology, or a related field.
Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana

On-site

Indeed logo

Location: Gurugram, Haryana, India
Category: Corporate
Job Id: GGN00002011
Marketing / Loyalty / Mileage Plus / Alliances
Job Type: Full-Time
Posted Date: 06/02/2025

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together.

Description - External

United's Kinective Media Data Engineering team designs, develops, and maintains massively scaling ad-technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job overview and responsibilities

The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial projects with a digital focus. The Data Engineer will partner with various teams to define and execute data acquisition, transformation, and processing, and to make data actionable for operational and analytics initiatives that create sustainable revenue and share growth. The engineer will execute unit tests and validate expected results to ensure the accuracy and integrity of data and applications through analysis, coding, clear documentation, and problem resolution. This role will also drive the adoption of data processing and analysis within the AWS environment and help cross-train other members of the team.

Leverage strategic and analytical skills to understand and solve customer- and business-centric questions.
Coordinate and guide cross-functional projects that involve team members across all areas of the enterprise, vendors, external agencies, and partners.
Leverage data from a variety of sources to develop data marts and insights that provide a comprehensive understanding of the business.
Develop and implement innovative solutions leading to automation.
Use Agile methodologies to manage projects.
Mentor and train junior engineers.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.

Qualifications - External

Required:
BS/BA in computer science or a related STEM field
2+ years of IT experience in software development
2+ years of development experience using Java, Python, or Scala
2+ years of experience with Big Data technologies like PySpark, Hadoop, Hive, HBase, Kafka, NiFi
2+ years of experience with database systems like Redshift, MS SQL Server, Oracle, Teradata
Creative, driven, detail-oriented individuals who enjoy tackling tough problems with data and insights
Individuals who have a natural curiosity and desire to solve problems are encouraged to apply
Must be legally authorized to work in India for any employer without sponsorship
Must be fluent in English (written and spoken)
Successful completion of interview required to meet job qualification
Reliable, punctual attendance is an essential function of the position

Preferred:
Master’s degree in computer science or a related STEM field
Experience with cloud-based systems like AWS, Azure, or Google Cloud
Certified Developer / Architect on AWS
Strong experience with continuous integration & delivery using Agile methodologies
Data engineering experience in the transportation/airline industry
Strong problem-solving skills
Strong knowledge of Big Data

Posted 2 weeks ago

Apply

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Join Amgen’s Mission of Serving Patients

At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Business Intelligence Engineer

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases and make people’s lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

What You Will Do

As a Business Intelligence Engineer, you will solve unique and complex problems at a rapid pace, utilizing the latest technologies to create solutions that are highly scalable. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable solutions and responding to requests for rapid releases of analytical outcomes.

Design, develop, and maintain interactive dashboards, reports, and data visualizations using BI tools (e.g., Power BI, Tableau, Cognos, others).
Analyse datasets to identify trends, patterns, and insights that inform business strategy and decision-making.
Partner with leaders and stakeholders across Finance, Sales, Customer Success, Marketing, Product, and other departments to understand their data and reporting requirements.
Stay abreast of the latest trends and technologies in business intelligence and data analytics, including the use of AI in BI.
Elicit and document clear and comprehensive business requirements for BI solutions, translating business needs into technical specifications and solutions.
Collaborate with Data Engineers to ensure efficient upstream transformations and create data models/views that will hydrate accurate and reliable BI reporting.
Contribute to data quality and governance efforts to ensure the accuracy and consistency of BI data.

What We Expect Of You

Basic Qualifications:
Master’s degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
Bachelor’s degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Functional Skills:
1+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling

Preferred Qualifications:
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
AWS Developer certification (preferred)

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills

What You Can Expect Of Us

As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease.

Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 weeks ago

Apply

Exploring Redshift Jobs in India

The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Mumbai
  4. Pune
  5. Chennai

Average Salary Range

The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.

Career Path

In the field of Redshift, a typical career path may include roles such as:

  • Junior Developer
  • Data Engineer
  • Senior Data Engineer
  • Tech Lead
  • Data Architect

Related Skills

Apart from expertise in Redshift, proficiency in the following skills can be beneficial:

  • SQL
  • ETL Tools
  • Data Modeling
  • Cloud Computing (AWS)
  • Python/R Programming

Interview Questions

  • What is Amazon Redshift and how does it differ from traditional databases? (basic)
  • How does data distribution work in Amazon Redshift? (medium)
  • Explain the difference between SORTKEY and DISTKEY in Redshift. (medium)
  • How do you optimize query performance in Amazon Redshift? (advanced)
  • What is the COPY command in Redshift used for? (basic)
  • How do you handle large data sets in Redshift? (medium)
  • Explain the concept of Redshift Spectrum. (advanced)
  • What is the difference between Redshift and Redshift Spectrum? (medium)
  • How do you monitor and manage Redshift clusters? (advanced)
  • Can you describe the architecture of Amazon Redshift? (medium)
  • What are the best practices for data loading in Redshift? (medium)
  • How do you handle concurrency in Redshift? (advanced)
  • Explain the concept of vacuuming in Redshift. (basic)
  • What are Redshift's limitations and how do you work around them? (advanced)
  • How do you scale Redshift clusters for performance? (medium)
  • What are the different node types available in Amazon Redshift? (basic)
  • How do you secure data in Amazon Redshift? (medium)
  • Explain the concept of Redshift Workload Management (WLM). (advanced)
  • What are the benefits of using Redshift over traditional data warehouses? (basic)
  • How do you optimize storage in Amazon Redshift? (medium)
  • How do you troubleshoot performance issues in Amazon Redshift? (advanced)
  • Can you explain the concept of columnar storage in Redshift? (basic)
  • How do you automate tasks in Redshift? (medium)
  • What are the different types of Redshift nodes and their use cases? (basic)
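
Several of the basic questions above (DISTKEY vs. SORTKEY, the COPY command, vacuuming, and Spectrum) map directly onto a few lines of Redshift SQL, so it is worth practicing them hands-on. The sketch below is illustrative only; the table name, columns, S3 path, and IAM role are hypothetical placeholders, not drawn from any posting on this page:

  -- Distribution and sort keys are declared at table-creation time.
  CREATE TABLE page_views (
      view_id   BIGINT,
      user_id   BIGINT,
      viewed_at TIMESTAMP,
      page_url  VARCHAR(1024)
  )
  DISTSTYLE KEY
  DISTKEY (user_id)    -- co-locates rows with the same user_id on one slice, reducing join shuffles
  SORTKEY (viewed_at); -- lets Redshift skip blocks when queries filter on a time range

  -- COPY is Redshift's parallel bulk-load path from S3 (placeholder bucket and role).
  COPY page_views
  FROM 's3://example-bucket/page_views/'
  IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
  FORMAT AS PARQUET;

  -- VACUUM re-sorts rows and reclaims space after deletes/updates;
  -- ANALYZE refreshes the statistics the query planner relies on.
  VACUUM page_views;
  ANALYZE page_views;

  -- Redshift Spectrum queries data in place on S3 through an external schema
  -- backed by the Glue Data Catalog (database name is a placeholder).
  CREATE EXTERNAL SCHEMA spectrum_demo
  FROM DATA CATALOG
  DATABASE 'spectrum_db'
  IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
  CREATE EXTERNAL DATABASE IF NOT EXISTS;

Being able to write snippets like this from memory, and to explain why each clause exists, is good preparation for the medium and advanced questions in the list.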

Conclusion

As the demand for Redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!
