
745 Amazon Redshift Jobs - Page 12

JobPe aggregates results for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

1 - 2 Lacs

Hyderabad

Remote

Role & responsibilities:
We are looking for a highly experienced Senior Cloud Data Engineer to lead the design, development, and optimization of our cloud-based data infrastructure. This role requires deep technical expertise in AWS services, data engineering best practices, and infrastructure automation. You will be instrumental in shaping our data architecture and enabling data-driven decision-making across the organization.

Key Responsibilities:
- Design, build, and maintain scalable and secure data pipelines using AWS Glue, Redshift, and Python (see the sketch after this listing).
- Develop and optimize SQL queries and stored procedures for complex data transformations and migrations.
- Automate infrastructure provisioning and deployment using Terraform, ensuring repeatability and compliance.
- Architect and implement data lake and data warehouse solutions on AWS.
- Collaborate with cross-functional teams including data scientists, analysts, and DevOps to deliver high-quality data solutions.
- Monitor, troubleshoot, and optimize data workflows for performance, reliability, and cost-efficiency.
- Implement data quality checks, validation frameworks, and monitoring tools.
- Ensure data security, privacy, and compliance with industry standards and regulations.
- Lead code reviews, mentor junior engineers, and promote best practices in data engineering.
- Participate in capacity planning, cost optimization, and performance tuning of cloud data infrastructure.
- Evaluate and integrate new tools and technologies to improve data engineering capabilities.
- Document technical designs, processes, and operational procedures.
- Support business intelligence and analytics teams by ensuring timely and accurate data availability.

Required Skills & Experience:
- 10+ years of experience in data engineering or cloud data architecture.
- Strong expertise in AWS Redshift, including schema design, performance tuning, and workload management.
- Proficiency in SQL and stored procedures for ETL and data migration tasks.
- Hands-on experience with Terraform for infrastructure as code (IaC) in AWS environments.
- Deep knowledge of AWS Glue for ETL orchestration and job development.
- Advanced programming skills in Python, especially for data processing and automation.
- Solid understanding of data warehousing, data lakes, and cloud-native data architectures.

Preferred candidate profile:
- AWS certifications (e.g., AWS Certified Data Analytics Specialty, AWS Certified Solutions Architect).
- Experience with CI/CD pipelines and DevOps practices.
- Familiarity with additional AWS services like S3, Lambda, CloudWatch, Step Functions, and IAM.
- Knowledge of data governance, lineage, and cataloging tools (e.g., AWS Glue Data Catalog, Apache Atlas).
- Experience with real-time data processing frameworks (e.g., Kinesis, Kafka, Spark Streaming).
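
For illustration only (not part of the original posting), a minimal sketch of the kind of Glue-to-Redshift pipeline this role describes. The catalog database, table, connection, and bucket names are hypothetical placeholders.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve the job name passed in by the Glue runner.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events from a hypothetical Glue Data Catalog table backed by S3.
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events")  # placeholder names

# Example transformation: keep only completed events.
cleaned = raw.filter(f=lambda row: row["status"] == "COMPLETED")

# Write to Redshift through a pre-configured Glue connection (placeholder).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=cleaned,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "analytics.events", "database": "dw"},
    redshift_tmp_dir="s3://example-temp-bucket/glue/")

job.commit()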

Posted 1 month ago

Apply

7.0 - 12.0 years

30 - 40 Lacs

Hyderabad

Work from Office

Support enhancements to the MDM platform. Develop pipelines using Snowflake, Python, SQL, and Airflow. Track system performance, troubleshoot issues, and resolve production issues. Required candidate profile: 5+ years of hands-on, expert-level experience with Snowflake, Python, and orchestration tools like Airflow. Good understanding of the investment domain. Experience with dbt, cloud platforms (AWS, Azure), and DevOps.
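
For illustration only, a minimal sketch of a daily Airflow task loading data into Snowflake of the kind this posting describes. It assumes the snowflake-connector-python package; the account, stage, and table names are hypothetical, and real credentials would come from a secrets backend.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
import snowflake.connector

def load_positions():
    # Placeholder identifiers; do not hard-code credentials in a real pipeline.
    conn = snowflake.connector.connect(
        account="example_account", user="etl_user", password="***",
        warehouse="ETL_WH", database="MDM", schema="STAGING")
    try:
        conn.cursor().execute(
            "COPY INTO staging.positions FROM @raw_stage/positions/ FILE_FORMAT = (TYPE = CSV)")
    finally:
        conn.close()

with DAG(
    dag_id="mdm_positions_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="load_positions", python_callable=load_positions)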

Posted 1 month ago

Apply

2.0 - 5.0 years

16 - 18 Lacs

Coimbatore

Work from Office

Overview: Annalect is currently seeking a Senior Data Engineer to join our Technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design and development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design, and development of software products, as well as research and evaluation of new technical solutions.

Responsibilities: Designing, building, testing, and deploying data transfers across various cloud environments (Azure, GCP, AWS, Snowflake, etc.). Developing, monitoring, maintaining, and tuning data pipelines. Writing at-scale data transformations in SQL and Python (a minimal sketch follows this listing). Performing code reviews and providing leadership and guidance to junior developers.

Qualifications: Curiosity in learning the business requirements that are driving the engineering requirements. Interest in new technologies and eagerness to bring those technologies and out-of-the-box ideas to the team. 3+ years of SQL experience. 3+ years of professional Python experience. 3+ years of professional Linux experience. Preferred familiarity with Snowflake, AWS, GCP, and Azure cloud environments. Intellectual curiosity and drive; self-starters will thrive in this position. Passion for technology: excitement for new technology, bleeding-edge applications, and a positive attitude towards solving real-world challenges.

Additional Skills: BS, MS, or PhD in Computer Science, Engineering, or equivalent real-world experience. Experience with big data and/or infrastructure. Bonus for having experience in setting up petabytes of data so they can be easily accessed. Understanding of data organization, i.e., partitioning, clustering, file sizes, file formats. Experience working with classical relational databases (Postgres, MySQL, MSSQL). Experience with Hadoop, Hive, Spark, Redshift, or other data processing tools (lots of time will be spent building and optimizing transformations). Proven ability to independently execute projects from concept to implementation to launch and to maintain a live product.

Perks of working at Annalect: We have an incredibly fun, collaborative, and friendly environment, and often host social and learning activities such as game night, speaker series, and so much more! Halloween is a special day on our calendar since it is our Founding Day; we go all out with decorations, costumes, and prizes! Generous vacation policy: paid time off (PTO) includes vacation days, personal days, and a Summer Friday program, plus extended time off around the holiday season. Our office is closed between Christmas and New Year to encourage our hardworking employees to rest, recharge, and celebrate the season with family and friends. As part of Omnicom, we have the backing and resources of a global billion-dollar company, but also the flexibility and pace of a "startup": we move fast, break things, and innovate. Work with a modern stack and environment to keep learning, improving, and experimenting with the latest technologies.
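
For illustration only, a minimal sketch of an "at-scale transformation in SQL and Python" of the kind the responsibilities above mention, written with PySpark. The bucket paths and column names are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("campaign_rollup").getOrCreate()

# Hypothetical input: raw impression events already landed in cloud storage as Parquet.
spark.read.parquet("s3://example-bucket/raw/impressions/").createOrReplaceTempView("impressions")

# Express the transformation in SQL, run it at scale with Spark, write back partitioned by day.
daily = spark.sql("""
    SELECT campaign_id,
           CAST(event_ts AS DATE)  AS event_date,
           COUNT(*)                AS impressions,
           COUNT(DISTINCT user_id) AS reach
    FROM impressions
    GROUP BY campaign_id, CAST(event_ts AS DATE)
""")

daily.write.mode("overwrite").partitionBy("event_date") \
    .parquet("s3://example-bucket/curated/campaign_daily/")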

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 15 Lacs

Pune

Work from Office

MS Azure Infra (must); PaaS experience will be a plus. Solutions must meet regulatory standards and manage risk effectively. Hands-on experience using Terraform to design and deploy solutions (at least 5+ years), adhering to best practices to minimize risk and ensure compliance with regulatory requirements.

Primary Skill: AWS Infra along with PaaS will be an added advantage. Certification in Terraform is an added advantage. Certification in Azure and AWS is an added advantage. Can handle large audiences to present HLD, LLD, and ERC. Able to drive solutions and projects independently and lead projects with a focus on risk management and regulatory compliance.

Secondary Skills: Amazon Elastic File System (EFS), Amazon Redshift, Amazon S3, Apache Spark, Ataccama DQ Analyzer, AWS Apache Airflow, AWS Athena, Azure Data Factory, Azure Data Lake Storage Gen2 (ADLS), Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse Analytics, BigID, C++, Cloud Storage, Collibra Data Governance (DG), Collibra Data Quality (DQ), Data Lake Storage, Data Vault Modeling, Databricks, DataProc, DDI, Dimensional Data Modeling, EDC AXON, Electronic Medical Record (EMR), Extract, Transform & Load (ETL), Financial Services Logical Data Model (FSLDM), Google Cloud Platform (GCP) BigQuery, Google Cloud Platform (GCP) Bigtable, Google Cloud Platform (GCP) Dataproc, HQL, IBM InfoSphere Information Analyzer, IBM Master Data Management (MDM), Informatica Data Explorer, Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Informatica Intelligent MDM SaaS, Inmon methodology, Java, Kimball Methodology, Metadata Encoding & Transmission Standards (METS), Metasploit, Microsoft Excel, Microsoft Power BI, NewSQL, NoSQL, OpenRefine, OpenVAS, Performance Tuning, Python, R, RDD Optimization, SAS, SQL, Tableau, Tenable Nessus, TIBCO Clarity.

Posted 1 month ago

Apply

8.0 - 10.0 years

12 - 18 Lacs

Noida

Work from Office

Primary Role Function:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Experience with AWS cloud services: EC2, Glue, RDS, Redshift.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with object-oriented/functional scripting languages: Python.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Write high-quality and well-documented code according to accepted standards based on user requirements.

Knowledge:
- Thorough, in-depth knowledge of design and analysis methodology and application development processes.
- Solid knowledge of databases.
- Programming experience with extensive business knowledge.
- University degree in Computer Science, Engineering, or equivalent industry experience.
- Solid understanding of SDLC and QA requirements.

Mandatory Competencies: Data on Cloud - AWS S3; Cloud - AWS; Python - Airflow; Python - Python; DevOps - Docker.

Posted 1 month ago

Apply

0.0 - 3.0 years

1 - 4 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer Company Name: Kinara Capital Job Description: As a Data Engineer at Kinara Capital, you will play a critical role in building and maintaining the data infrastructure necessary for effective data analysis and decision-making. You will collaborate with data scientists, analysts, and other stakeholders to support data-driven initiatives. Your primary responsibilities will include designing and implementing robust data pipelines, ensuring data quality and integrity, and optimizing data storage and retrieval processes. Key Responsibilities: - Develop, construct, test, and maintain data architectures including databases and large-scale processing systems. - Create and manage data pipelines to ingest, process, and transform data from various sources. - Collaborate with data scientists and analysts to understand data needs and develop solutions to meet those needs. - Monitor data quality and implement data governance best practices. - Optimize SQL queries and improve performance of data-processing systems. - Ensure data privacy and security standards are met and maintained. - Document data processes and pipelines to facilitate knowledge sharing within the team. Skills and Tools Required: - Proficiency in programming languages such as Python, Java, or Scala. - Experience with data warehousing solutions, such as Amazon Redshift, Google BigQuery, or Snowflake. - Strong knowledge of SQL and experience with relational databases like MySQL, PostgreSQL, or Oracle. - Familiarity with big data technologies like Apache Hadoop, Apache Spark, or Apache Kafka. - Understanding of data modeling and ETL (Extract, Transform, Load) processes. - Experience with cloud platforms such as AWS, Azure, or Google Cloud Platform. - Familiarity with data visualization tools (e.g., Tableau, Power BI) is a plus. - Strong analytical and problem-solving skills, with attention to detail. - Excellent communication skills to work collaboratively with cross-functional teams. Join Kinara Capital and leverage your data engineering skills to help drive innovative solutions and empower businesses through data.

Posted 1 month ago

Apply

6.0 - 11.0 years

17 - 20 Lacs

Mumbai

Work from Office

We are looking for a highly skilled and experienced professional with 6 to 11 years of experience to join our team as a Manager - Business Transformation in Mumbai. Roles and Responsibility Develop and implement automation processes to enhance efficiency and productivity. Create MIS dashboards using Tableau for business insights and decision-making. Design analytical models, including scorecards, based on business requirements. Conduct deviation analytics to identify areas for improvement. Collaborate with cross-functional teams to drive business transformation initiatives. Analyze data to provide actionable recommendations to stakeholders. Job Graduate with a strong understanding of MIS and visualization tools. Proven experience in process automation and data analysis. Strong knowledge of Tableau and other data visualization tools. Excellent analytical and problem-solving skills. Ability to work collaboratively with cross-functional teams. Strong communication and interpersonal skills.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

We are seeking a skilled Python Developer with expertise in Django, Flask, and API development to join our growing team. The Python Developer will be responsible for designing and implementing backend services, APIs, and integrations that power our core platform. The ideal candidate should have a strong foundation in Python programming, experience with Django and/or Flask frameworks, and a proven track record of delivering robust and scalable solutions.

Responsibilities: Design, develop, and maintain backend services and APIs using Python frameworks such as Django and Flask. Collaborate with front-end developers, product managers, and stakeholders to translate business requirements into technical solutions. Build and integrate RESTful APIs for seamless communication between our applications and external services (a minimal Flask sketch follows this listing).

Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience. 5+ years of professional experience as a Python Developer, with a focus on backend development.

Secondary Skills: Amazon Elastic File System (EFS), Amazon Redshift, Amazon S3, Apache Spark, Ataccama DQ Analyzer, AWS Apache Airflow, AWS Athena, Azure Data Factory, Azure Data Lake Storage Gen2 (ADLS), Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse Analytics, BigID, C++, Cloud Storage, Collibra Data Governance (DG), Collibra Data Quality (DQ), Data Lake Storage, Data Vault Modeling, Databricks, DataProc, DDI, Dimensional Data Modeling, EDC AXON, Electronic Medical Record (EMR), Extract, Transform & Load (ETL), Financial Services Logical Data Model (FSLDM), Google Cloud Platform (GCP) BigQuery, Google Cloud Platform (GCP) Bigtable, Google Cloud Platform (GCP) Dataproc, HQL, IBM InfoSphere Information Analyzer, IBM Master Data Management (MDM), Informatica Data Explorer, Informatica Data Quality (IDQ), Informatica Intelligent Data Management Cloud (IDMC), Informatica Intelligent MDM SaaS, Inmon methodology, Java, Kimball Methodology, Metadata Encoding & Transmission Standards (METS), Metasploit, Microsoft Excel, Microsoft Power BI, NewSQL, NoSQL, OpenRefine, OpenVAS, Performance Tuning, Python, R, RDD Optimization, SAS, SQL, Tableau, Tenable Nessus, TIBCO Clarity.
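
For illustration only, a minimal Flask sketch of the kind of RESTful backend service this posting describes. The resource name and in-memory store are hypothetical; a real service would sit on a database layer and proper validation.

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store used purely for illustration.
ORDERS = {}

@app.route("/api/orders", methods=["POST"])
def create_order():
    payload = request.get_json(force=True)
    order_id = len(ORDERS) + 1
    ORDERS[order_id] = {"id": order_id, "item": payload.get("item"), "qty": payload.get("qty", 1)}
    return jsonify(ORDERS[order_id]), 201

@app.route("/api/orders/<int:order_id>", methods=["GET"])
def get_order(order_id):
    order = ORDERS.get(order_id)
    return (jsonify(order), 200) if order else (jsonify({"error": "not found"}), 404)

if __name__ == "__main__":
    app.run(debug=True)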

Posted 1 month ago

Apply

7.0 - 12.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Your Job: As a Data Engineer you will be part of a team that designs, develops, and delivers Data Pipelines and Data Analytics Solutions for Koch Industries. Koch Industries is a privately held global organization with over 120,000 employees around the world, with subsidiaries involved in manufacturing, trading, and investments. Koch Global Solution India (KGSI) is being developed in India to extend its IT operations, as well as act as a hub for innovation in the IT function. As KGSI rapidly scales up its operations in India, its employees will get opportunities to carve out a career path for themselves within the organization. This role will have the opportunity to join on the ground floor and will play a critical part in helping build out the Koch Global Solution (KGS) over the next several years. Working closely with global colleagues will provide significant international exposure to the employees.

Our Team: The Enterprise Data and Analytics team at Georgia-Pacific is focused on creating an enterprise capability around Data Engineering Solutions for operational and commercial data, as well as helping businesses develop, deploy, manage, and monitor Data Pipelines and Analytics solutions for manufacturing, operations, supply chain, and other key areas.

What You Will Do:
- ETL Solutions: Design, implement, and manage large-scale ETL solutions using the AWS technology stack, including EventBridge, Lambda, Glue, Step Functions, Redshift, and CloudWatch (a minimal trigger sketch follows this listing).
- Data Pipeline Management: Design, develop, enhance, and debug existing data pipelines to ensure seamless operations.
- Data Modelling: Proven experience in designing and developing data models.
- Best Practices Implementation: Develop and implement best practices to ensure high data availability, computational efficiency, cost-effectiveness, and data quality within Snowflake and AWS environments.
- Enhancement: Build and enhance data products, processes, functionalities, and tools to streamline all stages of data lake implementation and analytics solution development, including proofs of concept, prototypes, and production systems.
- Production Support: Provide ongoing support for production data pipelines, ensuring high availability and performance.
- Issue Resolution: Monitor, troubleshoot, and resolve issues within data pipelines and ETL processes promptly.
- Automation: Develop and implement scripts and tools to automate routine tasks and enhance system efficiency.

Who You Are (Basic Qualifications):
- Bachelor's degree in Computer Science, Engineering, or a related IT field, with at least 7+ years of experience in software development.
- 5+ years of hands-on experience designing, implementing, and managing large-scale ETL solutions using the AWS technology stack, including EventBridge, Lambda, Glue, Step Functions, Redshift, and CloudWatch.
- Primary skill set: SQL, S3, AWS Glue, PySpark, Python, Lambda, columnar DB (Redshift), AWS IAM, Step Functions, Git, Terraform, CI/CD.
- Good to have: Experience with the MSBI stack, including SSIS, SSAS, and SSRS.

What Will Put You Ahead:
- In-depth knowledge of the entire suite of services in the AWS Data Service Platform.
- Strong coding experience using Python and PySpark.
- Experience designing and implementing data models.
- Cloud Data Analytics/Engineering certification.

Who We Are: At Koch, employees are empowered to do what they do best to make life better. Learn how we help employees unleash their potential while creating value for themselves and the company. Additionally, everyone has individual work and personal needs. We seek to enable the best work environment that helps you and the business work together to produce superior results.
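
For illustration only, a minimal sketch of the EventBridge-to-Step Functions wiring this listing describes: a Lambda handler that starts an ETL state machine when a new object lands in S3. The event shape, environment variable, and resource names are hypothetical.

import json
import os
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    """Triggered by an EventBridge rule on S3 object creation (hypothetical wiring);
    starts the state machine that runs the Glue jobs and the Redshift load."""
    detail = event.get("detail", {})
    execution_input = {
        "bucket": detail.get("bucket", {}).get("name"),
        "key": detail.get("object", {}).get("key"),
    }
    response = sfn.start_execution(
        stateMachineArn=os.environ["STATE_MACHINE_ARN"],  # placeholder env var
        input=json.dumps(execution_input),
    )
    return {"executionArn": response["executionArn"]}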

Posted 1 month ago

Apply

4.0 - 8.0 years

2 - 6 Lacs

Noida

Work from Office

We are looking for a skilled SQL + PySpark Developer with 4 to 8 years of experience. The ideal candidate should have strong proficiency in SQL and PySpark/Python, with the ability to work effectively in a team. Roles and Responsibility Design, develop, and implement data models using SQL and PySpark/Python. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems using Apache Spark and AWS Glue ETL. Ensure data quality and integrity by implementing data validation and testing procedures. Optimize database performance and troubleshoot issues. Participate in code reviews and contribute to improving overall code quality. Job Requirements: Proven experience with SQL, Python, Amazon Redshift, Apache Spark (PySpark), AWS IAM, Amazon S3, and AWS Glue ETL. Strong communication and collaboration skills, with the ability to work effectively in a team. Good understanding of data modeling concepts and principles. Ability to work in a fast-paced environment and meet deadlines. Strong problem-solving skills and attention to detail. Familiarity with Agile development methodologies and version control systems.

Posted 1 month ago

Apply

8.0 - 10.0 years

5 - 9 Lacs

Noida

Work from Office

We are looking for a skilled Power BI Dashboarding and Visualization Developer with 8 to 10 years of experience. The ideal candidate will have a strong background in designing and developing interactive dashboards and visualizations using Power BI, as well as integrating and optimizing Power BI solutions within cloud environments. Roles and Responsibility Design and develop interactive dashboards and visualizations using Power BI. Integrate and optimize Power BI solutions within AWS and Azure environments. Collaborate with business users to gather requirements and deliver insights. Ensure data accuracy, security, and performance. Develop and maintain complex data models and reports using Power BI. Troubleshoot and resolve issues related to Power BI dashboard development. Job Strong experience with Power BI, including DAX, Power Query, and data modeling. Proficiency in SQL for querying and data manipulation. Familiarity with data warehouses such as Redshift, Snowflake, or Synapse. Knowledge of Azure Data Factory and AWS Glue for data integration. Understanding of REST APIs and integrating external data sources. Experience with Git for version control and CI/CD pipelines. Excellent communication and problem-solving skills.

Posted 1 month ago

Apply

6.0 - 8.0 years

2 - 6 Lacs

Pune

Work from Office

We are looking for a skilled Python AWS Developer with 6 to 8 years of experience. The ideal candidate will have expertise in developing scalable and efficient applications on the AWS platform. Roles and Responsibility Design, develop, and deploy scalable and efficient applications on the AWS platform. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop high-quality code that meets industry standards and best practices. Troubleshoot and resolve technical issues efficiently. Participate in code reviews and contribute to improving overall code quality. Stay updated with the latest trends and technologies in Python and AWS development. Job Strong proficiency in Python programming language. Experience with AWS services such as EC2, S3, Lambda, etc. Knowledge of database management systems such as MySQL or PostgreSQL. Familiarity with agile development methodologies and version control systems like Git. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Additional Info The company name is Apptad Technologies Pvt Ltd., and the industry is Employment Firms/Recruitment Services Firms.

Posted 1 month ago

Apply

5.0 - 10.0 years

3 - 6 Lacs

Noida

Work from Office

We are looking for a skilled Reltio MDM Developer with 5 to 10 years of experience to join our team. The ideal candidate will be responsible for implementing and managing the Reltio MDM platform, ensuring data remains consistent, accurate, and up to date across different systems. Roles and Responsibility Design and implement data models in Reltio, focusing on data structure and relationships to ensure a unified view of master data. Integrate Reltio MDM with other systems such as CRMs, ERPs, data warehouses, and external data sources to ensure data synchronization. Set up and enforce data governance policies to ensure data quality, accuracy, and compliance with regulations. Use tools to cleanse, enrich, and standardize data to make it consistent across different systems. Customize the Reltio platform using APIs and configurations to meet organizational needs. Develop and implement business rules in Reltio to ensure data processing according to predefined criteria and workflows. Perform performance tuning of the MDM platform to handle large volumes of data efficiently. Monitor the system for issues and perform maintenance tasks for continuous operation. Job Requirements: Deep understanding of Reltio's capabilities, including its data model, APIs, and integrations. Familiarity with Java, Python, or other programming languages for customization and scripting. Ability to design and implement data models that fit the organization's needs. Knowledge of MDM concepts like data governance, data quality, and data lifecycle management. Experience with databases and query languages like SQL or NoSQL. Familiarity with integrating MDM solutions with various systems using RESTful APIs, SOAP, or other protocols. Experience with cloud services (e.g., AWS, Azure), as Reltio can be deployed on the cloud. Knowledge of data quality tools and techniques for data validation and enrichment. The company offers a full-time, long-term position with opportunities for growth and development.

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Noida

Work from Office

We are looking for a skilled Database Engineer with 5 to 10 years of experience to design, develop, and maintain our database infrastructure. This position is based remotely. Roles and Responsibility Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely. Work with databases of varying scales, including small-scale and big data processing. Implement data security measures to protect sensitive information and comply with relevant regulations. Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms. Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture. Migrate data from spreadsheets or other sources to relational database systems or cloud-based solutions like Google BigQuery and AWS. Develop import workflows and scripts to automate data import processes. Ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity. Monitor database health and resolve issues, while collaborating with the full-stack web developer to implement efficient data access and retrieval mechanisms. Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows, exploring third-party technologies as alternatives to legacy approaches for efficient data pipelines. Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices, and use Python for tasks such as data manipulation, automation, and scripting. Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, taking accountability for achieving development milestones. Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities, while also collaborating with fellow members of the Data Research Engineering Team as required. Perform tasks with precision and build reliable systems, leveraging online resources effectively like StackOverflow, ChatGPT, Bard, etc., considering their capabilities and limitations. Job Proficiency in SQL and relational database management systems like PostgreSQL or MySQL, along with database design principles. Strong familiarity with Python for scripting and data manipulation tasks, with additional knowledge of Python OOP being advantageous. Demonstrated problem-solving skills with a focus on optimizing database performance and automating data import processes. Knowledge of cloud-based databases like AWS RDS and Google BigQuery. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. 
Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently. About Company Marketplace is an experienced team of industry experts dedicated to helping readers make informed decisions and choose the right products with ease. We arm people with trusted advice and guidance, so they can make confident decisions and get back to doing the things they care about most.
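
For illustration only, a minimal sketch of the import-workflow automation described above: loading a spreadsheet export into a relational database with pandas and SQLAlchemy. The connection string, file path, table name, and validation rule are hypothetical placeholders.

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection string; real values would come from configuration or a secrets store.
engine = create_engine("postgresql+psycopg2://etl_user:***@localhost:5432/analytics")

def import_spreadsheet(path: str, table: str) -> int:
    """Load a spreadsheet export (CSV), apply light validation, and append it to a table."""
    df = pd.read_csv(path)
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]  # normalise headers
    df = df.dropna(subset=["id"])                                           # basic integrity rule
    df.to_sql(table, engine, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = import_spreadsheet("exports/customers.csv", "customers_staging")
    print(f"Imported {rows} rows")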

Posted 1 month ago

Apply

7.0 - 8.0 years

6 - 10 Lacs

Noida

Work from Office

We are looking for a skilled Data Warehouse Lead with 7 to 8 years of experience to design, develop, and maintain data models optimized for reporting and analysis. The ideal candidate will have a strong background in data warehousing concepts, principles, and methodologies. This position is based remotely. Roles and Responsibility Lead the design, development, and maintenance of data models optimized for reporting and analysis. Ensure data quality, integrity, and consistency throughout the data warehousing process. Troubleshoot and resolve issues related to data pipelines and data integrity. Collaborate with business analysts and stakeholders to understand their data needs and provide solutions. Communicate technical concepts effectively to non-technical audiences. Ensure the data warehouse is scalable to accommodate growing data volumes and user demands. Adhere to data governance and privacy policies and procedures. Implement and monitor data quality metrics and processes. Lead and mentor a team of data warehouse developers, providing technical guidance and support. Stay updated with the latest trends and technologies in data warehousing and business intelligence. Foster a collaborative and high-performing team environment. Job Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 7-8 years of progressive experience in data warehousing, with at least 3 years in a lead or senior role. Deep understanding of data warehousing concepts, principles, and methodologies. Strong proficiency in SQL and experience with various database platforms (e.g., BigQuery, Redshift, Snowflake). Good understanding of Affiliate Marketing Data (GA4, Paid marketing channels like Google Ads, Facebook Ads, etc. the more the better). Hands-on experience with dbt and other ETL/ELT tools and technologies. Experience with data modeling techniques (e.g., dimensional modeling, star schema, snowflake schema). Experience with cloud-based data warehousing solutions (e.g., AWS, Azure, GCP) - GCP is highly preferred. Excellent problem-solving, analytical, and troubleshooting skills. Strong communication, presentation, and interpersonal skills. Ability to thrive in a fast-paced and dynamic environment. Familiarity with business intelligence and reporting tools (e.g., Tableau, Power BI, Looker). Experience with data governance and data quality frameworks is a plus.

Posted 1 month ago

Apply

0.0 - 3.0 years

2 - 6 Lacs

Chandigarh

Work from Office

We are looking for a highly skilled and experienced Analyst to join our team at eClerx Services Ltd. The ideal candidate will have a strong background in IT Services & Consulting, with excellent analytical skills and attention to detail. Roles and Responsibility Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain complex data analysis systems and reports. Provide expert-level support for data analysis and reporting needs. Identify trends and patterns in large datasets to inform business decisions. Design and implement process improvements to increase efficiency and productivity. Develop and maintain technical documentation for data analysis systems. Job Requirements Strong understanding of data analysis principles and techniques. Proficiency in data visualization tools and programming languages. Excellent communication and problem-solving skills. Ability to work in a fast-paced environment and meet deadlines. Strong attention to detail and organizational skills. Experience working with large datasets and developing complex reports.

Posted 1 month ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Pune

Work from Office

We are looking for a skilled Senior Analyst to join our team at eClerx Services Ltd., with 6-10 years of experience in the IT Services & Consulting industry. The ideal candidate will have a strong background in analysis and problem-solving, with excellent communication skills. Roles and Responsibility Conduct thorough analysis of complex data sets to identify trends and patterns. Develop and implement effective analytical solutions to drive business growth. Collaborate with cross-functional teams to provide insights and recommendations. Design and maintain databases and systems to support business intelligence initiatives. Develop and deliver presentations to senior management on key findings and recommendations. Stay up-to-date with industry trends and emerging technologies to improve processes. Job Requirements Strong understanding of analytical principles and methodologies. Proficiency in statistical analysis and data visualization tools. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Strong problem-solving and critical thinking skills. Experience with business intelligence tools and technologies.

Posted 1 month ago

Apply

4.0 - 6.0 years

3 - 7 Lacs

Noida

Work from Office

Title: Python Developer (ref. 6566420). Company: Apptad Technologies Pvt Ltd. Industry: Employment Firms/Recruitment Services Firms. Experience: 4 to 6 years. Job description: Experience with AWS, Python, AWS CloudFormation, Step Functions, Glue, Lambda, S3, SNS, SQS, IAM, Athena, EventBridge, and API Gateway. Experience in Python development. Expertise in multiple applications and functionalities. Domain skills with a quick learning inclination. Good SQL knowledge and understanding of databases. Familiarity with MS Office and SharePoint. High aptitude and excellent problem-solving skills. Strong analytical skills. Interpersonal skills and ability to influence stakeholders.

Posted 1 month ago

Apply

8.0 - 12.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Lead design, development, and deployment of cloud-native and hybrid solutions on AWS and GCP. Ensure robust infrastructure using services like GKE, GCE, Cloud Functions, Cloud Run (GCP) and EC2, Lambda, ECS, S3, etc. (AWS).

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Pune

Work from Office

About the Role: We're looking for a Data Engineer to help build reliable and scalable data pipelines that power reports, dashboards, and business decisions at Hevo. You'll work closely with engineering, product, and business teams to make sure data is accurate, available, and easy to use.

Key Responsibilities: Independently design and implement scalable ELT workflows using tools like Hevo, dbt, Airflow, and Fivetran. Ensure the availability, accuracy, and timeliness of datasets powering analytics, dashboards, and operations. Collaborate with Platform and Engineering teams to address issues related to ingestion, schema design, and transformation logic. Escalate blockers and upstream issues proactively to minimize delays for stakeholders. Maintain strong documentation and ensure discoverability of all models, tables, and dashboards. Own end-to-end pipeline quality, minimizing escalations or errors in models and dashboards. Implement data observability practices such as freshness checks, lineage tracking, and incident alerts (a minimal freshness-check sketch follows this listing). Regularly audit and improve accuracy across business domains. Identify gaps in instrumentation, schema evolution, and transformation logic. Ensure high availability and data freshness through monitoring, alerting, and incident resolution processes. Set up internal SLAs, runbooks, and knowledge bases (data catalog, transformation logic, FAQs). Improve onboarding material and templates for future engineers and analysts.

Required Skills & Experience: 3-5 years of experience in Data Engineering, Analytics Engineering, or related roles. Proficient in SQL and Python for data manipulation, automation, and pipeline creation. Strong understanding of ELT pipelines, schema management, and data transformation concepts. Experience with the modern data stack: dbt, Airflow, Hevo, Fivetran, Snowflake, Redshift, or BigQuery. Solid grasp of data warehousing concepts: OLAP/OLTP, star/snowflake schemas, relational and columnar databases. Understanding of REST APIs, webhooks, and event-based data ingestion. Strong debugging skills and ability to troubleshoot issues across systems.

Preferred Background: Experience in high-growth industries such as eCommerce, FinTech, or hyper-commerce environments. Experience working with or contributing to a data platform (ELT/ETL tools, observability, lineage, etc.).

Core Competencies: Excellent communication and problem-solving skills. Attention to detail and a self-starter mindset. High ownership and urgency in execution. Collaborative and coachable team player. Strong prioritization and resilience under pressure.
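
For illustration only, a minimal sketch of the kind of freshness check mentioned above, using SQLAlchemy (with the snowflake-sqlalchemy dialect assumed). The warehouse URL, table, and SLA are hypothetical placeholders; a real pipeline would route the failure to its alerting system.

from datetime import datetime, timedelta, timezone
import sqlalchemy as sa

# Placeholder warehouse URL; real values come from pipeline config/secrets.
engine = sa.create_engine("snowflake://user:***@example_account/ANALYTICS/PUBLIC?warehouse=BI_WH")

def check_freshness(table: str, ts_column: str, max_lag_hours: int = 6) -> bool:
    """Return False if the newest row in `table` is older than the freshness SLA."""
    with engine.connect() as conn:
        latest = conn.execute(sa.text(f"SELECT MAX({ts_column}) FROM {table}")).scalar()
    if latest is None:
        return False
    lag = datetime.now(timezone.utc) - latest.replace(tzinfo=timezone.utc)
    return lag <= timedelta(hours=max_lag_hours)

if not check_freshness("analytics.fct_orders", "loaded_at"):
    raise RuntimeError("Freshness SLA breached for analytics.fct_orders")  # hook into alerting in practice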

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Gurugram

Work from Office

Experience: 8-10 years. Job Title: DevOps Engineer. Location: Gurugram.

Job Summary: We are seeking a highly skilled and experienced Lead DevOps Engineer to drive the design, automation, and maintenance of secure and scalable cloud infrastructure. The ideal candidate will have deep technical expertise in cloud platforms (AWS/GCP), container orchestration, CI/CD pipelines, and DevSecOps practices. You will be responsible for leading infrastructure initiatives, mentoring team members, and collaborating closely with software and QA teams to enable high-quality, rapid software delivery.

Key Responsibilities:
Cloud Infrastructure & Automation: Design, deploy, and manage secure, scalable cloud environments using AWS, GCP, or similar platforms. Develop Infrastructure-as-Code (IaC) using Terraform for consistent resource provisioning. Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, AWS CodePipeline, or Azure DevOps.
Containerization & Orchestration: Containerize applications using Docker for seamless development and deployment. Manage and scale Kubernetes clusters (on-premise or cloud-managed, like AWS EKS). Monitor and optimize container environments for performance, scalability, and cost-efficiency.
Security & Compliance: Enforce cloud security best practices including IAM policies, VPC design, and secure secrets management (e.g., AWS Secrets Manager). Conduct regular vulnerability assessments and security scans, and implement remediation plans. Ensure infrastructure compliance with industry standards and manage incident response protocols.
Monitoring & Optimization: Set up and maintain monitoring/observability systems (e.g., Grafana, Prometheus, AWS CloudWatch, Datadog, New Relic). Analyze logs and metrics to troubleshoot issues and improve system performance. Optimize resource utilization and cloud spend through continuous review of infrastructure configurations.
Scripting & Tooling: Develop automation scripts (Shell/Python) for environment provisioning, deployments, backups, and log management (a minimal backup-automation sketch follows this listing). Maintain and enhance CI/CD workflows to ensure efficient and stable deployments.
Collaboration & Leadership: Collaborate with engineering and QA teams to ensure infrastructure aligns with development needs. Mentor junior DevOps engineers, fostering a culture of continuous learning and improvement. Communicate technical concepts effectively to both technical and non-technical audiences.

Education: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent hands-on experience. Certifications: AWS Certified DevOps Engineer Professional (preferred) or other relevant cloud certifications.

Experience: 8+ years of experience in DevOps or Cloud Infrastructure roles, including at least 3 years in a leadership capacity. Strong hands-on expertise in AWS (ECS, EKS, RDS, S3, Lambda, CodePipeline) or GCP equivalents. Proven experience with CI/CD tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps. Advanced knowledge of the Docker and Kubernetes ecosystem. Skilled in Infrastructure-as-Code (Terraform) and configuration management tools like Ansible. Proficient in scripting (Shell, Python) for automation and tooling. Experience implementing DevSecOps practices and advanced security configurations. Exposure to data tools (e.g., Apache Superset, AWS Athena, Redshift) is a plus.

Soft Skills: Strong problem-solving abilities and capacity to work under pressure. Excellent communication and team collaboration. Organized, with attention to detail and a commitment to quality.

Preferred Skills: Experience with alternative cloud platforms (e.g., Oracle Cloud, DigitalOcean). Familiarity with advanced observability stacks (Grafana, Prometheus, Loki, Datadog).
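
For illustration only, a minimal sketch of the kind of Python backup automation mentioned under Scripting & Tooling: taking dated RDS snapshots on a schedule with boto3. The region and instance identifiers are hypothetical placeholders.

import datetime
import boto3

rds = boto3.client("rds", region_name="ap-south-1")  # placeholder region

def snapshot_instances(instance_ids):
    """Create a dated manual snapshot for each RDS instance; intended to run from a scheduler."""
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M")
    created = []
    for instance_id in instance_ids:
        snapshot_id = f"{instance_id}-manual-{stamp}"
        rds.create_db_snapshot(
            DBSnapshotIdentifier=snapshot_id,
            DBInstanceIdentifier=instance_id,
        )
        created.append(snapshot_id)
    return created

if __name__ == "__main__":
    # Hypothetical instance names for illustration only.
    print(snapshot_instances(["orders-prod", "users-prod"]))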

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office

We are looking for a skilled AWS Developer with 5 to 10 years of experience. This position is based in Chennai and requires an immediate or 15-day notice period. Roles and Responsibility Design, develop, and deploy scalable and efficient software applications on AWS. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain high-quality code that meets industry standards and best practices. Troubleshoot and resolve technical issues efficiently. Participate in code reviews and contribute to improving overall code quality. Stay updated with the latest trends and technologies in AWS development. Job Requirements Strong proficiency in AWS services such as EC2, S3, Lambda, etc. Experience with cloud-based technologies and platforms. Excellent problem-solving skills and attention to detail. Strong communication and teamwork skills. Ability to work in a fast-paced environment and meet deadlines. Familiarity with agile development methodologies and version control systems. Skills: AWS Developer

Posted 1 month ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Work from Office

We are looking for a skilled Senior Data Engineer with 5-8 years of experience to join our team at IDESLABS PRIVATE LIMITED. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills. Roles and Responsibility Design, develop, and implement large-scale data pipelines and architectures. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain complex data systems and databases. Ensure data quality, integrity, and security. Optimize data processing workflows for improved performance and efficiency. Troubleshoot and resolve technical issues related to data engineering. Job Requirements Strong knowledge of data engineering principles and practices. Experience with data modeling, database design, and data warehousing. Proficiency in programming languages such as Python, Java, or C++. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills.

Posted 1 month ago

Apply

10.0 - 15.0 years

7 - 11 Lacs

Noida

Work from Office

R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For TM 2023 by Great Place To Work Institute. We are committed to transform the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated with a robust set of employee benefits and engagement activities. Description : We are seeking a highly skilled and motivated Data Cloud Architect to join our Product and technology team. As a Data Cloud Architect, you will play a key role in designing and implementing our cloud-based data architecture, ensuring scalability, reliability, and optimal performance for our data-intensive applications. Your expertise in cloud technologies, data architecture, and data engineering will drive the success of our data initiatives. Responsibilities: Collaborate with cross-functional teams, including data engineers, data leads, product owner and stakeholders, to understand business requirements and data needs. Design and implement end-to-end data solutions on cloud platforms, ensuring high availability, scalability, and security. Architect delta lakes, data lake, data warehouses, and streaming data solutions in the cloud. Evaluate and select appropriate cloud services and technologies to support data storage, processing, and analytics. Develop and maintain cloud-based data architecture patterns and best practices. Design and optimize data pipelines, ETL processes, and data integration workflows. Implement data security and privacy measures in compliance with industry standards. Collaborate with DevOps teams to deploy and manage data-related infrastructure on the cloud. Stay up-to-date with emerging cloud technologies and trends to ensure the organization remains at the forefront of data capabilities. Provide technical leadership and mentorship to data engineering teams. Qualifications: Bachelors degree in computer science, Engineering, or a related field (or equivalent experience). 10 years of experience as a Data Architect, Cloud Architect, or in a similar role. Expertise in cloud platforms such as Azure. Strong understanding of data architecture concepts and best practices. Proficiency in data modeling, ETL processes, and data integration techniques. Experience with big data technologies and frameworks (e.g., Hadoop, Spark). Knowledge of containerization technologies (e.g., Docker, Kubernetes). Familiarity with data warehousing solutions (e.g., Redshift, Snowflake). Strong knowledge of security practices for data in the cloud. Excellent problem-solving and troubleshooting skills. Effective communication and collaboration skills. Ability to lead and mentor technical teams. Additional Preferred Qualifications: Bachelors degree / Master's degree in Data Science, Computer Science, or related field. Relevant cloud certifications (e.g., Azure Solutions Architect) and data-related certifications. Experience with real-time data streaming technologies (e.g., Apache Kafka). Knowledge of machine learning and AI concepts in relation to cloud-based data solutions. Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. 
Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com or visit us on Facebook.

Posted 1 month ago

Apply

8.0 - 12.0 years

20 - 25 Lacs

Pune

Work from Office

Designation: Big Data Lead/Architect Location: Pune Experience: 8-10 years NP - immediate joiner/15-30 days notice Reports To – Product Engineering Head Job Overview We are looking to hire a talented big data engineer to develop and manage our company’s Big Data solutions. In this role, you will be required to design and implement Big Data tools and frameworks, implement ELT processes, collaborate with development teams, build cloud platforms, and maintain the production system. To ensure success as a big data engineer, you should have in-depth knowledge of Hadoop technologies, excellent project management skills, and high-level problem-solving skills. A top-notch Big Data Engineer understands the needs of the company and institutes scalable data solutions for its current and future needs. Responsibilities: Meeting with managers to determine the company’s Big Data needs. Developing big data solutions on AWS, using Apache Spark, Databricks, Delta Tables, EMR, Athena, Glue, Hadoop, etc. Loading disparate data sets and conducting pre-processing services using Athena, Glue, Spark, etc. Collaborating with the software research and development teams. Building cloud platforms for the development of company applications. Maintaining production systems. Requirements: 8-10 years of experience as a big data engineer. Must be proficient with Python & PySpark. In-depth knowledge of Hadoop, Apache Spark, Databricks, Delta Tables, AWS data analytics services. Must have extensive experience with Delta Tables, JSON, Parquet file format. Good to have experience with AWS data analytics services like Athena, Glue, Redshift, EMR. Familiarity with Data warehousing will be a plus. Must have Knowledge of NoSQL and RDBMS databases. Good communication skills. Ability to solve complex data processing, transformation related problems
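
For illustration only, a minimal sketch of the kind of Athena-based pre-processing step this listing mentions: submitting a query against a Glue-cataloged table with boto3 and polling for completion. The region, database, table, and result bucket are hypothetical placeholders.

import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # placeholder region

def run_athena_query(sql: str, database: str, output: str) -> str:
    """Submit a query and poll until it finishes; returns the query execution id."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output},
    )["QueryExecutionId"]
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return qid

# Hypothetical pre-processing step: summarise raw clickstream data registered in the Glue catalog.
run_athena_query(
    "SELECT event_date, COUNT(*) AS events FROM raw_clickstream GROUP BY event_date",
    database="analytics_raw",
    output="s3://example-athena-results/",
)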

Posted 1 month ago

Apply