
745 Amazon Redshift Jobs - Page 20

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 - 12.0 years

9 - 13 Lacs

Chennai

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: Engineering graduate, preferably a Computer Science graduate; 15 years of full-time education.

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using the Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.

Roles & Responsibilities:
- Assist with the blueprint and design of the data platform components using the Databricks Unified Data Analytics Platform.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data pipelines on the platform.
- Design and implement data security and access controls.
- Troubleshoot and resolve issues related to data platform components.

Professional & Technical Skills:
- Must have: Experience with the Databricks Unified Data Analytics Platform.
- Must have: Strong understanding of data platform components and architecture.
- Good to have: Experience with cloud-based data platforms such as AWS or Azure.
- Good to have: Experience with data security and access controls.
- Good to have: Experience with data pipeline development and maintenance.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices.
- Return to office (RTO) is mandatory for 2-3 days a week, working one of two shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Qualification: Engineering graduate, preferably a Computer Science graduate; 15 years of full-time education.

Posted 1 month ago

Apply

7.0 - 12.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the implementation of data platform solutions.
- Conduct performance tuning and optimization of data platform components.

Professional & Technical Skills:
- Must have: Proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of cloud-based data platforms.
- Experience in designing and implementing data pipelines.
- Knowledge of data governance and security best practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education.

Posted 1 month ago

Apply

4.0 - 8.0 years

11 - 16 Lacs

Hyderabad

Work from Office

Job Summary: We are looking for a highly skilled AWS Data Architect to design and implement scalable, secure, and high-performing data architecture solutions on AWS. The ideal candidate will have hands-on experience in building data lakes, data warehouses, and data pipelines, along with a solid understanding of data governance and cloud security best practices.

Roles and Responsibilities:
- Design and implement data architecture solutions on AWS using services such as S3, Redshift, Glue, Lake Formation, Athena, and Lambda.
- Develop scalable ETL/ELT workflows and data pipelines using AWS Glue, Apache Spark, or AWS Data Pipeline.
- Define and implement data governance, security, and compliance strategies, including IAM policies, encryption, and data cataloging.
- Create and manage data lakes and data warehouses that are scalable, cost-effective, and secure.
- Collaborate with data engineers, analysts, and business stakeholders to develop robust data models and reporting solutions.
- Evaluate and recommend tools, technologies, and best practices to optimize data architecture and ensure high-quality solutions.
- Ensure data quality, performance tuning, and optimization for large-scale data storage and processing.

Required Skills and Qualifications:
- Proven experience in AWS data services such as S3, Redshift, Glue, etc.
- Strong knowledge of data modeling, data warehousing, and big data architecture.
- Hands-on experience with ETL/ELT tools and data pipeline frameworks.
- Good understanding of data security and compliance in cloud environments.
- Excellent problem-solving skills and ability to work collaboratively with cross-functional teams.
- Strong verbal and written communication skills.

Preferred Skills:
- AWS Certified Data Analytics – Specialty or AWS Solutions Architect certification.
- Experience in performance tuning and optimizing large datasets.
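For context, a minimal sketch of the kind of serverless, S3-backed query work this posting describes: starting an Athena query with boto3. The region, database, table, and results bucket below are illustrative assumptions, not details from the listing.

import boto3

# Athena queries data in place on S3; results land in the named output bucket.
athena = boto3.client("athena", region_name="ap-south-1")  # assumed region

response = athena.start_query_execution(
    QueryString="SELECT region, COUNT(*) AS orders FROM sales GROUP BY region",
    QueryExecutionContext={"Database": "analytics_lake"},  # assumed Glue catalog database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},  # assumed bucket
)
print("Started query:", response["QueryExecutionId"])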

Posted 1 month ago

Apply

0.0 - 1.0 years

3 - 5 Lacs

Bengaluru

Work from Office

About the Company: Kinara Capital is a FinTech NBFC dedicated to driving financial inclusion in the MSME sector. Our mission is to transform lives, livelihoods, and local economies by providing fast and flexible loans without property collateral to small business entrepreneurs. Led by a women-majority management team, Kinara Capital values diversity and inclusion and fosters a collaborative working environment. Kinara Capital is the only company from India recognized globally by the World Bank/IFC, receiving a gold award in 2019 as 'Bank of the Year - Asia' for its innovative work in SME financing. Kinara Capital is an RBI-registered Systemically Important NBFC. Headquartered in Bangalore, we have 110 branches across Karnataka, Gujarat, Maharashtra, Andhra Pradesh, Telangana, Tamil Nadu, and UT Puducherry, with more than 1,000 employees. https://kinaracapital.com/

Title: Data Engineer
Team: Data Warehouse Team

Purpose of Job: This is a hands-on coding and designing position. We are looking for an exceptionally talented data engineer with exposure to implementing AWS services to build data pipelines, API integrations, and data warehouse designs.

Job Responsibilities:
- Excellent coding skills in Python, PySpark, and SQL.
- Extensive experience in the Spark ecosystem, covering both real-time and batch processing.
- Experience with AWS Glue, EMR, DMS, Lambda, S3, DynamoDB, Step Functions, Airflow, RDS, Aurora, etc.
- Experience with modern database systems such as Redshift, Presto, Hive, etc.
- Has previously built data lakes on S3 or Apache Hudi.
- Solid understanding of data warehousing concepts.
- Good to have: experience with tools such as Kafka or Kinesis.
- Good to have: AWS Developer Associate or Solutions Architect Associate certification.

Qualifications: At least a bachelor's degree in Science, Engineering, or Applied Mathematics.
Other Requirements: Learning attitude and good communication skills.
Reports to: Lead Data Engineer
Place of work: Head office, Bangalore
Job Type: Full Time
No. of Posts: 2
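For context, a minimal sketch of the batch-processing work this posting describes: a PySpark job reading raw CSV from S3 and writing partitioned Parquet to a data lake. Paths and column names are illustrative assumptions; s3:// paths resolve on EMR/Glue, while standalone Spark clusters typically need s3a://.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("loan-events-batch").getOrCreate()

# Read raw landing-zone data (assumed path and schema).
raw = spark.read.option("header", True).csv("s3://example-raw/loan_events/")

# Basic cleanup: dedupe on the business key, derive a partition column,
# and drop rows missing a mandatory field.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("amount").isNotNull())
)

cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated/loan_events/"
)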

Posted 1 month ago

Apply

2.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Talend ETL
Good-to-have skills: NA
Minimum 2 year(s) of experience is required.
Educational Qualification: 15 years of full-time education.

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in developing solutions that enhance business operations and efficiency.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and implement ETL processes using the Talend ETL tool.
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Optimize and troubleshoot ETL processes for performance and efficiency.
- Create and maintain technical documentation for ETL processes.
- Assist in testing and debugging ETL processes to ensure data accuracy.

Professional & Technical Skills:
- Must have: Proficiency in Talend ETL.
- Strong understanding of data integration concepts.
- Experience with data modeling and database design.
- Knowledge of SQL and database querying.
- Familiarity with data warehousing concepts.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Talend ETL.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education.

Posted 1 month ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Join Kyndryl as a Data Architect, where you will unlock the power of data to drive strategic decisions and shape the future of our business. As a key member of our team, you will harness your expertise in basic statistics, business fundamentals, and communication to uncover valuable insights and transform raw data into rigorous visualizations and compelling stories.

In this role, you will have the opportunity to work closely with our customers as part of a top-notch team. You will dive deep into vast IT datasets, unraveling the mysteries hidden within, and discover trends and patterns that will revolutionize our customers' understanding of their own landscapes. Armed with your advanced analytical skills, you will draw compelling conclusions and develop data-driven insights that will directly impact their decision-making processes.

Your Role and Responsibilities:
- Data Architecture Design: Design scalable, secure, and high-performance data architectures, including data warehouses, data lakes, and BI solutions.
- Data Modeling: Develop and maintain complex data models (ER, star, and snowflake schemas) to support BI and analytics requirements.
- BI Strategy and Implementation: Lead the design and implementation of BI solutions using platforms like Power BI, Tableau, Qlik, and Looker.
- ETL/ELT Management: Architect efficient ETL/ELT pipelines for data transformation and integration across multiple data sources.
- Data Governance: Implement data quality, data lineage, and metadata management frameworks to ensure data reliability and compliance.
- Performance Optimization: Optimize data storage and retrieval processes for speed, scalability, and efficiency.
- Stakeholder Collaboration: Work closely with business and technical teams to define data requirements and deliver actionable insights.
- Cloud and Big Data: Utilize cloud-native tools like Azure Synapse, AWS Redshift, GCP BigQuery, and Databricks for large-scale data processing.
- Mentorship: Guide junior data engineers and BI developers on best practices and advanced techniques.

Your unique ability to communicate and empathize with stakeholders will be invaluable. By understanding the business objectives and success criteria of each project, you will align your data analysis efforts seamlessly with our overarching goals. With your mastery of business valuation, decision-making, project scoping, and storytelling, you will transform data into meaningful narratives that drive real-world impact.

At Kyndryl, we believe that data holds immense potential, and we are committed to helping you unlock that potential. You will have access to vast repositories of data, empowering you to delve deep to determine root causes of defects and variation. By gaining a comprehensive understanding of the data and its specific purpose, you will be at the forefront of driving innovation and making a difference. If you are ready to unleash your analytical ability, collaborate with industry experts, and shape the future of data-driven decision making, then join us as a Data Architect at Kyndryl. Together, we will harness the power of data to redefine what is possible and create a future filled with limitless possibilities.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience:
- Education: Bachelor's or master's degree in Computer Science, Data Science, or a related field.
- Experience: 8+ years in data architecture, BI, and analytics roles.
- BI Tools: Power BI, Tableau, Qlik, Looker, SAP Analytics Cloud.
- Data Modeling: ER, dimensional, star, and snowflake schemas.
- Cloud Platforms: Azure, AWS, GCP, Snowflake.
- Databases: SQL Server, Oracle, MySQL, NoSQL (MongoDB, DynamoDB).
- ETL Tools: Informatica, Talend, SSIS, Apache NiFi.
- Scripting: Python, R, SQL, DAX, MDX.
- Soft Skills: Strong communication, problem-solving, and leadership abilities.
- Knowledge of deployment patterns.
- Strong documentation, troubleshooting, and data profiling skills.
- Excellent analytical, conceptual, and problem-solving abilities.
- Ability to manage multiple priorities and swiftly adapt to changing demands.

Preferred Skills and Experience:
- Microsoft Certified: Azure Data Engineer Associate
- AWS Certified Data Analytics – Specialty
- Google Professional Data Engineer
- Tableau Desktop Certified Professional
- Power BI Data Analyst Associate

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Hyderabad

Work from Office

We are hiring a Data Engineer for a US-based IT company located in Hyderabad. Candidates with a minimum of 5 years of experience in data engineering can apply. This job is for a 1-year contract only.

Job Title: Data Engineer
Location: Hyderabad
CTC: Up to 20 LPA
Experience: 5+ Years

Job Overview: We are looking for a seasoned Senior Data Engineer with deep hands-on experience in Talend and IBM DataStage to join our growing enterprise data team. This role will focus on designing and optimizing complex data integration solutions that support enterprise-wide analytics, reporting, and compliance initiatives. In this senior-level position, you will collaborate with data architects, analysts, and key stakeholders to facilitate large-scale data movement, enhance data quality, and uphold governance and security protocols.

Key Responsibilities:
- Develop, maintain, and enhance scalable ETL pipelines using Talend and IBM DataStage
- Partner with data architects and analysts to deliver efficient and reliable data integration solutions
- Review and optimize existing ETL workflows for performance, scalability, and reliability
- Consolidate data from multiple sources, both structured and unstructured, into data lakes and enterprise platforms
- Implement rigorous data validation and quality assurance procedures to ensure data accuracy and integrity
- Adhere to best practices for ETL development, including source control and automated deployment
- Maintain clear and comprehensive documentation of data processes, mappings, and transformation rules
- Support enterprise initiatives around data migration, modernization, and cloud transformation
- Mentor junior engineers and participate in code reviews and team learning sessions

Required Qualifications:
- Minimum 5 years of experience in data engineering or ETL development
- Proficient with Talend (Open Studio and/or Talend Cloud) and IBM DataStage
- Strong skills in SQL, data profiling, and performance tuning
- Experience handling large datasets and complex data workflows
- Solid understanding of data warehousing, data modeling, and data lake architecture
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines
- Strong analytical and troubleshooting skills
- Effective verbal and written communication, with strong documentation habits

Preferred Qualifications:
- Prior experience in banking or financial services
- Exposure to cloud platforms such as AWS, Azure, or Google Cloud
- Knowledge of data governance tools (e.g., Collibra, Alation)
- Awareness of data privacy regulations (e.g., GDPR, CCPA)
- Experience working in Agile/Scrum environments

For further assistance, contact/WhatsApp 9354909518 or write to priya@gist.org.in

Posted 1 month ago

Apply

7.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office

Data Engineer Skills and Qualifications:
- SQL - Mandatory
- Strong knowledge of AWS services (e.g., S3, Glue, Redshift, Lambda) - Mandatory
- Experience working with DBT - Nice to have
- Proficiency in PySpark or Python for big data processing - Mandatory
- Experience with orchestration tools like Apache Airflow and AWS CodePipeline - Mandatory
- Familiarity with CI/CD tools and DevOps practices
- Expertise in data modeling, ETL processes, and data warehousing
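As a rough illustration of the orchestration requirement above, here is a minimal Airflow DAG with a single daily task; the DAG id and the task logic are assumptions, not part of the listing.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the real work, e.g. triggering a Glue job
    # or issuing a Redshift COPY from S3.
    print("extract + load step")

with DAG(
    dag_id="daily_sales_pipeline",   # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",      # newer Airflow versions also accept `schedule`
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)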

Posted 1 month ago

Apply

3.0 - 7.0 years

0 - 0 Lacs

Hyderabad

Work from Office

Experience Required: 3+ years

Technical knowledge: AWS, Python, SQL, S3, EC2, Glue, Athena, Lambda, DynamoDB, Redshift, Step Functions, CloudFormation, CI/CD pipelines, GitHub, EMR, RDS, AWS Lake Formation, GitLab, Jenkins, and AWS CodePipeline.

Role Summary: As a Senior Data Engineer with over 3 years of expertise in Python, PySpark, and SQL, you will design, develop, and optimize complex data pipelines, support data modeling, and contribute to the architecture that supports big data processing and analytics in cutting-edge cloud solutions that drive business growth. You will lead the design and implementation of scalable, high-performance data solutions on AWS and mentor junior team members. This role demands a deep understanding of AWS services, big data tools, and complex architectures to support large-scale data processing and advanced analytics.

Key Responsibilities:
- Design and develop robust, scalable data pipelines using AWS services, Python, PySpark, and SQL that integrate seamlessly with the broader data and product ecosystem.
- Lead the migration of legacy data warehouses and data marts to AWS cloud-based data lake and data warehouse solutions.
- Optimize data processing and storage for performance and cost.
- Implement data security and compliance best practices, in collaboration with the IT security team.
- Build flexible and scalable systems to handle the growing demands of real-time analytics and big data processing.
- Work closely with data scientists and analysts to support their data needs and assist in building complex queries and data analysis pipelines.
- Collaborate with cross-functional teams to understand their data needs and translate them into technical requirements.
- Continuously evaluate new technologies and AWS services to enhance data capabilities and performance.
- Create and maintain comprehensive documentation of data pipelines, architectures, and workflows.
- Participate in code reviews and ensure that all solutions are aligned to pre-defined architectural specifications.
- Present findings to executive leadership and recommend data-driven strategies for business growth.
- Communicate effectively with different levels of management to gather use cases/requirements and provide designs that cater to those stakeholders.
- Handle clients in multiple industries at the same time, balancing their unique needs.
- Provide mentoring and guidance to junior data engineers and team members.

Requirements:
- 3+ years of experience in a data engineering role with a strong focus on AWS, Python, PySpark, Hive, and SQL.
- Proven experience in designing and delivering large-scale data warehousing and data processing solutions.
- Lead the design and implementation of complex, scalable data pipelines using AWS services such as S3, EC2, EMR, RDS, Redshift, Glue, Lambda, Athena, and AWS Lake Formation.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- Deep knowledge of big data technologies and ETL tools, such as Apache Spark, PySpark, Hadoop, Kafka, and Spark Streaming.
- Implement data architecture patterns, including event-driven pipelines, Lambda architectures, and data lakes.
- Incorporate modern tools like Databricks, Airflow, and Terraform for orchestration and infrastructure as code.
- Implement CI/CD using GitLab, Jenkins, and AWS CodePipeline.
- Ensure data security, governance, and compliance by leveraging tools such as IAM, KMS, and AWS CloudTrail.
- Mentor junior engineers, fostering a culture of continuous learning and improvement.
- Excellent problem-solving and analytical skills, with a strategic mindset.
- Strong communication and leadership skills, with the ability to influence stakeholders at all levels.
- Ability to work independently as well as part of a team in a fast-paced environment.
- Advanced data visualization skills and the ability to present complex data in a clear and concise manner.
- Excellent communication skills, both written and verbal, to collaborate effectively across teams and levels.

Preferred Skills:
- Experience with Databricks, Snowflake, and machine learning pipelines.
- Exposure to real-time data streaming technologies and architectures.
- Familiarity with containerization and serverless computing (Docker, Kubernetes, AWS Lambda).
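As a small illustration of the event-driven pipelines this posting mentions, here is a sketch of an AWS Lambda handler reacting to S3 object-created events and staging new files for a downstream load; bucket names and prefixes are assumptions.

import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # S3 put-events arrive as a list of records.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Stage the new object for a later warehouse load (assumed bucket).
        s3.copy_object(
            Bucket="example-staging",
            Key=f"incoming/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"statusCode": 200, "body": json.dumps("ok")}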

Posted 1 month ago

Apply

5.0 - 9.0 years

12 - 22 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

- Significant (5 to 9 years) experience in designing and implementing scalable data engineering solutions on AWS.
- Strong proficiency in the Python programming language.
- Expertise in serverless architecture and AWS services such as Lambda, Glue, Redshift, Kinesis, SNS, SQS, and CloudFormation.
- Experience with Infrastructure as Code (IaC) using the AWS CDK for defining and provisioning AWS resources.
- Proven leadership skills with the ability to mentor and guide junior team members.
- Excellent understanding of data modeling concepts and experience with tools like ERStudio.
- Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Experience with Apache Airflow for orchestrating data pipelines is a plus.
- Knowledge of Data Lakehouse, dbt, or the Apache Hudi data format is a plus.

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Redshift.
- Collaborate with cross-functional teams to gather requirements and design solutions that meet business needs.

Desired Candidate Profile:
- 5-9 years of experience in an IT industry setting with expertise in the Python programming language (PySpark).
- Strong understanding of the AWS ecosystem, including S3, Glue, Lambda, and Redshift.
- Bachelor's degree in any specialization (B.Tech/B.E.).

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Job requisition ID: JR1027452

Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Category-wise Technical Skills:
- PySpark: Advanced proficiency, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.

Experience:
- 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Proven track record of implementing data engineering best practices.
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP.
- Implement and manage data ingestion processes from various sources.
- Process, cleanse, and transform large datasets using PySpark.
- Conduct performance tuning and optimization of ETL processes.
- Implement data quality checks and validation routines.
- Automate data workflows using orchestration tools.
- Monitor pipeline performance and troubleshoot issues.
- Collaborate with team members to understand data requirements.
- Maintain documentation of data engineering processes and configurations.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
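As a small illustration of the data quality and validation routines described above, a PySpark sketch that checks a curated dataset for null and duplicate business keys before publishing; the path and column names are assumptions.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/data/curated/transactions")  # assumed HDFS path

total = df.count()
null_keys = df.filter(F.col("txn_id").isNull()).count()
dupes = total - df.dropDuplicates(["txn_id"]).count()

# Fail fast so the orchestrator (Oozie/Airflow) marks the run as failed.
assert null_keys == 0, f"{null_keys} rows with null txn_id"
assert dupes == 0, f"{dupes} duplicate txn_id rows"
print(f"DQ passed: {total} rows validated")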

Posted 1 month ago

Apply

5.0 - 10.0 years

13 - 18 Lacs

Gurugram

Work from Office

Position Summary: To be a technology expert architecting solutions and mentoring people in BI/reporting processes, with prior expertise in the Pharma domain.

Job Responsibilities:
- Technology Leadership: Lead and guide the team, independently or with little support, to design, implement, and deliver complex reporting and BI project assignments.
- Technical Portfolio: Expertise in a range of BI and hosting technologies such as the AWS stack (Redshift, EC2), QlikView, QlikSense, Tableau, MicroStrategy, and Spotfire.
- Project Management: Get accurate briefs from the client and translate them into tasks for team members with priorities and timeline plans. Must maintain high standards of quality and thoroughness. Should be able to monitor the accuracy and quality of others' work. Ability to think in advance about potential risks and mitigation plans.
- Logical Thinking: Able to think analytically and use a systematic and logical approach to analyze data, problems, and situations. Must be able to guide team members in analysis.
- Client Relationship: Manage the client relationship and client expectations independently. Should be able to deliver results back to the client independently. Should have excellent communication skills.

Education: BE/B.Tech, Master of Computer Application

Work Experience:
- Minimum of 5 years of relevant experience in the Pharma domain.
- Technical: 10+ years of hands-on experience with at least 2 of the following: QlikView, QlikSense, Tableau, MicroStrategy, Spotfire / (Informatica, SSIS, Talend, Matillion) / Big Data technologies (Hadoop ecosystem).
- Aware of techniques such as UI design, report modeling, performance tuning, and regression testing.
- Basic expertise with MS Excel; advanced expertise with SQL.
- Functional: Experience with Pharma data sources such as IMS, Veeva, Symphony, Cegedim, etc.; business processes such as alignment, market definition, segmentation, sales crediting, and activity metrics calculation; and calculation of all sales, activity, and managed care KPIs.

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management
Technical Competencies: Problem Solving, Lifescience Knowledge, Communication, Project Management, Attention to P&L Impact, Capability Building / Thought Leadership, Scale of revenues managed / delivered

Posted 1 month ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

Noida

Work from Office

Position Summary: This position is part of the technical leadership in the data warehousing and Business Intelligence areas, for someone who can work on multiple project streams and clients to enable better business decision-making, especially in the Life Sciences/Pharmaceutical domain.

Job Responsibilities:
- Technology Leadership: Lead and guide the team, independently or with little support, to design, implement, and deliver complex cloud data management and BI project assignments.
- Technical Portfolio: Expertise in a range of BI and data hosting technologies such as the AWS stack (Redshift, EC2), Snowflake, Spark, full stack, Qlik, Tableau, and MicroStrategy.
- Project Management: Get accurate briefs from the client and translate them into tasks for team members with priorities and timeline plans. Must maintain high standards of quality and thoroughness. Should be able to monitor the accuracy and quality of others' work. Ability to think in advance about potential risks and mitigation plans.
- Logical Thinking: Able to think analytically and use a systematic and logical approach to analyze data, problems, and situations. Must be able to guide team members in analysis.
- Client Relationship and P&L: Manage the client relationship and client expectations independently. Should be able to deliver results back to the client independently. Should have excellent communication skills.

Education: BE/B.Tech, Master of Computer Application

Work Experience:
- Minimum of 5 years of relevant experience in the Pharma domain.
- Technical: 15 years of hands-on experience, with working knowledge of at least 2 of the following tools: QlikView, QlikSense, Tableau, MicroStrategy, Spotfire.
- Aware of techniques such as UI design, report modeling, performance tuning, and regression testing.
- Basic expertise with MS Excel; advanced expertise with SQL.
- Functional: Experience with Pharma data sources such as IMS, Veeva, Symphony, Cegedim, etc., and business processes such as alignment, market definition, segmentation, sales crediting, and activity metrics calculation.

Behavioural Competencies: Project Management, Communication, Attention to P&L Impact, Teamwork & Leadership, Motivation to Learn and Grow, Lifescience Knowledge, Ownership, Cultural Fit, Scale of resources managed, Scale of revenues managed / delivered, Problem solving, Talent Management, Capability Building / Thought Leadership

Technical Competencies: AWS Know-How, Formal Industry Certification (AWS Certified Cloud Practitioner), Snowflake, Data Engineering, Data Governance, Data Modelling, Data Operations (Service Management), Data Warehousing & Data Lake, Databricks, Dataiku, Informatica, Cloud Data Warehouse & Data Lake Modernization, Master Data Management, Patient Data Analytics Know-How, Pharma Commercial Data - US, Pharma Commercial Data - EU

Posted 1 month ago

Apply

12.0 - 17.0 years

20 - 25 Lacs

Noida

Work from Office

Position Summary:
- Overall 12+ years of quality engineering experience with DWH/ETL for enterprise-grade applications.
- Hands-on experience with functional, non-functional, and automation testing of products.
- Hands-on experience leveraging LLMs/GenAI to improve the efficiency and effectiveness of the overall delivery process.

Job Responsibilities:
- Lead end-to-end QE for the product suite.
- Author the QE test strategy for a release and execute it.
- Drive quality releases by working closely with development, PMs, DevOps, support, and business teams.
- Achieve automation coverage for the product suite with good line coverage.
- Manage risks and resolve issues that affect release scope, schedule, and quality.
- Work with product teams to understand the impacts of branches, code merges, etc.
- Lead and coordinate release activities, including overall execution.
- Ability to lead a team of SDETs and help them address their issues.
- Mentor and coach members of the team.

Education: BE/B.Tech, Master of Computer Application

Work Experience: Overall 12+ years of strong hands-on experience with DWH/ETL for enterprise-grade applications.

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management

Technical Competencies: Lifescience Knowledge, AWS Data Pipeline, Azure Data Factory, Data Governance, Data Modelling, Data Privacy, Data Security, Data Validation, Testing Tools, Data Visualisation, Databricks, Snowflake, Amazon Redshift, MS SQL Server, Performance Testing

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Gurugram

Work from Office

Position Summary: This is a requisition for an employee referral campaign, and the JD is generic. We are looking for associates with 5+ years of experience in delivering solutions around data engineering, big data analytics and data lakes, MDM, BI, and data visualization, experienced in integrating and standardizing structured and unstructured data to enable faster, data-driven insights across the enterprise using cloud technology.

Job Responsibilities:
- Should be able to design, implement, and deliver complex data warehousing/data lake, cloud data management, and data integration project assignments.
- Technical Design and Development: Expertise in any of the following skills: any ETL tools (Informatica, Talend, Matillion, DataStage) and hosting technologies like the AWS stack (Redshift, EC2) is mandatory; any BI tools among Tableau, Qlik, Power BI, and MSTR; Informatica MDM, Customer Data Management.
- Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must.
- Experience across Python, PySpark, and Unix/Linux shell scripting.
- Project management is a must-have. Should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans.
- Task Management: Should be able to onboard the team on the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
- Client Relationship: Manage client communication and client expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan. Should have excellent communication skills.

Education: Bachelor of Technology, Master's Equivalent - Engineering

Work Experience: Overall, 5-7 years of relevant experience in data warehousing and data management projects, with some experience in the Pharma domain. We are hiring for the following roles across data management tech stacks:
- ETL tools among Informatica, IICS/Snowflake, Python, Matillion, and other cloud ETL.
- BI tools among Power BI and Tableau.
- MDM: Informatica/Reltio, Customer Data Management.
- Azure cloud developer using Data Factory and Databricks.
- Data Modeler: modelling of data - understanding source data and creating data models for landing and integration.
- Python/PySpark: Spark/PySpark design, development, and deployment.

Posted 1 month ago

Apply

4.0 - 8.0 years

13 - 18 Lacs

Noida

Work from Office

Position Summary: To be a technology expert architecting solutions and mentoring people in BI/reporting processes, with prior expertise in the Pharma domain.

Job Responsibilities:
- Independently drive and deliver complex reporting and BI project assignments in Power BI on AWS/Azure cloud.
- Design and deliver across Power BI services, Power Query, DAX, and data modelling concepts.
- Write complex SQL focusing on data aggregation and the analytic calculations used in the reporting KPIs.
- Analyse the data and understand the requirements, directly from the customer or from project teams, across pharma commercial data sets.
- Drive the team on day-to-day tasks in alignment with the project plan and collaborate with the team to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items in an onshore-offshore model.
- Able to think analytically and use a systematic and logical approach to analyse data, problems, and situations.
- Manage client communication and client expectations independently. Should be able to deliver results back to the client as per plan. Should have excellent communication skills.

Education: BE/B.Tech, Master of Computer Application

Work Experience:
- 4-8 years of experience developing Power BI reports.
- Must have proficiency in Power BI services, Power Query, DAX, and data modelling concepts.
- Experience in design techniques such as UI design and creating mock-ups/intuitive visualizations for a seamless user experience.
- Expertise in writing complex SQL focusing on data aggregation and the analytic calculations used for deriving the reporting KPIs.
- Strong understanding of data integration, ETL processes, and data warehousing, preferably on AWS Redshift and/or Snowflake.
- Excellent problem-solving skills with the ability to troubleshoot and resolve technical issues.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Good to have: experience with Pharma commercial data sets and related KPIs for sales performance, managed markets, Customer 360, patient journey, etc.
- Good to have: experience with, and additional know-how of, other reporting tools.

Behavioural Competencies: Teamwork & Leadership, Motivation to Learn and Grow, Ownership, Cultural Fit, Talent Management
Technical Competencies: Problem Solving, Lifescience Knowledge, Communication, Capability Building / Thought Leadership, Power BI, SQL, Business Intelligence (BI), Snowflake

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 17 Lacs

Hyderabad

Work from Office

Job Area: Information Technology Group, Information Technology Group > IT Data Engineer

General Summary: The developer will play an integral role in the PTEIT Machine Learning Data Engineering team: design, develop, and support data pipelines in a hybrid cloud environment to enable advanced analytics, and design, develop, and support CI/CD of data pipelines and services.

- 5+ years of experience with Python or equivalent programming using OOPS, data structures, and algorithms.
- Develop new services in AWS using serverless and container-based services.
- 3+ years of hands-on experience with the AWS suite of services (EC2, IAM, S3, CDK, Glue, Athena, Lambda, Redshift, Snowflake, RDS).
- 3+ years of expertise in scheduling data flows using Apache Airflow.
- 3+ years of strong data modelling (functional, logical, and physical) and data architecture experience in a Data Lake and/or Data Warehouse.
- 3+ years of experience with SQL databases.
- 3+ years of experience with CI/CD and DevOps using Jenkins.
- 3+ years of experience with event-driven architecture, especially Change Data Capture.
- 3+ years of experience in Apache Spark, SQL, Redshift (or BigQuery or Snowflake), and Databricks.
- Deep understanding of building efficient data pipelines with data observability, data quality, schema drift detection, alerting, and monitoring.
- Good understanding of data catalogs, data governance, compliance, security, and data sharing.
- Experience in building reusable services across data processing systems.
- Should have the ability to work and contribute beyond defined responsibilities.
- Excellent communication and interpersonal skills with deep problem-solving skills.

Minimum Qualifications:
- 3+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems, or a related field, OR 5+ years of IT-related work experience without a Bachelor's degree.
- 2+ years of any combination of academic or work experience with programming (e.g., Java, Python).
- 1+ year of any combination of academic or work experience with SQL or NoSQL databases.
- 1+ year of any combination of academic or work experience with data structures and algorithms.
- 5 years of industry experience, with a minimum of 3 years of data engineering development experience at highly reputed organizations.
- Proficiency in Python and AWS; excellent problem-solving skills; deep understanding of data structures and algorithms.
- Proven experience in building cloud-native software, preferably with the AWS suite of services.
- Proven experience in designing and developing data models using RDBMS (Oracle, MySQL, etc.).

Desirable:
- Exposure to or experience with other cloud platforms (Azure and GCP).
- Experience working on the internals of large-scale distributed systems and databases such as Hadoop and Spark.
- Working experience with Data Lakehouse platforms (OneHouse, Databricks Lakehouse).
- Working experience with Data Lakehouse file formats (Delta Lake, Iceberg, Hudi).
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Responsibilities:
- Design and implement the data modeling, data ingestion, and data processing for various datasets.
- Design, develop, and maintain an ETL framework for various new data sources.
- Develop data ingestion using AWS Glue/EMR and data pipelines using PySpark, Python, and Databricks.
- Build orchestration workflows using Airflow and Databricks Job workflows.
- Develop and execute ad-hoc data ingestion to support business analytics.
- Proactively interact with vendors on any questions and report status accordingly.
- Explore and evaluate tools/services to support business requirements.
- Ability to help create a data-driven culture and impactful data strategies.
- Aptitude for learning new technologies and solving complex problems.

Qualifications:
- Minimum of a bachelor's degree, preferably in Computer Science, Information Systems, or Information Technology.
- Minimum 5 years of experience with cloud platforms such as AWS, Azure, or GCP.
- Minimum 5 years of experience with Amazon Web Services such as VPC, S3, EC2, Redshift, RDS, EMR, Athena, IAM, Glue, DMS, Data Pipeline & API, Lambda, etc.
- Minimum of 5 years of experience in ETL and data engineering using Python, AWS Glue, AWS EMR/PySpark, and Airflow for orchestration.
- Minimum 2 years of experience in Databricks, including Unity Catalog, data engineering Job workflow orchestration, and dashboard generation based on business requirements.
- Minimum 5 years of experience in SQL, Python, source control such as Bitbucket, and CI/CD for code deployment.
- Experience in PostgreSQL, SQL Server, MySQL, and Oracle databases.
- Experience in MPP systems such as AWS Redshift, AWS EMR, and Databricks SQL warehouse and compute clusters.
- Experience in distributed programming with Python, Unix scripting, MPP, and RDBMS databases for data integration.
- Experience building distributed, high-performance systems using Spark/PySpark and AWS Glue, and developing applications for loading/streaming data into Databricks SQL warehouse and Redshift.
- Experience in Agile methodology.
- Proven ability to write technical specifications for data extraction and good-quality code.
- Experience with big data processing techniques using Sqoop, Spark, and Hive is an additional plus.
- Experience in data visualization tools, including Power BI and Tableau.
- Nice to have: UI experience using the Python Flask framework and Angular.

Mandatory Skills: Python for Insights.
Experience: 5-8 years.

Posted 1 month ago

Apply

10.0 - 15.0 years

15 - 30 Lacs

Noida, Pune, Bengaluru

Work from Office

Roles and Responsibilities:
- Work closely with the Product Owners and stakeholders to design the technical architecture for the data platform to meet the requirements of the proposed solution.
- Work with the leadership to set the standards for software engineering practices within the machine learning engineering team, and support across other disciplines.
- Play an active role in leading team meetings and workshops with clients.
- Choose and use the right analytical libraries, programming languages, and frameworks for each task.
- Help the Data Engineering team produce high-quality code that allows us to put solutions into production.
- Create and own the technical product backlogs for products; help the team close the backlogs on time.
- Refactor code into reusable libraries, APIs, and tools.
- Help us shape the next generation of our products.

What We're Looking For:
- 10+ years of total experience in the data management area, with experience implementing modern data ecosystems on AWS/cloud platforms.
- Strong experience with AWS ETL/file-movement tools (Glue, Athena, Lambda, Kinesis, and the wider AWS integration stack).
- Strong experience with Agile development and SQL.
- Strong experience with two or three AWS database technologies (Redshift, Aurora, RDS, S3, and other AWS data services), covering security, policies, and access management.
- Strong programming experience with Python and Spark.
- Strong learning curve for new technologies.
- Experience with Apache Airflow and other automation stacks.
- Excellent data modeling skills.
- Excellent oral and written communication skills.
- A high level of intellectual curiosity, external perspective, and interest in innovation.
- Strong analytical, problem-solving, and investigative skills.
- Experience in applying quality and compliance requirements.
- Experience with security models and development on large data sets.

Posted 1 month ago

Apply

6.0 - 10.0 years

7 - 15 Lacs

Pune, Bengaluru

Work from Office

Role & Responsibilities
Essential Skills:
- Experience: 6 to 10 years.
- Technical Expertise: Proficiency in AWS services such as Amazon S3, Redshift, EMR, Glue, Lambda, and Kinesis. Strong skills in SQL and experience with scripting languages like Python or Java.
- Data Engineering Experience: Hands-on experience in building and maintaining data pipelines, data modeling, and working with big data technologies.
- Problem-Solving Skills: Ability to analyze complex data issues and develop effective solutions to optimize data processing and storage.
- Communication and Collaboration: Strong interpersonal skills to work effectively with cross-functional teams and communicate technical concepts to non-technical stakeholders.

Educational Qualifications: A bachelor's degree in computer science, information technology, or a related field is typically required. Relevant AWS certifications, such as AWS Certified Data Analytics – Specialty, are advantageous.

Posted 1 month ago

Apply

7.0 - 12.0 years

11 - 15 Lacs

Bengaluru

Work from Office

Position Summary: We are seeking a highly skilled ETL QA Engineer with at least 6 years of experience in ETL/data pipeline testing on the AWS cloud stack, specifically with Redshift, AWS Glue, S3, and related data integration tools. The ideal candidate should be proficient in SQL, capable of reviewing and validating stored procedures, and able to automate ETL test cases using Python or suitable automation frameworks. Strong communication skills are essential, and web application testing exposure is a plus.

Technical Skills Required:
- SQL Expertise: Ability to write, debug, and optimize complex SQL queries; validate data across source systems, staging areas, and reporting layers; experience with stored procedure review and validation.
- ETL Testing Experience: Hands-on experience with AWS Glue, Redshift, S3, and data pipelines; validate transformations, data flow accuracy, and pipeline integrity.
- ETL Automation: Ability to automate ETL tests using Python, PyTest, or other scripting frameworks. Nice to have: exposure to TestNG, Selenium, or similar automation tools for testing UIs or APIs related to data validation.
- Cloud Technologies: Deep understanding of the AWS ecosystem, especially around ETL and data services; familiarity with orchestration (e.g., Step Functions, Lambda), security, and logging.
- Health Check Automation: Build SQL- and Python-based health check scripts to monitor pipeline sanity and data integrity.
- Reporting Tools (nice to have): Exposure to tools like Jaspersoft, Tableau, Power BI, etc., for report layout and aggregation validation.
- Root Cause Analysis: Strong debugging skills to trace data discrepancies and report logical/data errors to development teams.
- Communication: Must be able to communicate clearly with both technical and non-technical stakeholders.

Key Responsibilities:
- Design and execute test plans and test cases for validating ETL pipelines and data transformations.
- Ensure accuracy and integrity of data in transactional databases, staging zones, and data warehouses (Redshift).
- Review stored procedures and SQL scripts to validate transformation logic.
- Automate ETL test scenarios using Python or other test automation tools as applicable.
- Implement health check mechanisms for automated validation of daily pipeline jobs.
- Investigate data issues and perform root cause analysis.
- Validate reports and dashboards, ensuring correct filters, aggregations, and visualizations.
- Collaborate with developers, analysts, and business teams to understand requirements and ensure complete test coverage.
- Report testing progress and results clearly and in a timely manner.

Nice to Have:
- Web testing experience using Selenium or Appium.
- Experience in API testing and validation of data exposed via APIs.
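As a rough illustration of the automation this posting asks for, a pytest sketch that compares staging and mart row counts over Redshift's PostgreSQL-compatible endpoint via psycopg2; the host, credentials, and table names are assumptions.

import psycopg2
import pytest

@pytest.fixture(scope="module")
def conn():
    # Assumed connection details; Redshift listens on port 5439 by default.
    c = psycopg2.connect(
        host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="qa_user", password="...",
    )
    yield c
    c.close()

def fetch_scalar(conn, sql):
    with conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

def test_orders_row_counts_match(conn):
    src = fetch_scalar(conn, "SELECT COUNT(*) FROM staging.orders")
    tgt = fetch_scalar(conn, "SELECT COUNT(*) FROM mart.orders")
    assert src == tgt, f"row count mismatch: staging={src}, mart={tgt}"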

Posted 1 month ago

Apply

5.0 - 8.0 years

18 - 30 Lacs

Hyderabad

Work from Office

AWS Data Engineer with Glue, Terraform, and Business Intelligence (Tableau) development.
- Design, develop, and maintain AWS data pipelines using Glue, Lambda, and Redshift.
- Collaborate with the BI team on ETL processes and dashboard creation with Tableau.
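For context, a minimal sketch of operating such a pipeline: starting a Glue job with boto3 and polling until it finishes. The job name is an illustrative assumption, not from the listing.

import time
import boto3

glue = boto3.client("glue")

run = glue.start_job_run(JobName="nightly-orders-etl")  # assumed job name
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="nightly-orders-etl", RunId=run_id)
    status = state["JobRun"]["JobRunState"]
    if status in ("SUCCEEDED", "FAILED", "STOPPED"):
        print("Glue job finished:", status)
        break
    time.sleep(30)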

Posted 1 month ago

Apply

7.0 - 12.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Req ID: 325298

We are currently seeking an AWS Redshift Administrator Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
- Administer and maintain scalable cloud environments and applications for the data organization.
- Understand the business objectives of the company and create cloud-based solutions to facilitate those objectives.
- Implement Infrastructure as Code and deploy code using Terraform and GitLab.
- Install and maintain software, services, and applications by identifying system requirements.
- Hands-on AWS services and DB and server troubleshooting experience.
- Extensive database experience with RDS, AWS Redshift, and MySQL.
- Maintain the environment by identifying system requirements, installing upgrades, and monitoring system performance.
- Knowledge of day-to-day database operations, deployments, and development.
- Experienced in Snowflake.
- Knowledge of SQL and performance tuning.
- Knowledge of Linux shell scripting or Python.
- Migrate systems from one AWS account to another.
- Maintain system performance by performing system monitoring, analysis, and performance tuning.
- Troubleshoot system hardware, software, and operating and system management systems.
- Secure the web system by developing system access, monitoring, control, and evaluation.
- Test disaster recovery policies and procedures, complete back-ups, and maintain documentation.
- Upgrade systems and services; develop, test, evaluate, and install enhancements and new software.
- Communicate with internal teams, such as EIMO, Operations, and Cloud Architects.
- Communicate with stakeholders and build applications to meet project needs.

Minimum Skills Required:
- Bachelor's degree in computer science or engineering.
- Minimum of 7 years of experience in system, platform, and AWS cloud administration.
- Minimum of 5 to 7 years of database administration and AWS experience using the latest AWS technologies: AWS EC2, Redshift, VPC, S3, AWS RDS.
- Experience with Java, Python, Redshift, MySQL, or equivalent database tools.
- Experience with Agile software development using JIRA.
- Experience on multiple OS platforms with a strong emphasis on Linux and Windows systems.
- Experience with OS-level scripting environments such as KSH shell and PowerShell.
- Experience with version management tools and CI/CD pipelines.
- In-depth knowledge of the TCP/IP protocol suite and security architecture, and of securing and hardening operating systems, networks, databases, and applications.
- Advanced SQL knowledge and experience working with relational databases, query authoring (SQL), and query performance tuning.
- Experience supporting and optimizing data pipelines and data sets.
- Knowledge of the incident response life cycle.
- AWS Solutions Architect certifications.
- Strong written and verbal communication skills.

Posted 1 month ago

Apply

1.0 - 4.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Req ID: 321498

We are currently seeking a Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties:
- Work closely with the Lead Data Engineer to understand business requirements, then analyse and translate these requirements into technical specifications and solution designs.
- Work closely with the data modeller to ensure data models support the solution design.
- Develop, test, and fix ETL code using Snowflake, Fivetran, SQL, and stored procedures.
- Analyse the data and ETL for defects raised in service tickets (for solutions in production).
- Develop documentation and artefacts to support projects.

Minimum Skills Required:
- ADF
- Fivetran (orchestration & integration)
- SQL
- Snowflake DWH
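As a small illustration of the Snowflake-based ETL checks described above, a sketch using the Snowflake Python connector to verify a day's load; the account, credentials, and table names are assumptions.

import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",  # assumed account identifier
    user="etl_user",
    password="...",
    warehouse="ETL_WH",
    database="DWH",
    schema="STAGING",
)

cur = conn.cursor()
try:
    cur.execute(
        "SELECT COUNT(*) FROM customer_stage WHERE load_date = CURRENT_DATE"
    )
    rows_loaded = cur.fetchone()[0]
    print(f"Rows loaded today: {rows_loaded}")
finally:
    cur.close()
    conn.close()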

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 10 Lacs

Chennai

Work from Office

We are currently seeking a Data Visualization Expert (QuickSight) to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

What awaits you / Job Profile:
- Location: Bangalore and Chennai, hybrid mode; immediate to 10 days' notice period.
- Develop reports using Amazon QuickSight.
- Data Visualization Development: Design and develop data visualizations using Amazon QuickSight to present complex data in a clear and understandable format. Create interactive dashboards and reports that allow end users to explore data and draw meaningful conclusions.
- Data Analysis: Collaborate with data analysts and business stakeholders to understand data requirements, gather insights, and transform raw data into actionable visualizations.
- Dashboard User Interface (UI) and User Experience (UX): Ensure that the data visualizations are user-friendly, intuitive, and aesthetically pleasing. Optimize the user experience by incorporating best practices in UI/UX design.
- Data Integration: Work closely with data engineers and data architects to ensure seamless integration of data sources into QuickSight, enabling real-time and up-to-date visualizations.
- Performance Optimization: Identify and address performance bottlenecks in data queries and visualization rendering to ensure quick and responsive dashboards.
- Data Security and Governance: Ensure compliance with data security policies and governance guidelines when handling sensitive data within QuickSight.
- Training and Documentation: Provide training and support to end users and stakeholders on how to interact with and interpret visualizations effectively. Create detailed documentation of the visualization development process.
- Stay Updated with Industry Trends: Keep up to date with the latest data visualization trends, technologies, and best practices to continuously enhance the quality and impact of visualizations.
- Use the Agile (Scrum/Kanban) methodology, attending daily standups and using Agile tools.
- Collaborate with cross-functional teams and stakeholders to ensure data security, privacy, and compliance with regulations.
- Proficiency in software development best practices: secure coding standards, unit testing frameworks, code coverage, quality gates.
- Ability to lead and deliver change in a very productive way.
- Lead technical discussions with customers to find the best possible solutions.
- Work closely with the Project Manager and Solution Architect, and manage client communication (as and when required).

What should you bring along?
Must have:
- Relevant work experience in analytics, reporting, and business intelligence tools.
- 4-5 years of hands-on experience in data visualization.
- Around 2 years of experience developing visualizations using Amazon QuickSight.
- Experience working with various data sources and databases.
- Ability to work with large datasets and design efficient data models for visualization.
Nice to have:
- AI project implementation and AI methods.

Must-have technical skills: QuickSight, SQL, AWS.
Good-to-have technical skills: Tableau, data engineering.
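For context, a minimal sketch of scripting against QuickSight with boto3, e.g. to inventory existing dashboards before development; the account ID and region are assumptions.

import boto3

qs = boto3.client("quicksight", region_name="ap-south-1")  # assumed region

resp = qs.list_dashboards(AwsAccountId="123456789012")  # assumed account ID
for dash in resp.get("DashboardSummaryList", []):
    print(dash["DashboardId"], dash["Name"])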

Posted 1 month ago

Apply