44 AWS Redshift Jobs

JobPe aggregates listings for easy access; you apply directly on the original job portal.

8.0 - 12.0 years

0 Lacs

haryana

On-site

You should have 8-10 years of operational knowledge in Microservices and .Net Fullstack, with experience in C# or Python development, as well as Docker. Additionally, experience with PostgreSQL or Oracle is required. Knowledge of AWS services such as S3 is necessary, and familiarity with AWS Kinesis and AWS Redshift is preferred. A strong desire to learn new technologies and skills is highly valued. Experience with unit testing and Test-Driven Development (TDD) methodology is considered an asset. You should possess strong team spirit, analytical skills, and the ability to synthesize information. Having a passion for Software Craftsmanship, a culture of excellence, and writing Clean Code is important. Fluency in English is required due to the multicultural and international nature of the team. In this role, you will have the opportunity to develop your technical skills in C# .NET and/or Python, Oracle, PostgreSQL, AWS, ELK (Elasticsearch, Logstash, Kibana), GIT, GitHub, TeamCity, Docker, and Ansible.,

Posted 21 hours ago

Apply

10.0 - 18.0 years

0 Lacs

indore, madhya pradesh

On-site

You should possess a BTech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required. Familiarity with various architecture patterns like data lake, data lake house, and data mesh is also important. You should have a good understanding of Data Warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, Dynamo DB, Document DB, SNS, SQS, Lambda, EKS, and Data Zone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge in designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and Sagemaker is advantageous. You should also have experience with modern development workflows like git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. Possessing an AWS Professional/Specialty certification or relevant cloud expertise is a plus. In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in various architectural, design, and status calls, and showcase good presentation skills when interacting with executives, IT Management, and developers. Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role. The ideal candidate for this position should have 10 to 18 years of experience and be able to reference the job with the number 12895.,

Posted 1 day ago

Apply

8.0 - 12.0 years

0 Lacs

haryana

On-site

You should have 8-10 years of operational knowledge in Microservices and .Net Fullstack, C# or Python development, along with experience in Docker. Additionally, experience with PostgreSQL or Oracle is required. Knowledge of AWS services such as S3 is a must, and familiarity with AWS Kinesis and AWS Redshift is desirable. A genuine interest in mastering new technologies is essential for this role. Experience with unit testing and Test-Driven Development (TDD) methodology will be considered as assets. Strong team spirit, analytical skills, and the ability to synthesize information are key qualities we are looking for. Having a passion for Software Craftsmanship, a culture of excellence, and writing Clean Code is highly valued. Being fluent in English is important as you will be working in a multicultural and international team. In this role, you will have the opportunity to develop your technical skills in the following areas: C# .NET and/or Python programming, Oracle and PostgreSQL databases, AWS services, ELK (Elasticsearch, Logstash, Kibana) stack, as well as version control tools like GIT and GitHub, continuous integration with TeamCity, containerization with Docker, and automation using Ansible.,

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As an Associate Technical Product Analyst - Global Data & Analytics Platform at McDonald's Corporation in Hyderabad, you will be an integral part of the Global Technology Enterprise Products & Platforms (EPP) Team. In this role, you will focus on data management & operations within the Global Data & Analytics Platform (GDAP) to support integrations with core Corporate Accounting/Financial/Reporting applications. Your vision will align with McDonald's goal to be a people-led, product-centric, forward-thinking, and trusted technology partner. Your responsibilities will include supporting the Technical Product Management leadership in technical/IT-related delivery topics such as trade-offs in implementation approaches and tech stack selection. You will provide technical guidance for developers/squad members, manage the output of internal/external squads to ensure adherence to McDonald's standards, participate in roadmap and backlog preparation, and maintain technical process flows and solution architecture diagrams at the product level. Additionally, you will lead acceptance criteria creation, validate development work, support hiring and development of engineers, and act as a technical developer as needed. To excel in this role, you should possess a Bachelor's degree in computer science or engineering, along with at least 3 years of hands-on experience designing and implementing solutions using AWS RedShift and Talend. Experience in data warehouse is a plus, as is familiarity with accounting and financial solutions across different industries. Knowledge of Agile software development processes, collaborative problem-solving skills, and excellent communication abilities are essential for success in this position. Preferred qualifications include proficiency in SQL, data integration tools, and scripting languages, as well as a strong understanding of Talend, AWS Redshift, and other AWS services. Experience with RESTful APIs, microservices architecture, DevOps practices, and tools like Jenkins and GitHub is highly desirable. Additionally, foundational expertise in security standards, cloud architecture, and Oracle cloud security will be advantageous. This full-time role based in Hyderabad, India, offers a hybrid work mode. If you are a detail-oriented individual with a passion for leveraging technology to drive business outcomes and are eager to contribute to a global team dedicated to innovation and excellence, we invite you to apply for the position of Associate Technical Product Analyst at McDonald's Corporation.,

Posted 2 days ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The Position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations, and production support using ground-breaking cloud and big data technologies. The ideal candidate with 6-8 years of experience will possess strong technical skills, an eagerness to learn, a keen interest on 3 key pillars that our team supports i.e. Financial Crime, Financial Risk, and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skill sets as a foundation. In this role, you will: - Ingest and provision raw datasets, enriched tables, and/or curated, re-usable data assets to enable a variety of use cases. - Drive improvements in the reliability and frequency of data ingestion, including increasing real-time coverage. - Support and enhance data ingestion infrastructure and pipelines. - Design and implement data pipelines that collect data from disparate sources across the enterprise and external sources and deliver it to our data platform. - Extract Transform and Load (ETL) workflows, using both advanced data manipulation tools and programmatically manipulate data throughout our data flows, ensuring data is available at each stage in the data flow and in the form needed for each system, service, and customer along said data flow. - Identify and onboard data sources using existing schemas and, where required, conduct exploratory data analysis to investigate and provide solutions. - Evaluate modern technologies, frameworks, and tools in the data engineering space to drive innovation and improve data processing capabilities. Core/Must-Have Skills: - 3-8 years of expertise in designing and implementing data warehouses, data lakes using Oracle Tech Stack (ETL: ODI, SSIS, DB: PLSQL, and AWS Redshift). - At least 4+ years of experience in managing data extraction, transformation, and loading various sources using Oracle Data Integrator with exposure to other tools like SSIS. - At least 4+ years of experience in Database Design and Dimension modeling using Oracle PLSQL, Microsoft SQL Server. - Experience in developing ETL processes - ETL control tables, error logging, auditing, data quality, etc. Should implement reusability, parameterization workflow design, etc. - Advanced working SQL Knowledge and experience working with relational and NoSQL databases as well as working familiarity with a variety of databases (Oracle, SQL Server, Neo4J). - Strong analytical and critical thinking skills, with the ability to identify and resolve issues in data pipelines and systems. - Expertise in data modeling and DB Design with skills in performance tuning. - Experience with OLAP, OLTP databases, and data structuring/modeling with an understanding of key data points. - Experience building and optimizing data pipelines on Azure Databricks or AWS Glue or Oracle Cloud. 
- Create and Support ETL Pipelines and table schemas to facilitate the accommodation of new and existing data sources for the Lakehouse. - Experience with data visualization (Power BI/Tableau) and SSRS. Good to Have: - Experience working in Financial Crime, Financial Risk, and Compliance technology transformation domains. - Certification on any cloud tech stack preferred Microsoft Azure. - In-depth knowledge and hands-on experience with data engineering, Data Warehousing, and Delta Lake on-prem (Oracle RDBMS, Microsoft SQL Server) and cloud (Azure or AWS or Oracle Cloud). - Ability to script (Bash, Azure CLI), Code (Python, C#), query (SQL, PLSQL, T-SQL) coupled with software versioning control systems (e.g., GitHub) AND ci/cd systems. - Design and development of systems for the maintenance of the Azure/AWS Lakehouse, ETL process, business Intelligence, and data ingestion pipelines for AI/ML use cases. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.,
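The EY posting above calls for ETL control tables, error logging, auditing, and parameterized, reusable workflow design (typically delivered with ODI/PLSQL in their stack). Purely as a hedged illustration, and not the team's actual implementation, here is a minimal Python sketch of a control-table-driven load with audit logging; the etl_control/etl_audit tables, their columns, and the psycopg2 connection details are all assumptions invented for the example.

```python
# Hypothetical sketch of a control-table-driven ETL step (names are illustrative,
# not taken from the job posting). Assumes a PostgreSQL-compatible target such as
# Amazon Redshift reachable via psycopg2.
import datetime
import psycopg2

def run_load(conn, batch_name: str) -> None:
    """Run every enabled step registered in an assumed etl_control table."""
    with conn.cursor() as cur:
        # The control table drives which sources load into which targets.
        cur.execute(
            "SELECT step_id, src_query, tgt_table FROM etl_control "
            "WHERE batch_name = %s AND enabled = true ORDER BY step_id",
            (batch_name,),
        )
        steps = cur.fetchall()

    for step_id, src_query, tgt_table in steps:
        started = datetime.datetime.utcnow()
        try:
            with conn.cursor() as cur:
                # Parameterized, reusable pattern: an insert-select defined by the control row.
                cur.execute(f"INSERT INTO {tgt_table} {src_query}")
                rows = cur.rowcount
                cur.execute(
                    "INSERT INTO etl_audit (step_id, started_at, row_count, status) "
                    "VALUES (%s, %s, %s, 'OK')",
                    (step_id, started, rows),
                )
            conn.commit()
        except Exception as exc:  # error logging, as called for in the posting
            conn.rollback()
            with conn.cursor() as cur:
                cur.execute(
                    "INSERT INTO etl_audit (step_id, started_at, status, error_msg) "
                    "VALUES (%s, %s, 'FAILED', %s)",
                    (step_id, started, str(exc)[:500]),
                )
            conn.commit()

if __name__ == "__main__":
    connection = psycopg2.connect(host="example-host", dbname="dw", user="etl", password="...")
    run_load(connection, "daily_finance")
```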

Posted 2 days ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Principal Consultant, AWS DataLake!

Responsibilities
- Knowledge of Data Lake on AWS services, with exposure to creating External Tables and Spark programming; able to work on Python programming.
- Writing effective and scalable Python code for automations, data wrangling, and ETL.
- Designing and implementing robust applications and working on automations using Python code.
- Debugging applications to ensure low latency and high availability.
- Writing optimized custom SQL queries.
- Experienced in team and client handling.
- Strong documentation skills covering systems, design, and delivery.
- Integrating user-facing elements into applications.
- Knowledge of External Tables and Data Lake concepts.
- Able to allocate tasks, collaborate on status exchanges, and drive work to successful closure.
- Implementing security and data protection solutions.
- Capable of writing SQL queries for validating dashboard outputs.
- Able to translate visual requirements into detailed technical specifications.
- Well versed in handling Excel, CSV, text, JSON, and other unstructured file formats using Python.
- Expertise in at least one popular Python framework (such as Django, Flask, or Pyramid).
- Good understanding of and exposure to Git, Bamboo, Confluence, and Jira.
- Strong with DataFrames and ANSI SQL using pandas.
- Team player with a collaborative approach and excellent communication skills.

Qualifications we seek in you!
Minimum Qualifications
- BE/B Tech/MCA
- Excellent written and verbal communication skills
- Good knowledge of Python and PySpark

Preferred Qualifications/Skills
- Strong ETL knowledge on any ETL tool is good to have.
- Knowledge of AWS cloud and Snowflake is good to have.
- Knowledge of PySpark is a plus.

Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
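The role above centers on Python data wrangling of CSV/JSON files and Data Lake external tables. Purely as an illustration and not part of the listing, the sketch below cleans a CSV with pandas, writes Parquet back to S3, and shows an example external-table DDL in the Redshift Spectrum/Athena style; the bucket, schema, table, and column names are all hypothetical.

```python
# Illustrative only: pandas-based wrangling of a CSV landed in S3, written back
# as Parquet, plus an example external-table DDL. All names are hypothetical.
import pandas as pd

RAW_CSV = "s3://example-raw-bucket/orders/orders_2024.csv"              # assumed path
CURATED = "s3://example-curated-bucket/orders/orders.parquet"           # assumed path

def wrangle() -> pd.DataFrame:
    df = pd.read_csv(RAW_CSV)                      # s3:// paths require the s3fs package
    df.columns = [c.strip().lower() for c in df.columns]
    df = df.dropna(subset=["order_id"])            # basic data-quality rule
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    return df

def publish(df: pd.DataFrame) -> None:
    # Columnar output for downstream external tables and ad hoc SQL.
    df.to_parquet(CURATED, engine="pyarrow", index=False)

# Example external table over the curated Parquet prefix (Redshift Spectrum-style
# DDL); the external schema and IAM setup are assumed to exist already.
EXTERNAL_TABLE_DDL = """
CREATE EXTERNAL TABLE spectrum_schema.orders (
    order_id   BIGINT,
    order_date TIMESTAMP,
    amount     DOUBLE PRECISION
)
STORED AS PARQUET
LOCATION 's3://example-curated-bucket/orders/';
"""

if __name__ == "__main__":
    publish(wrangle())
```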

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

thiruvananthapuram, kerala

On-site

You are an experienced Data Engineer with at least 6 years of relevant experience. In this role, you will be working as part of a team to develop Data and Analytics solutions. Your responsibilities will include participating in the development of cloud data warehouses, data as a service, and business intelligence solutions. You should be able to provide forward-thinking solutions in data integration and ensure the delivery of a quality product. Experience in developing Modern Data Warehouse solutions using Azure or AWS Stack is required. To be successful in this role, you should have a Bachelor's degree in computer science & engineering or equivalent demonstrable experience. It is desirable to have Cloud Certifications in Data, Analytics, or Ops/Architect space. Your primary skills should include: - 6+ years of experience as a Data Engineer, with a key/lead role in implementing large data solutions - Programming experience in Scala or Python, SQL - Minimum of 1 year of experience in MDM/PIM Solution Implementation with tools like Ataccama, Syndigo, Informatica - Minimum of 2 years of experience in Data Engineering Pipelines, Solutions implementation in Snowflake - Minimum of 2 years of experience in Data Engineering Pipelines, Solutions implementation in Databricks - Working knowledge of some AWS and Azure Services like S3, ADLS Gen2, AWS Redshift, AWS Glue, Azure Data Factory, Azure Synapse - Demonstrated analytical and problem-solving skills - Excellent written and verbal communication skills in English Your secondary skills should include familiarity with Agile Practices, Version control platforms like GIT, CodeCommit, problem-solving skills, ownership mentality, and a proactive approach rather than reactive. This is a permanent position based in Trivandrum/Bangalore. If you meet the requirements and are looking for a challenging opportunity in the field of Data Engineering, we encourage you to apply before the close date on 11-10-2024.,

Posted 3 days ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a Data and Solution Architect at our company, you will play a crucial role in participating in requirements definition, analysis, and designing logical and physical data models for various data models such as Dimensional Data Model, NoSQL, or Graph Data Model. You will lead data discovery discussions with the Business in Joint Application Design (JAD) sessions and translate business requirements into logical and physical data modeling solutions. It will be your responsibility to conduct data model reviews with project team members and capture technical metadata using data modeling tools. Your expertise will be essential in ensuring that the database designs efficiently support Business Intelligence (BI) and end-user requirements. You will collaborate closely with ETL/Data Engineering teams to create data process pipelines for data ingestion and transformation. Additionally, you will work with Data Architects for data model management, documentation, and version control. Staying updated with industry trends and standards will be crucial in driving continual improvement and enhancement of existing systems. To excel in this role, you must possess strong data analysis and data profiling skills. Your experience in conceptual, logical, and physical data modeling for Very Large Database (VLDB) Data Warehouse and Graph DB will be highly valuable. Hands-on experience with modeling tools like ERWIN or other industry-standard tools is required. Proficiency in both normalized and dimensional model disciplines and techniques is essential. A minimum of 3 years" experience in Oracle Database along with hands-on experience in Oracle SQL, PL/SQL, or Cypher is expected. Exposure to tools such as Databricks Spark, Delta Technologies, Informatica ETL, and other industry-leading tools will be beneficial. Good knowledge or experience with AWS Redshift and Graph DB design and management is desired. Working knowledge of AWS Cloud technologies, particularly on services like VPC, EC2, S3, DMS, and Glue, will be advantageous. You should hold a Bachelor's degree in Software Engineering, Computer Science, or Information Systems (or equivalent experience). Excellent verbal and written communication skills are necessary, including the ability to describe complex technical concepts in relatable terms. Your ability to manage and prioritize multiple workstreams confidently and make decisions about prioritization will be crucial. A data-driven mentality, self-motivation, responsibility, conscientiousness, and detail-oriented approach are highly valued. In terms of education and experience, a Bachelor's degree in Computer Science, Engineering, or relevant fields along with 3+ years of experience as a Data and Solution Architect supporting Enterprise Data and Integration Applications or a similar role for large-scale enterprise solutions is required. You should have at least 3 years of experience in Big Data Infrastructure and tuning experience in Lakehouse Data Ecosystem, including Data Lake, Data Warehouses, and Graph DB. Possessing AWS Solutions Architect Professional Level certifications will be advantageous. Extensive experience in data analysis on critical enterprise systems like SAP, E1, Mainframe ERP, SFDC, Adobe Platform, and eCommerce systems is preferred. If you are someone who thrives in a dynamic environment and enjoys collaborating with enthusiastic individuals, this role is perfect for you. Join our team and be a part of our exciting journey towards innovation and excellence!,

Posted 5 days ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR

Hybrid

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Lead Consultant-Data Engineer, AWS+Python, Spark, Kafka for ETL! Responsibilities Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka. Integrate structured and unstructured data from various data sources into data lakes and data warehouses. Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step, Redshift) Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness. Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms. Migrate the application data from legacy databases to Cloud based solutions (Redshift, DynamoDB, etc) for high availability with low cost Develop application programs using Big Data technologies like Apache Hadoop, Apache Spark, etc with appropriate cloud-based services like Amazon AWS, etc. Build data pipelines by building ETL processes (Extract-Transform-Load) Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data. Responsible for analysing business and functional requirements which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs Analyse requirements/User stories at the business meetings and strategize the impact of requirements on different platforms/applications, convert the business requirements into technical requirements Participating in design reviews to provide input on functional requirements, product designs, schedules and/or potential problems Understand current application infrastructure and suggest Cloud based solutions which reduces operational cost, requires minimal maintenance but provides high availability with improved security Perform unit testing on the modified software to ensure that the new functionality is working as expected while existing functionalities continue to work in the same way Coordinate with release management, other supporting teams to deploy changes in production environment Qualifications we seek in you! Minimum Qualifications Experience in designing, implementing data pipelines, build data applications, data migration on AWS Strong experience of implementing data lake using AWS services like Glue, Lambda, Step, Redshift Experience of Databricks will be added advantage Strong experience in Python and SQL Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift. Advanced programming skills in Python for data processing and automation. Hands-on experience with Apache Spark for large-scale data processing. Experience with Apache Kafka for real-time data streaming and event processing. Proficiency in SQL for data querying and transformation. Strong understanding of security principles and best practices for cloud-based environments. 
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance. Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment. Strong communication and collaboration skills to work effectively with cross-functional teams. Preferred Qualifications/ Skills Master’s Degree-Computer Science, Electronics, Electrical. AWS Data Engineering & Cloud certifications, Databricks certifications Experience with multiple data integration technologies and cloud platforms Knowledge of Change & Incident Management process Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
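The Lead Consultant role above builds ETL pipelines with AWS services, Python, Spark, and Kafka. As a rough, hedged sketch only (not Genpact's actual pipeline), the following PySpark Structured Streaming job reads a Kafka topic and lands it in S3 as Parquet; the broker, topic, bucket, and checkpoint paths are placeholders, and the spark-sql-kafka connector is assumed to be available on the cluster.

```python
# Minimal illustrative PySpark job: stream events from Kafka into S3 as Parquet.
# Broker, topic, and S3 paths are placeholders; the spark-sql-kafka connector
# package must be available on the cluster for format("kafka") to work.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka-to-s3-sketch")
    .getOrCreate()
)

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder broker
    .option("subscribe", "orders-topic")                 # placeholder topic
    .option("startingOffsets", "latest")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"), "timestamp")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-data-lake/raw/orders/")               # placeholder path
    .option("checkpointLocation", "s3a://example-data-lake/checkpoints/orders/")
    .trigger(processingTime="1 minute")
    .start()
)

query.awaitTermination()
```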

Posted 1 week ago

Apply

9.0 - 12.0 years

20 - 22 Lacs

Mumbai

Work from Office

Responsibilities: Requires 9-12 years of experience in data analytics and reporting, Databricks, Power BI, and Snowflake. The Data Analytics Manager will lead the delivery of advanced analytics solutions across our global team, ensuring the team delivers high-quality, scalable data solutions using technologies such as Snowflake, Power BI, Microsoft Fabric, Azure, AWS, Python, SQL, and R. You will lead and manage a team of 15 data analysts, engineers, and scientists, providing day-to-day direction, performance management, and career development. The successful candidate will foster a high-performance culture through coaching, mentoring, and continuous development, while ensuring alignment with business goals and data governance standards. Act as a hands-on technical leader, guiding the team in the design and implementation of data solutions using Snowflake, Azure Synapse, AWS Redshift, and Microsoft Fabric. Oversee the development of dashboards and analytics products using Power BI, ensuring they meet business requirements and usability standards. Drive the adoption of advanced analytics and machine learning models using Python, SQL, and R to support forecasting, segmentation, and operational insights. Establish and maintain team standards for code quality, documentation, testing, and version control. Design and deliver structured training plans to upskill team members in cloud platforms, analytics tools, and certifications (e.g., PL-300, SnowPro Core). Conduct regular performance reviews and development planning to support individual growth and team capability. Collaborate with business stakeholders to translate requirements into actionable data solutions and ensure timely delivery. Promote a culture of innovation, continuous improvement, and knowledge sharing within the team. Support the Head of Data Analytics in strategic planning, resource forecasting, and delivery governance. Contact Person: Anupriya Yugesh, Email ID: anupriya@gojobs.biz

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

You will be working at our office in Gurgaon, India. We are seeking highly qualified candidates from various locations. As a Tableau Developer, you should have at least 3 to 4+ years of IT experience with a focus on Tableau design and development, particularly in large enterprise-level applications. Your responsibilities will include delivering Analytic and canned reports, developing data visualizations and reporting, and presenting data in Tableau reports & dashboards using optimal technical methods. Your role will involve creating high-impact analytics such as heat maps and KPIs to assist the business in making informed decisions. You will also be responsible for maintaining Tableau reports, working on enhancements to existing reports, and creating new reports based on business requirements. Additionally, you should be proficient in presenting Tableau solutions to the business, modifying solutions based on feedback, and developing compelling visual analytics for easy interpretation by stakeholders. Sharing knowledge and building team capabilities are essential parts of this role. Your technical skills should include using custom SQL for complex data pulls, leveraging Tableau platform technologies to design proof of concept solutions, and creating advanced BI visualizations. Experience with AWS Redshift and open-source tools would be advantageous. You will also be involved in overseeing the Tableau Server infrastructure, from user setup to performance tuning in a high-availability environment. Knowledge of Data Warehousing concepts, relational and dimensional data modeling, and SQL reporting skills are required. The qualities we value in candidates include adaptability, intuitive brilliance, empathy, compassion, warmth, good humor, teamwork, superior communication skills, and a willingness to work in a startup environment. If you are proactive, detail-oriented, and passionate about Tableau development, we encourage you to apply for this role.,

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

hyderabad, telangana

On-site

You will be responsible for designing, installing, configuring, and maintaining Oracle Databases (19c, 12c, 11g) in on-premises and cloud environments (AWS/Azure). Your duties will include administering and optimizing Oracle Real Application Clusters (RAC), Automatic Storage Management (ASM), Data Guard, and Exadata systems. You will need to plan, execute, and validate database upgrades, patching, and migrations with minimal downtime. It will be your task to implement and manage robust backup and recovery strategies, ensuring data integrity and swift restoration capabilities. Monitoring database performance, identifying bottlenecks, and implementing tuning strategies to guarantee optimal performance and efficiency will be crucial. You are expected to act as the primary Oracle database subject matter expert for various projects, providing technical guidance and solutions for database-related requirements. Collaboration with development, operations, and cloud teams to design and implement database solutions meeting application needs and architectural standards is essential. Furthermore, you will manage and optimize cloud database services such as AWS RDS, Azure SQL Database, and gain expertise in AWS Redshift for data warehousing solutions. Developing and maintaining scripts for automation of routine database tasks, monitoring, and alerting using tools like Oracle Enterprise Manager (OEM) is part of the role. Ensuring database security, integrity, and compliance with relevant standards and policies is a critical aspect of the position. In addition, you will troubleshoot and resolve complex database issues, often under pressure. Mentoring junior team members and contributing to the knowledge base of the database are also expected from you. Qualifications: - 9 to 13 years of progressive experience in database administration and engineering. - Strong hands-on experience with Oracle Database versions 19c, 12c, and 11g. - In-depth expertise in Oracle RAC, ASM, Data Guard, and RMAN for backup and recovery. - Experience with Exadata engineered systems is highly desirable. - Proven experience with Oracle Enterprise Manager (OEM) for monitoring, management, and performance tuning. - Strong practical experience with Cloud platforms, specifically AWS and Azure. - Good working knowledge of AWS RDS and Azure database services. - Solid experience with AWS Redshift or other data warehousing solutions is a significant plus. - Proficiency in writing and optimizing complex SQL and PL/SQL queries. - Experience with database performance tuning and capacity planning. - Familiarity with scripting languages (e.g., Bash, Python, Perl) for automation.,

Posted 1 week ago

Apply

1.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As an Associate Manager - Data IntegrationOps, you will play a crucial role in supporting and managing data integration and operations programs within our data organization. Your responsibilities will involve maintaining and optimizing data integration workflows, ensuring data reliability, and supporting operational excellence. To succeed in this position, you will need a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support. Your primary duties will include assisting in the management of Data IntegrationOps programs, aligning them with business objectives, data governance standards, and enterprise data strategies. You will also be involved in monitoring and enhancing data integration platforms through real-time monitoring, automated alerting, and self-healing capabilities to improve uptime and system performance. Additionally, you will help develop and enforce data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization. Collaboration with cross-functional teams will be essential to optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security. You will also contribute to promoting a data-first culture by aligning with PepsiCo's Data & Analytics program and supporting global data engineering efforts across sectors. Continuous improvement initiatives will be part of your responsibilities to enhance the reliability, scalability, and efficiency of data integration processes. Furthermore, you will be involved in supporting data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members. Developing API-driven data integration solutions using REST APIs and Kafka, deploying and managing cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, and participating in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins will also be part of your role. Your qualifications should include at least 9 years of technology work experience in a large-scale, global organization, preferably in the CPG (Consumer Packaged Goods) industry. You should also have 4+ years of experience in Data Integration, Data Operations, and Analytics, as well as experience working in cross-functional IT organizations. Leadership/management experience supporting technical teams and hands-on experience in monitoring and supporting SAP BW processes are also required qualifications for this role. In summary, as an Associate Manager - Data IntegrationOps, you will be responsible for supporting and managing data integration and operations programs, collaborating with cross-functional teams, and ensuring the efficiency and reliability of data integration processes. Your expertise in enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support will be key to your success in this role.,
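The description above mentions developing API-driven data integration solutions using REST APIs and Kafka. The snippet below is a loose illustration of that pattern, not PepsiCo's implementation: it polls a hypothetical REST endpoint and publishes each record to a Kafka topic with kafka-python; the URL, topic, cursor field, and payload shape are invented for the example.

```python
# Hedged illustration of REST-to-Kafka integration; endpoint, topic, and payload
# shape are hypothetical. Requires the 'requests' and 'kafka-python' packages.
import json
import time

import requests
from kafka import KafkaProducer

API_URL = "https://api.example.com/v1/inventory/changes"   # assumed endpoint
TOPIC = "inventory-changes"                                 # assumed topic

producer = KafkaProducer(
    bootstrap_servers=["broker1:9092"],                     # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def poll_once(since: str) -> str:
    """Fetch changes since a cursor and forward them to Kafka; returns the new cursor."""
    resp = requests.get(API_URL, params={"since": since}, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    for record in payload.get("items", []):
        producer.send(TOPIC, value=record)
    producer.flush()
    return payload.get("cursor", since)

if __name__ == "__main__":
    cursor = "1970-01-01T00:00:00Z"
    while True:
        cursor = poll_once(cursor)
        time.sleep(60)   # simple polling loop; production code would add retries/backoff
```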

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

kolkata, west bengal

On-site

You must have knowledge in Azure Datalake, Azure function, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge in Azure DevOps and Git flow would be an added advantage. Alternatively, you should have working knowledge in AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS RedShift. Demonstrable expertise in working with timeseries data is essential. Experience in delivering data engineering/data science projects in Industry 4.0 is an added advantage. Knowledge of Palantir is required. You must possess strong problem-solving skills with a focus on sustainable and reusable development. Proficiency in using statistical computer languages like Python/PySpark, Pandas, Numpy, seaborn/matplotlib is necessary. Knowledge in Streamlit.io is a plus. Familiarity with Scala, GoLang, Java, and big data tools such as Hadoop, Spark, Kafka is beneficial. Experience with relational databases like Microsoft SQL Server, MySQL, PostGreSQL, Oracle, and NoSQL databases including Hadoop, Cassandra, MongoDB is expected. Proficiency in data pipeline and workflow management tools like Azkaban, Luigi, Airflow is required. Experience in building and optimizing big data pipelines, architectures, and data sets is crucial. You should possess strong analytical skills related to working with unstructured datasets. Provide innovative solutions to data engineering problems, document technology choices, and integration patterns. Apply best practices for project delivery with clean code. Demonstrate innovation and proactiveness in meeting project requirements. Reporting to: Director- Intelligent Insights and Data Strategy Travel: Must be willing to be deployed at client locations worldwide for long and short terms, flexible for shorter durations within India and abroad.,
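Because the posting lists workflow tools such as Airflow alongside pandas, here is a small, hedged Airflow 2.x DAG sketch showing a daily extract/transform pair; the DAG id, file paths, and aggregation logic are placeholders rather than anything specified in the listing.

```python
# Illustrative Airflow 2.x DAG: a daily extract -> transform pair of PythonOperator
# tasks. DAG id, paths, and business logic are placeholders.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    df = pd.read_csv("/tmp/raw_sensor_data.csv")              # assumed landing file
    df.to_parquet("/tmp/raw_sensor_data.parquet", index=False)

def transform(**_):
    df = pd.read_parquet("/tmp/raw_sensor_data.parquet")
    summary = df.groupby("sensor_id").mean(numeric_only=True)  # toy aggregation
    summary.to_parquet("/tmp/sensor_summary.parquet")

with DAG(
    dag_id="timeseries_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",     # 'schedule=' in newer Airflow releases
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```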

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

About Birlasoft: Birlasoft is a global leader in Cloud, AI, and Digital technologies, leveraging domain expertise to provide innovative enterprise solutions. With a consultative and design-thinking approach, Birlasoft empowers societies worldwide, enhancing business efficiency and productivity. As a part of the multibillion-dollar diversified CKA Birla Group, Birlasoft, comprising of 12,000+ professionals, is dedicated to upholding the Group's 170-year legacy of fostering sustainable communities. Job Title: AWS Redshift Expert Role Overview: We are looking for a highly skilled AWS Redshift Expert to join our data engineering team. This role is pivotal in supporting our AWS ProServe engagements and internal analytics initiatives. The ideal candidate should possess in-depth knowledge of Redshift architecture, performance tuning, and integration with BI tools like Looker. You will collaborate closely with cross-functional teams, including AWS tech leads, data analysts, and client stakeholders, to ensure the development of scalable, secure, and high-performing data solutions. Key Responsibilities: - Design, deploy, and manage AWS Redshift clusters for large-scale data warehousing. - Optimize query performance using DISTKEY, SORTKEY, and materialized views. - Work with BI teams to enhance LookML models and boost dashboard performance. - Conduct performance benchmarking and establish automated alerts for performance degradation. - Lead data migration projects from platforms like BigQuery to Redshift. - Ensure the implementation of data security, compliance, and backup/recovery protocols. - Provide technical leadership in client interviews and solution discussions. Required Skills & Experience: - Minimum of 5 years of experience in data engineering, with at least 3 years specializing in AWS Redshift. - Hands-on expertise in Redshift performance tuning and workload management. - Familiarity with BI tools such as Looker, Power BI, and semantic layer optimization. - Proficiency in cloud architecture and AWS services like EC2, S3, IAM, and VPC. - Excellent communication skills for effective interaction with clients and internal leadership.,
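Since the role highlights query optimization with DISTKEY, SORTKEY, and materialized views, the following is a hedged sketch of what such tuning DDL can look like when applied from Python via psycopg2; the table, columns, key choices, and cluster endpoint are invented for illustration and are not a recommendation for any specific workload.

```python
# Illustrative Redshift tuning DDL (DISTKEY/SORTKEY and a materialized view),
# executed via psycopg2. All object names and key choices are hypothetical.
import psycopg2

CREATE_FACT_TABLE = """
CREATE TABLE IF NOT EXISTS sales_fact (
    sale_id     BIGINT,
    customer_id BIGINT,
    sale_date   DATE,
    amount      DECIMAL(12, 2)
)
DISTSTYLE KEY
DISTKEY (customer_id)          -- co-locate rows joined on customer_id
SORTKEY (sale_date);           -- prune blocks for date-range filters
"""

CREATE_MATERIALIZED_VIEW = """
CREATE MATERIALIZED VIEW mv_daily_sales AS
SELECT sale_date, SUM(amount) AS total_amount
FROM sales_fact
GROUP BY sale_date;
"""

def apply_ddl() -> None:
    conn = psycopg2.connect(
        host="example-cluster.xxxxx.us-east-1.redshift.amazonaws.com",  # placeholder
        port=5439, dbname="analytics", user="admin", password="...",
    )
    try:
        with conn, conn.cursor() as cur:
            cur.execute(CREATE_FACT_TABLE)
            cur.execute(CREATE_MATERIALIZED_VIEW)
            cur.execute("REFRESH MATERIALIZED VIEW mv_daily_sales;")
    finally:
        conn.close()

if __name__ == "__main__":
    apply_ddl()
```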

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses. In this role, you should have developed/worked on at least one Gen AI project and have experience in data pipeline implementation with cloud providers such as AWS, Azure, or GCP. You should also be familiar with cloud storage, cloud database, cloud data warehousing, and Data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Additionally, a good understanding of cloud compute services, load balancing, identity management, authentication, and authorization in the cloud is essential. Your profile should include a good knowledge of infrastructure capacity sizing, costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. You should be able to contribute to making architectural choices using various cloud services and solution methodologies. Proficiency in programming using Python is required along with expertise in cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud. Understanding networking, security, design principles, and best practices in the cloud is also important. At Capgemini, we value flexible work arrangements to provide support for maintaining a healthy work-life balance. You will have opportunities for career growth through various career growth programs and diverse professions tailored to support you in exploring a world of opportunities. Additionally, you can equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner with a rich heritage of over 55 years. We have a diverse team of 340,000 members in more than 50 countries, working together to accelerate the dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. Trusted by clients to unlock the value of technology, we deliver end-to-end services and solutions leveraging strengths from strategy and design to engineering, fueled by market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and partner ecosystem. Our global revenues in 2023 were reported at 22.5 billion.,

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

You are an experienced BI Architect with a strong background in Power BI and the Microsoft Azure ecosystem. Your main responsibility will be to design, implement, and enhance business intelligence solutions that aid in strategic decision-making within the organization. You will play a crucial role in leading the BI strategy, architecture, and governance processes, while also guiding a team of BI developers and Data analysts. Your key responsibilities will include designing and implementing scalable BI solutions using Power BI and Azure services, defining BI architecture, data models, security models, and best practices for enterprise reporting. You will collaborate closely with business stakeholders to gather requirements and transform them into data-driven insights. Additionally, you will oversee data governance, metadata management, and Power BI workspace design, optimizing Power BI datasets, reports, and dashboards for performance and usability. Furthermore, you will be expected to establish standards for data visualization, development lifecycle, version control, and deployment. As a mentor to BI developers, you will ensure adherence to coding and architectural standards, integrate Power BI with other applications using APIs, Power Automate, or embedded analytics, and monitor and troubleshoot production BI systems to maintain high availability and data accuracy. To qualify for this role, you should have a minimum of 12 years of overall experience with at least 7 years of hands-on experience with Power BI, including expertise in data modeling, DAX, M/Power Query, custom visuals, and performance tuning. Strong familiarity with Azure services such as Azure SQL Database, Azure Data Lake, Azure Functions, and Azure DevOps is essential. You must also possess a solid understanding of data warehousing, ETL, and dimensional modeling concepts, along with proficiency in SQL, data transformation, and data governance principles. Experience in managing enterprise-level Power BI implementations with large user bases and complex security requirements, excellent communication and stakeholder management skills, the ability to lead cross-functional teams, and influence BI strategy across departments are also prerequisites for this role. Knowledge of Microsoft Fabric architecture and its components, a track record of managing BI teams of 6 or more, and the capability to provide technical leadership and team development are highly desirable. In addition, having the Microsoft Fabric Certification DP 600 and PL-300 would be considered a bonus for this position.,

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

pune, maharashtra

On-site

About Birlasoft: Birlasoft is a global leader in Cloud, AI, and Digital technologies, combining domain expertise with enterprise solutions to empower societies worldwide and enhance business efficiency and productivity. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft is committed to upholding the Group's 170-year heritage of building sustainable communities with a workforce of 12,000+ professionals. Job Title: AWS Redshift Expert Role Overview: We are looking for a highly skilled AWS Redshift Expert to join our data engineering team. This role plays a critical part in supporting our AWS ProServe engagements and internal analytics initiatives. The ideal candidate should possess in-depth expertise in Redshift architecture, performance tuning, and integration with BI tools such as Looker. Working closely with cross-functional teams, including AWS tech leads, data analysts, and client stakeholders, you will ensure the development of scalable, secure, and high-performing data solutions. Key Responsibilities: - Design, deploy, and manage AWS Redshift clusters for enterprise-scale data warehousing. - Optimize query performance using DISTKEY, SORTKEY, and materialized views. - Collaborate with BI teams to refactor LookML models and enhance dashboard performance. - Conduct performance benchmarking and set up automated alerts for degradation. - Lead data migration efforts from platforms like BigQuery to Redshift. - Ensure data security, compliance, and backup/recovery protocols are effectively implemented. - Provide technical leadership during client interviews and solution discussions. Required Skills & Experience: - 5+ years of experience in data engineering with a minimum of 3 years focused on AWS Redshift. - Hands-on experience in Redshift performance tuning and workload management. - Familiarity with BI tools like Looker, Power BI, and semantic layer optimization. - Proficiency in cloud architecture and AWS services (EC2, S3, IAM, VPC). - Excellent communication skills to engage with clients and internal leadership.,

Posted 2 weeks ago

Apply

8.0 - 12.0 years

12 - 18 Lacs

Noida

Work from Office

General Roles & Responsibilities:
- Technical Leadership: Demonstrate leadership and the ability to guide business and technology teams in the adoption of best practices and standards.
- Design & Development: Design, develop, and maintain a robust, scalable, and high-performance data estate.
- Architecture: Architect and design robust data solutions that meet business requirements and address scalability, performance, and security.
- Quality: Ensure the quality of deliverables through rigorous reviews and adherence to standards.
- Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement.
- Collaboration: Work closely with system architects, data engineers, data scientists, data analysts, cloud engineers, and other business stakeholders to determine an optimal solution and architecture that is also future-proof.
- Innovation: Stay updated with the latest industry trends and technologies, and drive continuous improvement initiatives within the development team.
- Documentation: Create and maintain technical documentation, including design documents and architectural user guides.

Technical Responsibilities:
- Optimize data pipelines for performance and efficiency.
- Work with Databricks clusters and configuration management tools.
- Use appropriate tools for cloud data lake development and deployment.
- Develop and implement cloud infrastructure to support current and future business needs.
- Provide technical expertise and ownership in the diagnosis and resolution of issues.
- Ensure all cloud solutions exhibit a high level of cost efficiency, performance, security, scalability, and reliability.
- Manage cloud data lake development and deployment on AWS Databricks.
- Manage and create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions in Databricks.

Required Technical Skills:
- Experience and proficiency with the Databricks platform: Delta Lake storage, Spark (PySpark, Spark SQL).
- Well versed in the Databricks Lakehouse and Unity Catalog concepts and their implementation in enterprise environments.
- Familiarity with the medallion architecture data design pattern for organizing data in a Lakehouse (a minimal sketch follows this posting).
- Experience and proficiency with AWS data services (S3, Glue, Athena, Redshift, etc.) and Airflow scheduling.
- Proficiency in SQL and experience with relational databases.
- Proficiency in at least one programming language (e.g., Python, Java) for data processing and scripting.
- Experience with DevOps practices: AWS DevOps for CI/CD, Terraform/CDK for infrastructure as code.
- Good understanding of data principles and cloud data lake design and development, including data ingestion, data modeling, and data distribution.
- Jira: proficient in using Jira for managing projects and tracking progress.

Other Skills:
- Strong communication and interpersonal skills.
- Collaborate with data stewards, data owners, and IT teams for effective implementation.
- Understanding of business processes and terminology, preferably in Logistics.
- Experienced with Scrum and Agile methodologies.

Qualification:
- Bachelor's degree in information technology or a related field; equivalent experience may be considered.
- Overall experience of 8-12 years in Data Engineering.

Mandatory Competencies:
- Data Science and Machine Learning: Databricks
- Data on Cloud: Azure Data Lake (ADL), AWS S3, S3 Glacier, AWS EBS, Redshift
- Agile: Scrum
- Data Analysis (including QA Analytics)
- Big Data: PySpark
- ETL: AWS Glue
- Programming: Python (including Python Shell), SQL
- DevOps / Development Tools and Management: CI/CD
- Behavioural: communication and collaboration
- Cloud - Azure: Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight
- Cloud - AWS: TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift, AWS Lambda, AWS EventBridge, AWS Fargate
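As a rough illustration of the medallion pattern referenced above (not part of the posting), the following PySpark sketch assumes a Databricks-style environment with Delta Lake available and hypothetical table names; it promotes raw events from a bronze table into a cleansed silver table.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already provided; this line makes the
# sketch self-contained elsewhere.
spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze layer: raw ingested events (hypothetical table name).
bronze = spark.read.table("bronze.raw_events")

# Silver layer: deduplicated, typed, and filtered records.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("ingest_date", F.to_date("event_ts"))
    .filter(F.col("event_id").isNotNull())
)

# Persist as a Delta table partitioned by ingest date (hypothetical target).
(
    silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("ingest_date")
    .saveAsTable("silver.events_clean")
)
```

A gold layer would typically follow the same pattern, aggregating silver tables into business-level marts.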

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 - 1 Lacs

Hyderabad, Navi Mumbai, Pune

Work from Office

Role & responsibilities

Key Responsibilities:
- Design, develop, and deploy interactive dashboards and visualizations using TIBCO Spotfire.
- Work with stakeholders to gather business requirements and translate them into scalable BI solutions.
- Optimize Spotfire performance and apply best practices in visualization and data storytelling.
- Integrate data from multiple sources such as SQL databases, APIs, Excel, SAP, or cloud platforms.
- Implement advanced analytics using IronPython scripting, data functions, and R/statistical integration (a minimal scripting sketch follows this posting).
- Conduct data profiling, cleansing, and validation to ensure accuracy and consistency.
- Support end users with training, troubleshooting, and dashboard enhancements.

Must-Have Skills:
- 5-8 years of experience in BI and data visualization.
- Minimum 4 years of hands-on experience with TIBCO Spotfire, including custom expressions and calculated columns.
- Strong knowledge of data modeling, ETL processes, and SQL scripting.
- Expertise in IronPython scripting for interactivity and automation within Spotfire.
- Experience working with large datasets and performance tuning of visualizations.

Good to Have:
- Experience with R, Python, or Statistica for advanced analytics in Spotfire.
- Familiarity with cloud-based data platforms (AWS Redshift, Snowflake, Azure Synapse).
- Understanding of data governance, metadata management, and access controls.
- Exposure to other BI tools like Tableau, Power BI, or QlikView.
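For readers unfamiliar with the IronPython scripting mentioned above, here is a minimal, hedged sketch (not from the posting) of the kind of script a Spotfire action control might run. It relies on the `Document` object that Spotfire injects into the scripting context; the property names are hypothetical and would need to exist in the analysis.

```python
# IronPython script of the sort attached to a Spotfire button/action control.
# 'Document' is provided by the Spotfire scripting context; property names
# below are hypothetical.

# Set a document property that custom expressions, calculated columns, or
# data functions can reference (e.g. as "${RegionFilter}").
Document.Properties["RegionFilter"] = "EMEA"

# Write a second property, e.g. to drive a dynamic label in a text area.
Document.Properties["StatusMessage"] = (
    "Region filter set to " + Document.Properties["RegionFilter"]
)
```

This is the common pattern for interactivity: the script only updates properties, and visuals or data functions that reference those properties recalculate automatically.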

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

Genpact is a global professional services and solutions firm with a workforce of 125,000+ employees spanning 30+ countries. Driven by innate curiosity, entrepreneurial agility, and the desire to create lasting value for clients, we serve and transform leading enterprises, including Fortune Global 500 companies. Our purpose, the relentless pursuit of a world that works better for people, powers our operations. We specialize in deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the role of Lead Consultant - ETL Manual Tester. We are looking for an experienced Test Manager to oversee and manage testing activities for our Data/ETL (Extract, Transform, Load) program. The ideal candidate will ensure the quality and reliability of our data processing systems. This role involves developing testing strategies, managing a team of test engineers, and collaborating with other departments to ensure the successful delivery of the program.

Responsibilities:

Test Strategy & Planning:
- Develop and implement a comprehensive testing strategy for the data transformation/ETL program.
- Plan, design, and manage the execution of test cases, scripts, and procedures for data validation and ETL processes (a reconciliation sketch follows this posting).
- Oversee the preparation of test data and environments to meet complex data workflow requirements.

Team Management & Leadership:
- Lead and mentor a team of test engineers, setting clear goals and expectations and providing regular feedback.
- Foster a culture of quality and continuous improvement within the team.
- Coordinate with project managers, data engineers, and business analysts for effective communication and issue resolution.

Testing & Quality Assurance:
- Identify defects and issues in data processing and ETL workflows.
- Implement and maintain quality assurance policies and procedures for data integrity and reliability.
- Monitor and report on testing activities, including test results, defect tracking, and quality metrics.

Stakeholder Engagement:
- Act as the primary contact for all testing-related activities within the Data/ETL program.
- Communicate testing progress, risks, and outcomes to program stakeholders.
- Collaborate with business users to align testing strategies with business objectives.

Technology & Tools:
- Stay updated on the latest testing methodologies, tools, and technologies related to data and ETL processes.
- Recommend and implement tools and technologies to enhance testing efficiency.
- Ensure the testing team is trained and proficient in using testing tools and technologies.

Qualifications we seek in you:
- Bachelor's degree in computer science, Information Technology, or a related field.
- Experience in a testing role focusing on data and ETL processes.
- Proven experience managing a testing team for large-scale data projects.
- Strong understanding of data modeling, ETL processes, and data warehousing principles.
- Proficiency in SQL and database technologies.
- Experience with test automation tools and frameworks.
- Excellent analytical, problem-solving, and communication skills.
- Ability to work collaboratively in a team environment and manage multiple priorities.

Preferred Skills:
- Experience with cloud-based data warehousing solutions (AWS Redshift, Google BigQuery, Azure Synapse Analytics).
- Knowledge of Agile methodologies and experience working in an Agile environment.

If you meet the qualifications mentioned above and are passionate about testing and quality assurance, we invite you to apply for the Lead Consultant - ETL Manual Tester position based in Hyderabad, India.
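To ground the data-validation responsibility above, here is a minimal sketch (not part of the posting) of a source-to-target reconciliation check of the kind an ETL test team might automate. The connection strings, table names, and key column are hypothetical; SQLAlchemy and pandas are assumed to be available, and the tables are assumed to be non-empty.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connections: an operational source and a Redshift target
# (Redshift speaks the PostgreSQL wire protocol).
source_engine = create_engine("postgresql+psycopg2://etl_test@source-db:5432/sales")
target_engine = create_engine("postgresql+psycopg2://etl_test@redshift-host:5439/analytics")

def reconcile(table: str, key: str) -> dict:
    """Compare row counts and a simple numeric aggregate between source and target.

    'table' and 'key' must be trusted identifiers (test-config values),
    since they are interpolated directly into SQL.
    """
    sql = f"SELECT COUNT(*) AS n, SUM({key}) AS k FROM {table}"
    src = pd.read_sql(sql, source_engine)
    tgt = pd.read_sql(sql, target_engine)
    return {
        "table": table,
        "rowcount_match": int(src["n"][0]) == int(tgt["n"][0]),
        "sum_match": float(src["k"][0]) == float(tgt["k"][0]),
    }

if __name__ == "__main__":
    for table in ["orders", "order_lines"]:  # hypothetical tables under test
        print(reconcile(table, key="order_id"))
```

Real suites would extend this with column-level null/domain checks and persist results as defect-tracking evidence.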

Posted 2 weeks ago

Apply

6.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Role & responsibilities

Mandatory skills: Python, AWS, data modelling, SQL; DevOps is good to have, not mandatory.

Recruiter note: Please avoid candidates who qualified from universities in Hyderabad or Telangana; Hyderabad and Telangana candidates may be considered only if they are currently working in tier-one companies.

Job Description:
The ideal candidate will have 6 to 8 years of experience in data modelling and architecture, with deep expertise in Python, the AWS cloud stack, data warehousing, and enterprise data modelling tools. This individual will be responsible for designing and creating enterprise-grade data models and driving the implementation of a Layered Scalable Architecture or Medallion Architecture to support robust, scalable, and high-quality data marts across multiple business units. The role involves managing complex datasets from systems like PoS, ERP, CRM, and external sources while optimizing performance and cost. You will also provide strategic leadership on data modelling standards, governance, and best practices, ensuring the foundation for analytics and reporting is solid and future-ready.

Key Responsibilities:
- Design and deliver conceptual, logical, and physical data models using tools like ERWin.
- Implement a Layered Scalable Architecture / Medallion Architecture for building scalable, standardized data marts.
- Optimize performance and cost of AWS-based data infrastructure (Redshift, S3, Glue, Lambda, etc.).
- Collaborate with cross-functional teams (IT, business, analysts) to gather data requirements and ensure model alignment with KPIs and business logic.
- Develop and optimize SQL code, materialized views, and stored procedures in AWS Redshift (a stored-procedure sketch follows this posting).
- Ensure data governance, lineage, and quality mechanisms are established across systems.
- Lead and mentor technical teams in an Agile project delivery model.
- Manage data layer creation and documentation: data dictionary, ER diagrams, purpose mapping.
- Identify data gaps and availability issues with respect to source systems.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, IT, or a related field (B.E./B.Tech/M.E./M.Tech/MCA).
- Minimum 8 years of experience in data modelling and architecture.
- Proficiency with data modelling tools such as ERWin, with strong knowledge of forward and reverse engineering.
- Deep expertise in SQL (including advanced SQL, stored procedures, and performance tuning).
- Strong experience in Python, data warehousing, RDBMS, and ETL tools like AWS Glue, IBM DataStage, or SAP Data Services.
- Hands-on experience with AWS services: Redshift, S3, Glue, RDS, Lambda, Bedrock, and Q.
- Good understanding of reporting tools such as Tableau, Power BI, or AWS QuickSight.
- Exposure to DevOps/CI-CD pipelines, AI/ML, Gen AI, NLP, and polyglot programming is a plus.
- Familiarity with data governance tools (e.g., ORION/EIIG).
- Domain knowledge in Retail, Manufacturing, HR, or Finance preferred.
- Excellent written and verbal communication skills.

Certifications (preferred):
- AWS certification (e.g., AWS Certified Solutions Architect or Data Analytics Specialty).
- Data governance or data modelling certifications (e.g., CDMP, Databricks, or TOGAF).

Mandatory Skills: Python, AWS, Technical Architecture, AI/ML, SQL, Data Warehousing, Data Modelling

Preferred candidate profile: Share resumes at Sakunthalaa@valorcrest.in
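The Redshift stored-procedure responsibility above can be illustrated briefly. The sketch below is hedged: the schema, table, and connection details are hypothetical, and it uses psycopg2 to create and call a simple PL/pgSQL procedure that rebuilds a reporting mart table.

```python
import psycopg2

# Hypothetical Redshift connection details.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="modeler", password="********",
)
conn.autocommit = True

procedure = """
CREATE OR REPLACE PROCEDURE mart.refresh_daily_sales()
AS $$
BEGIN
    -- Rebuild the mart table from the staging layer.
    TRUNCATE mart.daily_sales;
    INSERT INTO mart.daily_sales (sale_date, total_amount)
    SELECT sale_date, SUM(amount)
    FROM staging.sales_fact
    GROUP BY sale_date;
END;
$$ LANGUAGE plpgsql;
"""

with conn.cursor() as cur:
    cur.execute(procedure)
    cur.execute("CALL mart.refresh_daily_sales();")
```

In a real deployment the CALL would typically be scheduled (for example via an orchestration tool) rather than run ad hoc from a script.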

Posted 3 weeks ago

Apply

5.0 - 7.0 years

19 Lacs

Kolkata, Mumbai, Hyderabad

Work from Office

Reporting to: Global Head of Data Operations

Role purpose:
As a Data Engineer, you will be a driving force towards data engineering excellence. Working with other data engineers, analysts, and the architecture function, you'll be involved in building out a modern data platform using a number of cutting-edge technologies in a multi-cloud environment. You'll get the opportunity to spread your knowledge and skills across multiple areas, with involvement in a range of different functional areas. As the business grows, we want our staff to grow with us, so there'll be plenty of opportunity to learn and upskill in areas such as data pipelines, data integrations, data preparation, data models, and analytical and reporting marts. Also, whilst work often follows business requirements and design concepts, you'll play a huge part in the continuous development and maturing of design patterns and automation processes for others to follow.

Accountabilities and main responsibilities:
In this role, you will be delivering solutions and patterns through Agile methodologies as part of a squad. You'll be collaborating with customers, partners, and peers, and will help to identify data requirements. We'd also rely on you to:
- Help break down large problems into smaller iterative steps.
- Contribute to defining the prioritisation of your squad's backlog.
- Build out the modern data platform (data pipelines, data integrations, data preparation, data models, analytical and reporting marts) based on business requirements using agreed design patterns (a pipeline sketch follows this posting).
- Help determine the most appropriate tool, method, and design pattern to satisfy the requirement.
- Proactively suggest improvements where you see issues.
- Learn how to prepare our data in order to surface it for use within APIs.
- Learn how to document, support, manage, and maintain the modern data platform built within your squad.
- Learn how to provide guidance and training to downstream consumers of data on how best to use the data in our platform.
- Learn how to support and build new data APIs.
- Contribute to evangelising and educating within Sanne about the better use and value of data.
- Comply with all Sanne policies.
- Any other duties within the scope of the role that the company requires.

Qualifications and skills:

Technical skills:
- Data warehousing and data modelling
- Data lakes (AWS Lake Formation, Azure Data Lake)
- Cloud data warehouses (AWS Redshift, Azure Synapse, Snowflake)
- ETL/ELT/pipeline tools (AWS Glue, Azure Data Factory, Fivetran, Stitch)
- Data message bus / pub-sub systems (AWS SNS & SQS, Azure ASQ, Kafka, RabbitMQ)
- Data programming languages (SQL, Python, Scala, Java)
- Cloud workflow services (AWS Step Functions, Azure Logic Apps, Camunda)
- Interactive query services (AWS Athena, Azure DL Analytics)
- Event and schedule management (AWS Lambda functions, Azure Functions)
- Traditional Microsoft BI stack (SQL Server, SSIS, SSAS, SSRS)
- Reporting and visualisation tools (Power BI, QuickSight, Mode)
- NoSQL and graph DBs (AWS Neptune, Azure Cosmos, Neo4j) (desirable)
- API management (desirable)

Core skills:
- Excellent communication and interpersonal skills
- Critical thinking and research capabilities
- Strong problem-solving skills
- Ability to plan and manage your own workload
- Work well on your own initiative as well as part of a bigger team
- Working knowledge of Agile software development lifecycles
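As a rough, hedged illustration of the pipeline-building accountability above (not part of the posting), the following AWS Glue job skeleton assumes a Glue Data Catalog database and table plus an S3 output path, all hypothetical; it reads a catalogued source, drops rows missing the business key, and writes curated Parquet to S3.

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job boilerplate: resolve arguments and initialise the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders"
)

# Basic preparation with Spark: drop rows that lack the business key.
prepared_df = raw.toDF().dropna(subset=["order_id"])
prepared = DynamicFrame.fromDF(prepared_df, glue_context, "prepared")

# Write curated Parquet to S3 (hypothetical bucket/prefix).
glue_context.write_dynamic_frame.from_options(
    frame=prepared,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```

An Azure Data Factory or Fivetran pipeline would cover the same ingest-prepare-land steps with configuration rather than code; the Glue sketch is just one of the toolchains the posting lists.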

Posted 1 month ago

Apply

4.0 - 6.0 years

11 - 12 Lacs

Mumbai

Work from Office

Notice Period: Immediate

iSource Services is hiring for one of its clients for the position of Tableau Developer.

About the Role: We are seeking an experienced Tableau Developer with 4+ years of experience to work in Mumbai. The candidate should have a strong background in data visualization, analytics, and business intelligence to drive insights for the organization.

Responsibilities:
- Develop interactive Tableau dashboards and reports based on business requirements.
- Connect, clean, and transform data from multiple sources for visualization (a data-preparation sketch follows this posting).
- Optimize dashboards for performance and usability.
- Work closely with business and technical teams to gather requirements.
- Implement best practices for data visualization and storytelling.
- Automate data refreshes and ensure data accuracy.
- Collaborate with data engineers and analysts for efficient data modeling.

Requirements:
- 4+ years of experience in Tableau development and data visualization.
- Proficiency in SQL, data modeling, and ETL processes.
- Experience with data sources like SQL Server, Snowflake, or AWS Redshift.
- Strong understanding of data warehousing concepts.
- Ability to analyze and interpret complex data sets.
- Experience in Python/R (preferred but not mandatory).
- Excellent communication and stakeholder management skills.
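To make the data-preparation responsibility above concrete, here is a minimal sketch (not part of the posting) that pulls recent rows from a Redshift-style source with pandas and SQLAlchemy, cleans them, and writes a file that a Tableau workbook could connect to; the connection string, query, and output path are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical Redshift connection (Redshift speaks the PostgreSQL protocol).
engine = create_engine("postgresql+psycopg2://bi_reader@redshift-host:5439/analytics")

query = """
SELECT region, sale_date, amount
FROM mart.daily_sales
WHERE sale_date >= CURRENT_DATE - 90
"""

df = pd.read_sql(query, engine)

# Light cleaning: enforce types, drop obviously bad rows, add a derived column.
df["sale_date"] = pd.to_datetime(df["sale_date"])
df = df[df["amount"] > 0]
df["week"] = df["sale_date"].dt.to_period("W").astype(str)

# Output an extract-friendly file that a Tableau data source can point at.
df.to_csv("daily_sales_extract.csv", index=False)
```

In most deployments Tableau would connect to the warehouse directly and the scheduled refresh would live in Tableau Server/Cloud; the CSV step here is only to keep the sketch self-contained.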

Posted 1 month ago

Apply