Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 - 10.0 years
0 Lacs
Bhopal, Madhya Pradesh, India
On-site
Role: Data Engineers (5-10 Years of Experience)
Experience: 5-10 years
Location: Gurgaon, Pune, Bangalore, Chennai, Jaipur and Bhopal
Skills: Python/Scala, SQL, ETL, Big Data (Spark, Kafka, Hive), Cloud (AWS/Azure/GCP), Data Warehousing

Responsibilities
Build and maintain robust, scalable data pipelines and systems.
Design and implement ETL processes to support analytics and reporting.
Optimize data workflows for performance and scalability.
Collaborate with data scientists, analysts, and engineering teams.
Ensure data quality, governance, and security compliance.

Required Skills
Strong experience with Python/Scala, SQL, and ETL tools.
Hands-on experience with Big Data technologies (Hadoop, Spark, Kafka, Hive, etc.).
Proficiency in cloud platforms (AWS/GCP/Azure).
Experience with data warehousing (e.g., Redshift, Snowflake, BigQuery).
Familiarity with CI/CD pipelines and version control systems.

Nice To Have
Experience with Airflow, Databricks, or dbt.
Knowledge of real-time data processing.
(ref:hirist.tech)
Posted 4 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services like Glue, Lambda, Step Functions, and Data Pipeline.
Automate data ingestion from various sources such as databases, APIs, logs, and streaming data.
Optimize data processing for performance and cost efficiency.
Work with Amazon Redshift, Athena, or Snowflake to build and optimize data warehouses for analytics.
Optimize queries, indexing, and partitioning for performance improvements.
Work with data scientists, analysts, and software engineers to deliver data solutions.
Understand business requirements and translate them into scalable data solutions.
Maintain documentation and ensure best practices for data engineering in AWS.
Strong SQL skills are required.
Posted 4 days ago
1.0 - 3.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Machine Learning Engineer

Key Responsibilities
Collaborate with data scientists to support end-to-end ML model development, including data preparation, feature engineering, training, and evaluation.
Build and maintain automated pipelines for data ingestion, transformation, and model scoring using Python and SQL.
Assist in model deployment using CI/CD pipelines (e.g., Jenkins) and ensure smooth integration with production systems.
Develop tools and scripts to support model monitoring, logging, and retraining workflows.
Work with data from relational databases (RDS, Redshift) and preprocess it for model consumption.
Analyze pipeline performance and model behavior; identify opportunities for optimization and refactoring.
Contribute to the development of a feature store and standardized processes to support reproducible data science.

Required Skills & Experience
1-3 years of hands-on experience in Python programming for data science or ML engineering tasks.
Solid understanding of machine learning workflows, including model training, validation, deployment, and monitoring.
Proficient in SQL and working with structured data from sources like Redshift, RDS, etc.
Familiarity with ETL pipelines and data transformation best practices.
Basic understanding of ML model deployment strategies and CI/CD tools like Jenkins.
Strong analytical mindset with the ability to interpret and debug data/model issues.

Preferred Qualifications
Exposure to frameworks like scikit-learn, XGBoost, LightGBM, or similar.
Knowledge of ML lifecycle tools (e.g., MLflow, Ray).
Familiarity with cloud platforms (AWS preferred) and scalable infrastructure.
(ref:hirist.tech)
Posted 4 days ago
7.0 - 15.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Data Architecture Design: Develop and maintain a comprehensive data architecture strategy that aligns with the business objectives and technology landscape.
Data Modeling: Create and manage logical, physical, and conceptual data models to support various business applications and analytics.
Database Design: Design and implement database solutions, including data warehouses, data lakes, and operational databases.
Data Integration: Oversee the integration of data from disparate sources into unified, accessible systems using ETL/ELT processes.
Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, consistency, and security.
Technology Evaluation: Evaluate and recommend data management tools, technologies, and best practices to improve data infrastructure and processes.
Collaboration: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver effective solutions.
Documentation: Create and maintain documentation related to data architecture, data flows, data dictionaries, and system interfaces.
Performance Tuning: Optimize database performance through tuning, indexing, and query optimization.
Security: Ensure data security and privacy by implementing best practices for data encryption, access controls, and compliance with relevant regulations (e.g., GDPR, CCPA).

Requirements
Helping project teams with solutions architecture, troubleshooting, and technical implementation assistance.
Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, Oracle, SQL Server).
Minimum 7 to 15 years of experience in data architecture or related roles.
Experience with big data technologies (e.g., Hadoop, Spark, Kafka, Airflow).
Expertise with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services.
Knowledge of data integration tools (e.g., Informatica, Talend, Fivetran, Meltano).
Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, Synapse, BigQuery).
Experience with data governance frameworks and tools.
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Associate Director, Data Engineering

Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Lead an organization driven by digital technology and data-backed approaches that supports a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be one of the leaders who have a passion for using data, analytics, and insights to drive decision-making, which will allow us to tackle some of the world's greatest health threats.

Our technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the tech centers.

Role Overview
As the Associate Director, Data Engineering, your role will focus on business intelligence to enhance data-driven decision-making across the organization. This role is crucial for transforming data into valuable insights that drive business performance, support strategic initiatives, and ultimately contribute to our company's mission to use science to improve and save lives around the world.
What Will You Do In This Role
You will ensure that business intelligence activities are efficient and effective, enabling timely access to accurate data for informed decision-making, with a focus on automation, controls, and data quality.
Design, develop and maintain data pipelines to extract data from a variety of sources and populate the data lake and data warehouse.
Collaborate with data analysts, data scientists, and machine learning engineers to identify and transform data for ingestion, exploration, and modeling.
Work with the data governance team to implement data quality checks and maintain data catalogs.
Use orchestration, logging, and monitoring tools to build resilient pipelines.
Use a test-driven development methodology when building ELT/ETL pipelines.
Understand and apply concepts like data lake, data warehouse, lakehouse, data mesh and data fabric where relevant.
Develop data models for cloud data warehouses like Redshift and Snowflake.
Develop pipelines to ingest data into cloud data warehouses.
You will investigate enterprise data requirements where there is some complexity and ambiguity, and plan your own data modeling and design activities, selecting appropriate techniques and the correct level of detail for meeting assigned objectives.
You will define and implement data engineering strategies that align with organizational goals and data governance standards.
You will play a lead role in agile engineering and consulting, providing guidance on complex and unplanned data challenges.
You will collaborate in the formulation of analytics policies, standards, and best practices to ensure consistency and compliance across the organization.
Encourage a culture of continuous learning, constructive collaboration, and innovation within the team.

What Should You Have
Bachelor's degree in Computer Science/Engineering, Data Sciences, Bioinformatics, Biostatistics or any other computational quantitative science.
Minimum of 5-7 years of experience developing data pipelines and data infrastructure, ideally within a drug development or life sciences context.
Expertise in software/data engineering practices (including versioning, release management, deployment of datasets, agile and related software tools).
Strong software development skills in R, Python, SQL, and PySpark.
Working knowledge of Agile.
Strong working knowledge of at least one large-scale data processing technology (e.g., high-performance computing, distributed computing), databases and underlying technology (cloud or on-prem environments, containerization, distributed storage and databases).
Strong interpersonal and communication skills (verbal and written), effectively bridging scientific and business needs; experience working in a matrix environment.
Proven record of delivering high-quality results in quantitative sciences and/or a solid publication track record.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity.
You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Not Applicable
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 07/14/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R336586
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Step into the role of a Senior Data Engineer. At Barclays, innovation isn't just encouraged, it's expected. As a Senior Data Engineer you will build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.

To be a successful Senior Data Engineer, you should have experience with:
Hands-on work with large-scale data platforms and development of cloud solutions on the AWS data platform, with a proven track record of driving business success.
Strong understanding of AWS and distributed computing paradigms; ability to design and develop data ingestion programs to process large data sets in batch mode using Glue, Lambda, S3, Redshift, Snowflake and Databricks.
Ability to develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.
Hands-on programming experience in Python and PySpark.
Understanding of DevOps pipelines using Jenkins and GitLab; strong grasp of data modelling and data architecture concepts; well versed in project management tools and Agile methodology.
Sound knowledge of data governance principles and tools (Alation/Glue Data Quality, data mesh); capable of suggesting solution architecture for diverse technology applications.

Additional Relevant Skills Given Below Are Highly Valued
Experience working in the financial services industry, including various Settlements and Sub-ledger functions like PNS, Stock Record and Settlements, and PNL.
Knowledge of BPS, IMPACT and Gloss products from Broadridge, and of creating ML models using Python, Spark and Java.

You may be assessed on key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune.
Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities
Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete and consistent data.
Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
Develop processing and analysis algorithms fit for the intended data complexity and volumes.
Collaborate with data scientists to build and deploy machine learning models.

Vice President Expectations
To contribute to or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures.
If managing a team, they define jobs and responsibilities, plan for the department's future needs and operations, counsel employees on performance and contribute to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
OR, for an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction.
They will lead collaborative, multi-year assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions.
Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment.
Manage and mitigate risks through assessment, in support of the control and governance agenda.
Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does.
Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business.
Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and business strategies.
Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions.
Adopt and include the outcomes of extensive research in problem-solving processes.
Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join Barclays as a Senior Data Engineer. At Barclays, we are building the bank of tomorrow. As a Senior Data Engineer you will build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.

To be a successful Senior Data Engineer, you should have experience with:
Hands-on work with large-scale data platforms and development of cloud solutions on the AWS data platform, with a proven track record of driving business success.
Strong understanding of AWS and distributed computing paradigms; ability to design and develop data ingestion programs to process large data sets in batch mode using Glue, Lambda, S3, Redshift, Snowflake and Databricks.
Ability to develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.
Hands-on programming experience in Python and PySpark.
Understanding of DevOps pipelines using Jenkins and GitLab; strong grasp of data modelling and data architecture concepts; well versed in project management tools and Agile methodology.
Sound knowledge of data governance principles and tools (Alation/Glue Data Quality, data mesh); capable of suggesting solution architecture for diverse technology applications.

Additional Relevant Skills Given Below Are Highly Valued
Experience working in the financial services industry, including various Settlements and Sub-ledger functions like PNS, Stock Record and Settlements, and PNL.
Knowledge of BPS, IMPACT and Gloss products from Broadridge, and of creating ML models using Python, Spark and Java.

You may be assessed on key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune.
Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure.

Accountabilities
Build and maintain data architecture pipelines that enable the transfer and processing of durable, complete and consistent data.
Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures.
Develop processing and analysis algorithms fit for the intended data complexity and volumes.
Collaborate with data scientists to build and deploy machine learning models.

Vice President Expectations
To contribute to or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures.
If managing a team, they define jobs and responsibilities, plan for the department's future needs and operations, counsel employees on performance and contribute to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements.
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
OR, for an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction.
They will lead collaborative, multi-year assignments and guide team members through structured assignments, identifying the need to include other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions.
Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment.
Manage and mitigate risks through assessment, in support of the control and governance agenda.
Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does.
Demonstrate comprehensive understanding of the organisation's functions to contribute to achieving the goals of the business.
Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and business strategies.
Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions.
Adopt and include the outcomes of extensive research in problem-solving processes.
Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 4 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as a Data Engineering Lead. This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You'll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers' and the bank's data safe and secure. Participating actively in the data engineering community, you'll deliver opportunities to support our strategic direction while building your network across the bank. We're recruiting for multiple roles across a range of levels, up to and including experienced managers.

What you'll do
We'll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You'll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering, leading a team of data engineers.
We'll also expect you to be:
Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code.
Helping to define common coding standards and model performance monitoring best practices.
Owning and delivering the automation of data engineering pipelines through the removal of manual stages.
Developing comprehensive knowledge of the bank's data structures and metrics, advocating change where needed for product development.
Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight.
Leading and delivering data engineering strategies to build a scalable data architecture and a feature-rich customer dataset for data scientists.
Leading and developing solutions for streaming data ingestion and transformation in line with our streaming strategy.

The skills you'll need
To be successful in this role, you'll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You'll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure.

You'll also demonstrate:
Knowledge of core computer science concepts such as common data structures and algorithms, profiling and optimisation.
An understanding of machine learning, information retrieval or recommendation systems.
Good working knowledge of CI/CD tools.
Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala.
An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow.
Knowledge of messaging, event or streaming technology such as Apache Kafka.
Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling.
Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL.
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join Barclays as a Senior Data Engineer. At Barclays, we are building the bank of tomorrow. As a Strategy and Transformation Lead you will build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes to ensure that all data is accurate, accessible, and secure. To be a successful Senior Data Engineer, you should have experience with: Hands on experience to work with large scale data platforms & in development of cloud solutions in AWS data platform with proven track record in driving business success. Strong understanding of AWS and distributed computing paradigms, ability to design and develop data ingestion programs to process large data sets in Batch mode using Glue, Lambda, S3, redshift and snowflake and data bricks. Ability to develop data ingestion programs to ingest real-time data from LIVE sources using Apache Kafka, Spark Streaming and related technologies. Hands on programming experience in python and PY-Spark. Understanding of Dev Ops Pipelines using Jenkins, GitLab & should be strong in data modelling and Data architecture concepts & well versed with Project management tools and Agile Methodology. Sound knowledge of data governance principles and tools (alation/glue data quality, mesh), Capable of suggesting solution architecture for diverse technology applications. Additional Relevant Skills Given Below Are Highly Valued Experience working in financial services industry & working in various Settlements and Sub ledger functions like PNS, Stock Record and Settlements, PNL. Knowledge in BPS, IMPACT & Gloss products from Broadridge & creating ML model using python, Spark & Java. You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune. 
Purpose of the role To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes to ensure that all data is accurate, accessible, and secure. Accountabilities Build and maintenance of data architectures pipelines that enable the transfer and processing of durable, complete and consistent data. Design and implementation of data warehoused and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Development of processing and analysis algorithms fit for the intended data complexity and volumes. Collaboration with data scientist to build and deploy machine learning models. Vice President Expectations To contribute or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/ processes; deliver continuous improvements and escalate breaches of policies/procedures.. If managing a team, they define jobs and responsibilities, planning for the department’s future needs and operations, counselling employees on performance and contributing to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements.. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.. OR for an individual contributor, they will be a subject matter expert within own discipline and will guide technical direction. 
They will lead collaborative, multi-year assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions. Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment. Manage and mitigate risks through assessment, in support of the control and governance agenda. Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does. Demonstrate comprehensive understanding of the organisation’s functions to contribute to achieving the goals of the business. Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and business strategies. Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions. Adopt and include the outcomes of extensive research in problem solving processes. Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 4 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Step into the role of a Senior Data Engineer. At Barclays, innovation isn’t encouraged, it’s expected. As a Senior Data Engineer you will build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure. To be a successful Senior Data Engineer, you should have experience with: Hands-on experience working with large-scale data platforms and developing cloud solutions on the AWS data platform, with a proven track record of driving business success. Strong understanding of AWS and distributed computing paradigms, with the ability to design and develop data ingestion programs that process large data sets in batch mode using Glue, Lambda, S3, Redshift, Snowflake and Databricks. Ability to develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies. Hands-on programming experience in Python and PySpark. Understanding of DevOps pipelines using Jenkins and GitLab; strength in data modelling and data architecture concepts; well versed with project management tools and Agile methodology. Sound knowledge of data governance principles and tools (Alation, Glue Data Quality, data mesh), and capable of suggesting solution architectures for diverse technology applications. Additional Relevant Skills Given Below Are Highly Valued Experience working in the financial services industry and in various Settlements and Sub-ledger functions such as PNS, Stock Record and Settlements, and PNL. Knowledge of the BPS, IMPACT and Gloss products from Broadridge, and of creating ML models using Python, Spark and Java. You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune.
Purpose of the role To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes, to ensure that all data is accurate, accessible, and secure. Accountabilities Build and maintain data architectures and pipelines that enable the transfer and processing of durable, complete and consistent data. Design and implement data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Develop processing and analysis algorithms fit for the intended data complexity and volumes. Collaborate with data scientists to build and deploy machine learning models. Vice President Expectations To contribute to or set strategy, drive requirements and make recommendations for change. Plan resources, budgets, and policies; manage and maintain policies/processes; deliver continuous improvements and escalate breaches of policies/procedures. If managing a team, they define jobs and responsibilities, plan for the department’s future needs and operations, counsel employees on performance and contribute to employee pay decisions/changes. They may also lead a number of specialists to influence the operations of a department, in alignment with strategic as well as tactical priorities, while balancing short and long term goals and ensuring that budgets and schedules meet corporate requirements. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. Alternatively, as an individual contributor, they will be a subject matter expert within their own discipline and will guide technical direction.
They will lead collaborative, multi-year assignments, guide team members through structured assignments, and identify the need to include other areas of specialisation to complete assignments. They will train, guide and coach less experienced specialists and provide information affecting long term profits, organisational risks and strategic decisions. Advise key stakeholders, including functional leadership teams and senior management, on functional and cross-functional areas of impact and alignment. Manage and mitigate risks through assessment, in support of the control and governance agenda. Demonstrate leadership and accountability for managing risk and strengthening controls in relation to the work your team does. Demonstrate comprehensive understanding of the organisation’s functions to contribute to achieving the goals of the business. Collaborate with other areas of work and business-aligned support areas to keep up to speed with business activity and business strategies. Create solutions based on sophisticated analytical thought, comparing and selecting complex alternatives. In-depth analysis with interpretative thinking will be required to define problems and develop innovative solutions. Adopt and include the outcomes of extensive research in problem solving processes. Seek out, build and maintain trusting relationships and partnerships with internal and external stakeholders in order to accomplish key business objectives, using influencing and negotiating skills to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
Posted 4 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Data Engineering Lead. This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences. You’ll be simplifying the bank by developing innovative data-driven solutions, aspiring to be commercially successful through insight, and keeping our customers’ and the bank’s data safe and secure. Participating actively in the data engineering community, you’ll deliver opportunities to support our strategic direction while building your network across the bank. We’re recruiting for multiple roles across a range of levels, up to and including experienced managers. What you'll do We’ll look to you to demonstrate technical and people leadership to drive value for the customer through modelling, sourcing and data transformation. You’ll be working closely with core technology and architecture teams to deliver strategic data solutions, while driving Agile and DevOps adoption in the delivery of data engineering and leading a team of data engineers.
We’ll also expect you to be: Working with Data Scientists and Analytics Labs to translate analytical model code into well-tested, production-ready code. Helping to define common coding standards and model performance monitoring best practices. Owning and delivering the automation of data engineering pipelines through the removal of manual stages. Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development. Educating and embedding new data techniques into the business through role modelling, training and experiment design oversight. Leading and delivering data engineering strategies to build a scalable data architecture and a customer feature-rich dataset for data scientists. Leading and developing solutions for streaming data ingestion and transformations in line with the streaming strategy. The skills you'll need To be successful in this role, you’ll need to be an expert-level programmer and data engineer with a qualification in Computer Science or Software Engineering. You’ll also need a strong understanding of data usage and dependencies with wider teams and the end customer, as well as extensive experience in extracting value and features from large-scale data.
We'll also expect you to have knowledge of big data platforms like Snowflake, AWS Redshift, Postgres, MongoDB, Neo4j and Hadoop, along with good knowledge of cloud technologies such as Amazon Web Services, Google Cloud Platform and Microsoft Azure. You’ll also demonstrate: Knowledge of core computer science concepts such as common data structures and algorithms, profiling and optimisation. An understanding of machine learning, information retrieval or recommendation systems. Good working knowledge of CI/CD tools. Knowledge of programming languages used in data engineering, such as Python or PySpark, SQL, Java, and Scala. An understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, StreamSets and Apache Airflow. Knowledge of messaging, event or streaming technology such as Apache Kafka. Experience of ETL technical design, automated data quality testing, QA and documentation, data warehousing, data modelling and data wrangling. Extensive experience using RDBMS, ETL pipelines, Python, Hadoop and SQL.
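The streaming ingestion and transformation work this role describes is typically built on Kafka plus a streaming engine such as Spark Structured Streaming; the central idea, bucketing events into tumbling windows and aggregating per window, can be sketched engine-free in plain Python (the 60-second window size and event shape are illustrative assumptions):

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # illustrative tumbling-window size

def window_start(epoch_seconds: int) -> int:
    """Map an event timestamp to the start of its tumbling window."""
    return epoch_seconds - (epoch_seconds % WINDOW_SECONDS)

def aggregate(events: list[tuple[int, float]]) -> dict[int, float]:
    """Sum event values per window, as a streaming engine would per micro-batch."""
    totals: dict[int, float] = defaultdict(float)
    for ts, value in events:
        totals[window_start(ts)] += value
    return dict(totals)

# (timestamp_seconds, value) pairs; 0 and 59 fall in the same window
events = [(0, 1.0), (59, 2.0), (60, 5.0), (125, 3.0)]
per_window = aggregate(events)
```

A production pipeline adds watermarking and late-data handling on top, but the window-key computation is exactly this modular arithmetic.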
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We’re Hiring: Consultant – Insights & Analytics at Chryselys Location: Hyderabad Department: Insights & Analytics Job Type: Full-time Reports To: Manager About Us Chryselys is a pharma analytics and business consulting company that delivers data-driven insights, leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. We specialize in digital technologies and advanced data science techniques that provide strategic and operational insights. Who We Are People - Our team of industry veterans, advisors and senior strategists have diverse backgrounds and have worked at top-tier companies. Quality - Our goal is to deliver the value of a big five consulting firm without the big five cost. Technology - Our solutions are business-centric and built on cloud-native technologies. Role Overview As a Field Force Operations Consultant at Chryselys, you will leverage your expertise in commercial model design, sales force sizing, territory alignment, and deployment to optimize field force operations and processes. You will work closely with cross-functional teams, including client stakeholders and analytics experts, to define execution KPIs, maximize sales impact, and deliver actionable insights through advanced reporting and dashboards. Your role will also involve segmentation and targeting, incentive compensation processes, and planning for call activities and non-personal promotions. With hands-on experience in tools like Qlik, Power BI, and Tableau, along with technologies such as SQL, you will ensure impactful storytelling and effective stakeholder management while supporting clients across the U.S. and Europe.
Key Responsibilities Capabilities and experience in field force operations and processes related to commercial model design and structure, sales force sizing and optimization, and territory alignment and deployment. Good understanding of commercial operations and analytics as a domain. Expertise with SF/FF datasets for creating dashboards and reports for multiple user personas. Ability to define FF execution and measurement KPIs to maximize sales impact. Understanding of and expertise in call activity planning and non-personal promotions. Good knowledge of segmentation & targeting and incentive compensation processes. Hands-on experience with tools like Qlik, Power BI and Tableau and technologies like Python and SQL. Stakeholder management abilities and storytelling skills. Experience working with pharma clients across the US and Europe. What You Bring Education: Bachelor's or master's degree in data science, statistics, computer science, engineering, or a related field with a strong academic record. Experience: 5-7 years of experience in field force operations, particularly in the pharmaceutical or healthcare industry, working with key datasets. Skills: Strong experience with SQL and cloud-based data processing environments such as AWS (Redshift, Athena, S3). Demonstrated ability to build data visualizations and communicate insights through tools like Power BI, Tableau, Qlik, QuickSight, or similar. Strong analytical skills, with experience in analogue analysis. Ability to manage multiple projects, prioritize tasks, and meet deadlines in a fast-paced environment. Excellent communication and presentation skills, with the ability to explain complex data science concepts to non-technical stakeholders. A strong problem-solving mindset, with the ability to adapt and innovate in a dynamic consulting environment. How To Apply Ready to make an impact?
Apply now by clicking [here] or visit our careers page at https://chryselys.com/chryselys-career/. Please include your resume and a cover letter detailing why you’re the perfect fit for this role. Equal Employment Opportunity Chryselys is proud to be an Equal Employment Opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Connect With Us Follow us for updates and more opportunities: https://www.linkedin.com/company/chryselys/mycompany/ Discover more about our team and culture: www.chryselys.com
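As a concrete illustration of the field-force execution KPIs this role mentions, call-plan attainment per territory is one of the simplest: actual HCP calls completed as a percentage of the call plan. A minimal sketch (territory names and targets are invented for the example):

```python
def call_attainment(actual_calls: dict[str, int],
                    target_calls: dict[str, int]) -> dict[str, float]:
    """Percent of planned calls completed per territory; skips zero targets."""
    return {
        terr: round(100 * actual_calls.get(terr, 0) / target, 1)
        for terr, target in target_calls.items()
        if target > 0
    }

targets = {"T-North": 200, "T-South": 150}
actuals = {"T-North": 180, "T-South": 165}
kpi = call_attainment(actuals, targets)
```

In practice this calculation would run in SQL or a BI tool over the SF/FF activity dataset, but the metric definition is the same ratio.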
Posted 4 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description Come be a part of a rapidly expanding $35 billion global business. At Amazon Business, a fast-growing startup passionate about building solutions, we set out every day to innovate and disrupt the status quo. We stand at the intersection of tech and retail in the B2B space, developing innovative purchasing and procurement solutions to help businesses and organizations thrive. At Amazon Business, we strive to be the most recognized and preferred strategic partner for smart business buying. Bring your insight, imagination and a healthy disregard for the impossible. Join us in building and celebrating the value of Amazon Business to buyers and sellers of all sizes and industries. Unlock your career potential. Key job responsibilities Own generating actionable insights through the development of metrics and dashboards. Analyze relevant business information, and uncover trends and correlations to develop insights that can materially improve our product and strategy decisions. Provide insights to our Canada business and Product Management teams as new initiatives are identified, prioritized, implemented and deployed. Develop clear communications for recommended actions. Establish new, scalable, efficient, automated processes for tracking and reporting on the progress of initiatives. About The Team We are the central Amazon Business Marketing Analytics (ABMA) team for the WW AB Marketing team. Our vision is to simplify and accelerate data-driven decision making for AB Marketing by providing cost-effective, easy and timely access to high-quality data. Our core responsibilities towards AB Marketing include: a) providing systems (infrastructure) and workflows that allow ingestion, storage, processing and querying of data; b) building ready-to-use datasets for easy and faster access to the data; c) automating standard business analysis, reporting and dashboarding; d) empowering the business with self-service tools for deep dives and insight-seeking.
The value of these capabilities is to enhance the speed of business decision-making by reducing the effort and time needed to access actionable data. Basic Qualifications 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc. Experience with data visualization using Tableau, QuickSight, or similar tools. Experience with one or more industry analytics visualization tools (e.g. Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g. t-test, chi-squared). Experience with a scripting language (e.g., Python, Java, or R). Preferred Qualifications Master's degree or advanced technical degree. Knowledge of data modeling and data pipeline design. Experience with statistical analysis and correlation analysis. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - Amazon Dev Center India - Hyderabad - A85 Job ID: A2930198
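The statistical methods listed in the qualifications, such as a two-sample t-test comparing a metric across buyer cohorts, reduce to a short calculation. A minimal sketch of Welch's t-statistic (the two sample groups are made-up data for illustration; degrees of freedom and p-values are left to a stats library in practice):

```python
import math

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's two-sample t-statistic (unequal variances allowed)."""
    def mean(xs: list[float]) -> float:
        return sum(xs) / len(xs)

    def var(xs: list[float]) -> float:
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    standard_error = math.sqrt(var(a) / len(a) + var(b) / len(b))
    return (mean(a) - mean(b)) / standard_error

control = [10.0, 12.0, 11.0, 13.0]
treatment = [14.0, 15.0, 13.0, 16.0]
t_stat = welch_t(treatment, control)
```

In day-to-day analytics this would be `scipy.stats.ttest_ind(..., equal_var=False)`, but the statistic itself is just this ratio of mean difference to pooled standard error.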
Posted 4 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
What You'll Do We are looking for an experienced Senior Data Engineer to join our Data Operations team. The ideal candidate will have expertise in Python, Snowflake, SQL, modern ETL tools, and business intelligence platforms such as Power BI. You will need experience integrating SaaS applications such as Salesforce, Zuora, and NetSuite using REST APIs. You will build and maintain data pipelines, develop data models, and ensure seamless data integrations that support business analytics and reporting. The role requires flexibility to collaborate in US time zones as needed. You will report to the Manager, Finance Applications. What Your Responsibilities Will Be Design, develop, and maintain scalable data pipelines and workflows using modern ETL tools and Python. Build and optimize SQL queries and data models on Snowflake to support analytics and reporting needs. Integrate with SaaS platforms such as Salesforce, Zuora, and NetSuite using APIs or native connectors. Develop and support dashboards and reports using Power BI and other reporting tools. Work with data analysts, business users, and other engineering teams to gather requirements and deliver high-quality solutions. Ensure data quality, accuracy, and consistency across systems and datasets. Write clean, well-documented, and testable code with a focus on performance and reliability. Participate in peer code reviews and contribute to best practices in data engineering. Be available for meetings and collaboration in US time zones as required. What You’ll Need To Be Successful You have 5+ years' experience in the data engineering field, with deep SQL knowledge. Experience with Snowflake, SQL, Python, AWS services, Power BI and ETL tools (dbt, Airflow) is a must. Proficiency in Python for data transformation and scripting. Proficiency in writing complex SQL queries and stored procedures. Experience with data warehouse, data modeling and ETL design concepts.
Have integrated SaaS systems like Salesforce, Zuora and NetSuite along with relational databases, REST APIs and FTP/SFTP. Knowledge of AWS technologies (EC2, S3, RDS, Redshift, etc.). Excellent communication skills, with the ability to translate technical issues for non-technical stakeholders. Flexibility to work during US business hours as required for team meetings and collaboration. How We’ll Take Care Of You Total Rewards In addition to a great compensation package, paid time off, and paid parental leave, many Avalara employees are eligible for bonuses. Health & Wellness Benefits vary by location but generally include private medical, life, and disability insurance. Inclusive culture and diversity Avalara strongly supports diversity, equity, and inclusion, and is committed to integrating them into our business practices and our organizational culture. We also have a total of 8 employee-run resource groups, each with senior leadership and exec sponsorship. What You Need To Know About Avalara We’re Avalara. We’re defining the relationship between tax and tech. We’ve already built an industry-leading cloud compliance platform, processing nearly 40 billion customer API calls and over 5 million tax returns a year, and this year we became a billion-dollar business. Our growth is real, and we’re not slowing down until we’ve achieved our mission - to be part of every transaction in the world. We’re bright, innovative, and disruptive, like the orange we love to wear. It captures our quirky spirit and optimistic mindset. It shows off the culture we’ve designed, that empowers our people to win. Ownership and achievement go hand in hand here. We instill passion in our people through the trust we place in them. We’ve been different from day one. Join us, and your career will be too. We’re An Equal Opportunity Employer Supporting diversity and inclusion is a cornerstone of our company — we don’t want people to fit into our culture, but to enrich it.
All qualified candidates will receive consideration for employment without regard to race, color, creed, religion, age, gender, national origin, disability, sexual orientation, US veteran status, or any other factor protected by law. If you require any reasonable adjustments during the recruitment process, please let us know.
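The SaaS integrations this posting emphasizes (Salesforce, Zuora, NetSuite over REST APIs) usually come down to walking a paginated endpoint until a cursor is exhausted. The control flow can be sketched with a stubbed fetcher; the `records`/`next_page` response shape is an assumption for illustration, not any vendor's real schema:

```python
def fetch_page(page: int) -> dict:
    """Stub for an HTTP GET against a paginated SaaS endpoint."""
    data = {1: [{"id": "A"}, {"id": "B"}], 2: [{"id": "C"}]}
    return {"records": data.get(page, []),
            "next_page": page + 1 if page < 2 else None}

def ingest_all() -> list[dict]:
    """Drain every page into one record list, following next_page cursors."""
    records: list[dict] = []
    page: int | None = 1
    while page is not None:
        resp = fetch_page(page)
        records.extend(resp["records"])
        page = resp["next_page"]
    return records

rows = ingest_all()
```

A real pipeline swaps `fetch_page` for an authenticated `requests.get` call (with retry and rate-limit handling) and lands the records in a staging table before modeling.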
Posted 4 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Overview Viraaj HR Solutions is a dynamic recruitment agency dedicated to connecting talented individuals with leading organizations across India. We focus on delivering exceptional talent solutions while fostering a culture of collaboration, integrity, and excellence. Our mission is to empower businesses by providing them with the right skill sets to succeed in their industry, while promoting growth and enhancing employee satisfaction. We pride ourselves on our professional approach to hiring and talent management. Job Title: AWS Data Engineer Location: On-site in India We are seeking a skilled AWS Data Engineer to join our team. In this role, you will be responsible for designing and implementing data solutions on AWS infrastructure. You will work closely with cross-functional teams to ensure data integrity and availability, while also developing and optimizing data pipelines to support analytics and reporting needs. Role Responsibilities Design, develop and implement scalable data architectures on AWS. Develop ETL processes for data ingestion from various sources. Optimize AWS data pipelines for performance and cost efficiency. Collaborate with data analysts to understand data requirements. Create and maintain data models for efficient query performance. Conduct data quality assessments and implement necessary improvements. Implement data security and compliance measures on AWS. Maintain documentation for all systems and processes. Monitor and troubleshoot data-related issues on AWS. Integrate third-party services and APIs for data retrieval. Train and mentor junior data engineers. Participate in code reviews and maintain software engineering best practices. Stay up-to-date with AWS features and best practices in data engineering. Contribute to refining data engineering standards and processes. Support migration of legacy data systems to AWS. Qualifications Bachelor’s degree in Computer Science or related field. 
3+ years of experience in data engineering or similar roles. Extensive experience with AWS services (S3, Redshift, Lambda, Glue). Strong proficiency in SQL and relational databases. Experience with Python and/or Java for data processing. Knowledge of Big Data technologies like Hadoop or Spark. Familiarity with data warehousing concepts and design. Experience in data modeling and database design. Strong analytical and problem-solving skills. Ability to work collaboratively in a team environment. Excellent communication skills, both verbal and written. Experience with analytics tools and reporting. Understanding of data privacy and security regulations. Hands-on experience with CI/CD processes. Ability to adapt to fast-paced environments. Certifications in AWS (e.g., AWS Certified Data Analytics) are a plus. Skills: data modeling, Java, data analysis, Scala, AWS, CI/CD, Python, Hadoop, SQL proficiency, problem solving, data engineering, SQL, big data technologies, data warehousing, ETL, Spark
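One responsibility above, conducting data quality assessments, typically begins with completeness and uniqueness checks before anything more sophisticated. A minimal sketch (the key and column names are illustrative assumptions):

```python
def quality_report(rows: list[dict], key: str, required: list[str]) -> dict:
    """Count null required fields and duplicate keys in a batch of records."""
    nulls = sum(1 for r in rows for c in required if r.get(c) in (None, ""))
    seen: set = set()
    dupes = 0
    for r in rows:
        k = r.get(key)
        dupes += k in seen  # True counts as 1
        seen.add(k)
    return {"rows": len(rows), "null_values": nulls, "duplicate_keys": dupes}

batch = [
    {"id": 1, "email": "a@x.com"},
    {"id": 1, "email": ""},      # duplicate key, null email
    {"id": 2, "email": None},    # null email
]
report = quality_report(batch, key="id", required=["email"])
```

On AWS the same checks would usually be expressed as Glue Data Quality rules or SQL assertions, with the report driving an alert rather than a return value.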
Posted 4 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Overview Viraaj HR Solutions is a dynamic recruitment agency dedicated to connecting talented individuals with leading organizations across India. We focus on delivering exceptional talent solutions while fostering a culture of collaboration, integrity, and excellence. Our mission is to empower businesses by providing them with the right skill sets to succeed in their industry, while promoting growth and enhancing employee satisfaction. We pride ourselves on our professional approach to hiring and talent management. Job Title: AWS Data Engineer Location: On-site in India We are seeking a skilled AWS Data Engineer to join our team. In this role, you will be responsible for designing and implementing data solutions on AWS infrastructure. You will work closely with cross-functional teams to ensure data integrity and availability, while also developing and optimizing data pipelines to support analytics and reporting needs. Role Responsibilities Design, develop and implement scalable data architectures on AWS. Develop ETL processes for data ingestion from various sources. Optimize AWS data pipelines for performance and cost efficiency. Collaborate with data analysts to understand data requirements. Create and maintain data models for efficient query performance. Conduct data quality assessments and implement necessary improvements. Implement data security and compliance measures on AWS. Maintain documentation for all systems and processes. Monitor and troubleshoot data-related issues on AWS. Integrate third-party services and APIs for data retrieval. Train and mentor junior data engineers. Participate in code reviews and maintain software engineering best practices. Stay up-to-date with AWS features and best practices in data engineering. Contribute to refining data engineering standards and processes. Support migration of legacy data systems to AWS. Qualifications Bachelor’s degree in Computer Science or related field. 
3+ years of experience in data engineering or similar roles. Extensive experience with AWS services (S3, Redshift, Lambda, Glue). Strong proficiency in SQL and relational databases. Experience with Python and/or Java for data processing. Knowledge of Big Data technologies like Hadoop or Spark. Familiarity with data warehousing concepts and design. Experience in data modeling and database design. Strong analytical and problem-solving skills. Ability to work collaboratively in a team environment. Excellent communication skills, both verbal and written. Experience with analytics tools and reporting. Understanding of data privacy and security regulations. Hands-on experience with CI/CD processes. Ability to adapt to fast-paced environments. Certifications in AWS (e.g., AWS Certified Data Analytics) are a plus. Skills: data modeling, Java, data analysis, Scala, AWS, CI/CD, Python, Hadoop, SQL proficiency, problem solving, data engineering, SQL, big data technologies, data warehousing, ETL, Spark
Posted 4 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview Viraaj HR Solutions is a dynamic recruitment agency dedicated to connecting talented individuals with leading organizations across India. We focus on delivering exceptional talent solutions while fostering a culture of collaboration, integrity, and excellence. Our mission is to empower businesses by providing them with the right skill sets to succeed in their industry, while promoting growth and enhancing employee satisfaction. We pride ourselves on our professional approach to hiring and talent management. Job Title: AWS Data Engineer Location: On-site in India We are seeking a skilled AWS Data Engineer to join our team. In this role, you will be responsible for designing and implementing data solutions on AWS infrastructure. You will work closely with cross-functional teams to ensure data integrity and availability, while also developing and optimizing data pipelines to support analytics and reporting needs. Role Responsibilities Design, develop and implement scalable data architectures on AWS. Develop ETL processes for data ingestion from various sources. Optimize AWS data pipelines for performance and cost efficiency. Collaborate with data analysts to understand data requirements. Create and maintain data models for efficient query performance. Conduct data quality assessments and implement necessary improvements. Implement data security and compliance measures on AWS. Maintain documentation for all systems and processes. Monitor and troubleshoot data-related issues on AWS. Integrate third-party services and APIs for data retrieval. Train and mentor junior data engineers. Participate in code reviews and maintain software engineering best practices. Stay up-to-date with AWS features and best practices in data engineering. Contribute to refining data engineering standards and processes. Support migration of legacy data systems to AWS. Qualifications Bachelor’s degree in Computer Science or related field. 
3+ years of experience in data engineering or similar roles. Extensive experience with AWS services (S3, Redshift, Lambda, Glue). Strong proficiency in SQL and relational databases. Experience with Python and/or Java for data processing. Knowledge of Big Data technologies like Hadoop or Spark. Familiarity with data warehousing concepts and design. Experience in data modeling and database design. Strong analytical and problem-solving skills. Ability to work collaboratively in a team environment. Excellent communication skills, both verbal and written. Experience with analytics tools and reporting. Understanding of data privacy and security regulations. Hands-on experience with CI/CD processes. Ability to adapt to fast-paced environments. Certifications in AWS (e.g., AWS Certified Data Analytics) are a plus. Skills: data modeling, Java, data analysis, Scala, AWS, CI/CD, Python, Hadoop, SQL proficiency, problem solving, data engineering, SQL, big data technologies, data warehousing, ETL, Spark
Posted 4 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
The Role The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 9 to 11 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud-native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills An inclination to mentor; an ability to lead and deliver medium-sized components independently Technical Skills (Must Have) ETL: Hands-on experience of building data pipelines. 
Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Expertise around Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure to working with Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Experience of using a Job scheduler e.g., Autosys. Exposure to Business Intelligence tools e.g., Tableau, Power BI Certification on any one or more of the above topics would be an advantage. 
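The "Data Quality & Controls" skill above — validation, cleansing and controls — can be sketched in a few lines. The field names and rules below are invented purely for illustration; real controls would be configuration-driven and feed a data-quality dashboard.

```python
import re
from datetime import datetime

def _is_iso_date(v):
    """True if v parses as an ISO YYYY-MM-DD date."""
    try:
        datetime.strptime(str(v), "%Y-%m-%d")
        return True
    except ValueError:
        return False

# Hypothetical per-field validation rules (illustrative only).
RULES = {
    "trade_id": lambda v: bool(re.fullmatch(r"T\d{6}", str(v))),
    "notional": lambda v: isinstance(v, (int, float)) and v > 0,
    "trade_date": _is_iso_date,
}

def validate(record):
    """Return the list of field names that failed their rule."""
    return [field for field, rule in RULES.items() if not rule(record.get(field))]

good = {"trade_id": "T123456", "notional": 1_000_000, "trade_date": "2024-06-30"}
bad = {"trade_id": "X1", "notional": -5, "trade_date": "30/06/2024"}
print(validate(good))  # []
print(validate(bad))   # ['trade_id', 'notional', 'trade_date']
```

In a production control framework, failed records would typically be quarantined and reported rather than silently dropped, so that completeness as well as accuracy can be evidenced.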
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 4 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company Overview Viraaj HR Solutions is a dynamic recruitment agency dedicated to connecting talented individuals with leading organizations across India. We focus on delivering exceptional talent solutions while fostering a culture of collaboration, integrity, and excellence. Our mission is to empower businesses by providing them with the right skill sets to succeed in their industry, while promoting growth and enhancing employee satisfaction. We pride ourselves on our professional approach to hiring and talent management. Job Title: AWS Data Engineer Location: On-site in India We are seeking a skilled AWS Data Engineer to join our team. In this role, you will be responsible for designing and implementing data solutions on AWS infrastructure. You will work closely with cross-functional teams to ensure data integrity and availability, while also developing and optimizing data pipelines to support analytics and reporting needs. Role Responsibilities Design, develop and implement scalable data architectures on AWS. Develop ETL processes for data ingestion from various sources. Optimize AWS data pipelines for performance and cost efficiency. Collaborate with data analysts to understand data requirements. Create and maintain data models for efficient query performance. Conduct data quality assessments and implement necessary improvements. Implement data security and compliance measures on AWS. Maintain documentation for all systems and processes. Monitor and troubleshoot data-related issues on AWS. Integrate third-party services and APIs for data retrieval. Train and mentor junior data engineers. Participate in code reviews and maintain software engineering best practices. Stay up-to-date with AWS features and best practices in data engineering. Contribute to refining data engineering standards and processes. Support migration of legacy data systems to AWS. Qualifications Bachelor’s degree in Computer Science or related field. 
3+ years of experience in data engineering or similar roles. Extensive experience with AWS services (S3, Redshift, Lambda, Glue). Strong proficiency in SQL and relational databases. Experience with Python and/or Java for data processing. Knowledge of Big Data technologies like Hadoop or Spark. Familiarity with data warehousing concepts and design. Experience in data modeling and database design. Strong analytical and problem-solving skills. Ability to work collaboratively in a team environment. Excellent communication skills, both verbal and written. Experience with analytics tools and reporting. Understanding of data privacy and security regulations. Hands-on experience with CI/CD processes. Ability to adapt to fast-paced environments. Certifications in AWS (e.g., AWS Certified Data Analytics) are a plus. Skills: data modeling, java, data analysis, scala, aws, ci/cd, python, hadoop, sql proficiency, problem solving, data engineering, sql, big data technologies, data warehousing, etl, spark
Posted 4 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview Viraaj HR Solutions is a dynamic recruitment agency dedicated to connecting talented individuals with leading organizations across India. We focus on delivering exceptional talent solutions while fostering a culture of collaboration, integrity, and excellence. Our mission is to empower businesses by providing them with the right skill sets to succeed in their industry, while promoting growth and enhancing employee satisfaction. We pride ourselves on our professional approach to hiring and talent management. Job Title: AWS Data Engineer Location: On-site in India We are seeking a skilled AWS Data Engineer to join our team. In this role, you will be responsible for designing and implementing data solutions on AWS infrastructure. You will work closely with cross-functional teams to ensure data integrity and availability, while also developing and optimizing data pipelines to support analytics and reporting needs. Role Responsibilities Design, develop and implement scalable data architectures on AWS. Develop ETL processes for data ingestion from various sources. Optimize AWS data pipelines for performance and cost efficiency. Collaborate with data analysts to understand data requirements. Create and maintain data models for efficient query performance. Conduct data quality assessments and implement necessary improvements. Implement data security and compliance measures on AWS. Maintain documentation for all systems and processes. Monitor and troubleshoot data-related issues on AWS. Integrate third-party services and APIs for data retrieval. Train and mentor junior data engineers. Participate in code reviews and maintain software engineering best practices. Stay up-to-date with AWS features and best practices in data engineering. Contribute to refining data engineering standards and processes. Support migration of legacy data systems to AWS. Qualifications Bachelor’s degree in Computer Science or related field. 
3+ years of experience in data engineering or similar roles. Extensive experience with AWS services (S3, Redshift, Lambda, Glue). Strong proficiency in SQL and relational databases. Experience with Python and/or Java for data processing. Knowledge of Big Data technologies like Hadoop or Spark. Familiarity with data warehousing concepts and design. Experience in data modeling and database design. Strong analytical and problem-solving skills. Ability to work collaboratively in a team environment. Excellent communication skills, both verbal and written. Experience with analytics tools and reporting. Understanding of data privacy and security regulations. Hands-on experience with CI/CD processes. Ability to adapt to fast-paced environments. Certifications in AWS (e.g., AWS Certified Data Analytics) are a plus. Skills: data modeling, java, data analysis, scala, aws, ci/cd, python, hadoop, sql proficiency, problem solving, data engineering, sql, big data technologies, data warehousing, etl, spark
Posted 4 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role The Data Engineer is accountable for developing high-quality data products to support the Bank’s regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 9 to 11 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud-native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills An inclination to mentor; an ability to lead and deliver medium-sized components independently Technical Skills (Must Have) ETL: Hands-on experience of building data pipelines. 
Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Expertise around Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Data Governance: A strong grasp of principles and practice including data quality, security, privacy and compliance Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure to working with Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Experience of using a Job scheduler e.g., Autosys. Exposure to Business Intelligence tools e.g., Tableau, Power BI Certification on any one or more of the above topics would be an advantage. 
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 4 days ago
13.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And, over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description We are looking for a BI Architect with 13+ years of experience to lead the design and implementation of scalable BI and data architecture solutions. The role involves driving data modeling, cloud-based pipelines, migration projects, and data lake initiatives using technologies like AWS, Kafka, Spark, SQL, and Python. Experience with EDW modeling and architecture is a strong plus. Key Responsibilities ● Design and develop scalable BI and data models to support enterprise analytics. ● Lead data platform migration from legacy BI systems to modern cloud architectures. ● Architect and manage data lakes, batch and streaming pipelines, and real-time integrations via Kafka and APIs. ● Support data governance, quality, and access control initiatives. 
● Partner with data engineers, analysts, and business stakeholders to deliver reliable, high-performing data solutions. ● Contribute to architecture decisions and platform scalability planning. Qualifications ● Should have 13 - 19 years of relevant experience. ● 10+ years in BI, data engineering, or data architecture roles. ● Proficiency in SQL, Python, Apache Spark, and Kafka. ● Strong hands-on experience with AWS data services (e.g., S3, Redshift, Glue, EMR). ● Track record of leading data migration and modernization projects. ● Solid understanding of data governance, security, and scalable pipeline design. ● Excellent collaboration and communication skills. Good to Have ● Experience with enterprise data warehouse (EDW) modeling and architecture. ● Familiarity with BI tools like Power BI, Tableau, Looker, or QuickSight. ● Knowledge of lakehouse, data mesh, or modern data stack concepts. Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
Posted 4 days ago
13.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Description Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option. Freshworks. With a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks’ customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And, over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description We are looking for a BI Architect with 13+ years of experience to lead the design and implementation of scalable BI and data architecture solutions. The role involves driving data modeling, cloud-based pipelines, migration projects, and data lake initiatives using technologies like AWS, Kafka, Spark, SQL, and Python. Experience with EDW modeling and architecture is a strong plus. Key Responsibilities ● Design and develop scalable BI and data models to support enterprise analytics. ● Lead data platform migration from legacy BI systems to modern cloud architectures. ● Architect and manage data lakes, batch and streaming pipelines, and real-time integrations via Kafka and APIs. ● Support data governance, quality, and access control initiatives. 
● Partner with data engineers, analysts, and business stakeholders to deliver reliable, high-performing data solutions. ● Contribute to architecture decisions and platform scalability planning. Qualifications ● Should have 13 - 19 years of relevant experience. ● 10+ years in BI, data engineering, or data architecture roles. ● Proficiency in SQL, Python, Apache Spark, and Kafka. ● Strong hands-on experience with AWS data services (e.g., S3, Redshift, Glue, EMR). ● Track record of leading data migration and modernization projects. ● Solid understanding of data governance, security, and scalable pipeline design. ● Excellent collaboration and communication skills. Good to Have ● Experience with enterprise data warehouse (EDW) modeling and architecture. ● Familiarity with BI tools like Power BI, Tableau, Looker, or QuickSight. ● Knowledge of lakehouse, data mesh, or modern data stack concepts. Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
Posted 4 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description United's Kinective Media Data Engineering team designs, develops, and maintains massively scalable ad-technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world. Job Overview And Responsibilities The Data Engineering organization is responsible for driving data-driven insights & innovation to support the data needs for commercial projects with a digital focus. The Data Engineer will partner with various teams to define and execute data acquisition, transformation and processing, and make data actionable for operational and analytics initiatives that create sustainable revenue and share growth. Execute unit tests and validate expected results to ensure accuracy & integrity of data and applications through analysis, coding, writing clear documentation and problem resolution. 
This role will also drive the adoption of data processing and analysis within the AWS environment and help cross-train other members of the team. Leverage strategic and analytical skills to understand and solve customer and business-centric questions. Coordinate and guide cross-functional projects that involve team members across all areas of the enterprise, vendors, external agencies and partners. Leverage data from a variety of sources to develop data marts and insights that provide a comprehensive understanding of the business. Develop and implement innovative solutions leading to automation. Use Agile methodologies to manage projects. Mentor and train junior engineers. This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc. Qualifications Required BS/BA in computer science or related STEM field 2+ years of IT experience in software development 2+ years of development experience using Java, Python, Scala 2+ years of experience with Big Data technologies like PySpark, Hadoop, Hive, HBase, Kafka, NiFi 2+ years of experience with database systems like Redshift, MS SQL Server, Oracle, Teradata. 
Creative, driven, detail-oriented individuals who enjoy tackling tough problems with data and insights, and who have a natural curiosity and desire to solve problems, are encouraged to apply. Must be legally authorized to work in India for any employer without sponsorship. Must be fluent in English (written and spoken). Successful completion of interview required to meet job qualification. Reliable, punctual attendance is an essential function of the position. Preferred Masters in computer science or related STEM field Experience with cloud-based systems like AWS, Azure or Google Cloud Certified Developer / Architect on AWS Strong experience with continuous integration & delivery using Agile methodologies Data engineering experience with transportation/airline industry Strong problem-solving skills Strong knowledge in Big Data GGN00002011
Posted 4 days ago
15.0 years
0 Lacs
Delhi, India
On-site
About The Role We are seeking a highly experienced Principal Presales Architect with deep expertise in AWS cloud services to lead strategic engagements with enterprise customers. This role is at the intersection of technology leadership and customer engagement, requiring a deep understanding of IaaS, PaaS, SaaS, and data platform services, with a focus on delivering business value through cloud adoption and digital transformation. You will be a key contributor to the sales and solutioning lifecycle, working alongside business development, account executives, product, and engineering teams. This role also involves driving cloud-native architectures, conducting deep technical workshops, and influencing executive stakeholders. Key Responsibilities Presales & Customer Engagement Act as the technical lead in strategic sales opportunities, supporting cloud transformation deals across verticals. Design and present end-to-end cloud solutions tailored to client needs, with a focus on AWS architectures (compute, networking, storage, databases, analytics, security, and DevOps). Deliver technical presentations, POCs, and solution workshops to executive and technical stakeholders. Collaborate with sales teams to develop proposals, RFP responses, solution roadmaps, and TCO/ROI analysis. Drive early-stage discovery sessions to identify business objectives, technical requirements, and success metrics. Own the solution blueprint and ensure alignment across technical, business, and operational teams. Architecture & Technology Leadership Architect scalable, secure, and cost-effective solutions using AWS services including EC2, Lambda, S3, RDS, Redshift, EKS, and others. Lead design of data platforms and AI/ML pipelines, leveraging AWS services like Redshift, SageMaker, Glue, Athena, EMR, and integrating with 3rd party tools when needed. Evaluate and recommend multi-cloud integration strategies (Azure/GCP experience is a strong plus). 
Guide customers on cloud migration, modernization, DevOps, and CI/CD pipelines. Collaborate with product and delivery teams to align proposed solutions with delivery capabilities and innovations. Stay current with industry trends, emerging technologies, and AWS service releases, integrating new capabilities into customer solutions. Required Skills & Qualifications Technical Expertise 15+ years in enterprise IT or architecture roles, with 10+ years in cloud solutioning/presales, primarily focused on AWS. In-depth knowledge of AWS IaaS/PaaS/SaaS, including services across compute, storage, networking, databases, security, AI/ML, and observability. Hands-on experience in architecting and deploying data lake/data warehouse solutions using Redshift, Glue, Lake Formation, and other data ecosystem components. Proficiency in designing AI/ML solutions using SageMaker, Bedrock, TensorFlow, PyTorch, or equivalent frameworks. Understanding of multi-cloud architectures and hybrid cloud solutions; hands-on experience with Azure or GCP is an advantage. Strong command of solution architecture best practices, cost optimization, cloud security, and compliance frameworks. Presales & Consulting Skills Proven success in technical sales roles involving complex cloud solutions and data platforms. Strong ability to influence C-level executives and technical stakeholders. Excellent communication, presentation, and storytelling skills to articulate complex technical solutions in business terms. Experience with proposal development, RFx responses, and pricing strategy. Strong analytical and problem-solving capabilities with a customer-first mindset.
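The TCO/ROI analysis this role calls for can be reduced to a toy calculation. All figures below are invented assumptions for illustration, not real AWS or on-premises pricing:

```python
def three_year_tco(upfront, monthly, months=36):
    """Total cost of ownership over a fixed horizon (inputs are assumptions)."""
    return upfront + monthly * months

# Hypothetical figures: on-prem has a large upfront hardware cost and low
# monthly run cost; cloud has no upfront cost but a higher monthly spend.
on_prem = three_year_tco(upfront=300_000, monthly=4_000)  # 444,000
cloud = three_year_tco(upfront=0, monthly=9_500)          # 342,000

savings = on_prem - cloud
roi_pct = 100 * savings / cloud
print(f"3-year savings: {savings}, ROI vs cloud spend: {roi_pct:.1f}%")
```

Real analyses would also model discount structures, staffing, migration cost, and the time value of money, but the comparison skeleton is the same.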
Posted 4 days ago
The job market for Redshift professionals in India is growing rapidly as more companies adopt cloud data warehousing solutions. Redshift, a powerful data warehouse service provided by Amazon Web Services, is in high demand due to its scalability, performance, and cost-effectiveness. Job seekers with expertise in Redshift can find a plethora of opportunities in various industries across the country.
The average salary range for Redshift professionals in India varies based on experience and location. Entry-level positions can expect a salary in the range of INR 6-10 lakhs per annum, while experienced professionals can earn upwards of INR 20 lakhs per annum.
In the field of Redshift, a typical career path may include roles such as: - Junior Developer - Data Engineer - Senior Data Engineer - Tech Lead - Data Architect
Apart from expertise in Redshift, proficiency in the following skills can be beneficial: - SQL - ETL Tools - Data Modeling - Cloud Computing (AWS) - Python/R Programming
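Since Redshift's SQL dialect is PostgreSQL-derived, the warehouse-style queries these roles expect can be practised locally. The sketch below uses Python's built-in sqlite3 module (which also supports window functions) only as a stand-in; the table and figures are invented:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sales(region TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('North', 100), ('North', 250), ('South', 400), ('South', 50);
""")

# Warehouse-style query: per-region totals plus each region's share of the
# grand total, computed with a window function over the grouped aggregate.
rows = con.execute("""
    SELECT region,
           SUM(amount) AS region_total,
           SUM(amount) * 100.0 / SUM(SUM(amount)) OVER () AS pct_of_total
    FROM sales
    GROUP BY region
    ORDER BY region
""").fetchall()

for region, total, pct in rows:
    print(f"{region}: {total} ({pct:.0f}%)")
```

The same query runs unchanged on Redshift or PostgreSQL; only the connection layer differs.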
As the demand for redshift professionals continues to rise in India, job seekers should focus on honing their skills and knowledge in this area to stay competitive in the job market. By preparing thoroughly and showcasing their expertise, candidates can secure rewarding opportunities in this fast-growing field. Good luck with your job search!