
745 Amazon Redshift Jobs - Page 29

JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

9 - 14 years

1 - 4 Lacs

Hyderabad

Work from Office

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

ABOUT THE ROLE
Role Description
We are seeking a Reference Data Management Senior Analyst who, as a member of the Reference Data Product team in the Enterprise Data Management organization, will be responsible for managing and promoting the use of reference data, partnering with business Subject Matter Experts on the creation of vocabularies, taxonomies and ontologies, and developing analytic solutions using semantic technologies.

Roles & Responsibilities
- Work with the Reference Data Product Owner, external resources and other engineers as part of the product team
- Develop and maintain semantically appropriate concepts
- Identify and address conceptual gaps in both content and taxonomy
- Maintain ontology source vocabularies for new or edited codes
- Support product teams to help them leverage taxonomic solutions
- Analyze data from public and internal datasets
- Develop a data model/schema for each taxonomy
- Create taxonomies in the Semaphore Ontology Editor
- Bulk-import data templates into Semaphore to add or update terms in taxonomies
- Prepare SPARQL queries to generate ad hoc reports
- Perform gap analysis on current and updated data
- Maintain taxonomies in Semaphore through the change management process
- Develop and optimize automated data ingestion pipelines in Python/PySpark when APIs are available
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Identify and resolve complex data-related challenges
- Participate in sprint planning meetings and provide estimates for technical implementation

Basic Qualifications and Experience
Master's degree with 6 years of experience in Business, Engineering, IT or a related field; OR Bachelor's degree with 8 years of experience in Business, Engineering, IT or a related field; OR Diploma with 9+ years of experience in Business, Engineering, IT or a related field

Functional Skills:
Must-Have Skills:
- Knowledge of controlled vocabularies, classification, ontologies and taxonomies
- Experience in ontology development using Semaphore or a similar tool
- Hands-on experience writing SPARQL queries on graph data
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Understanding of data modeling, data warehousing, and data integration concepts

Good-to-Have Skills:
- Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
- Experience using cloud services such as AWS, Azure or GCP
- Experience working in a product team environment
- Knowledge of Python/R, Databricks, and cloud data platforms
- Knowledge of NLP (Natural Language Processing) and AI (Artificial Intelligence) for extracting and standardizing controlled vocabularies
- Strong understanding of data governance frameworks, tools, and best practices

Professional Certifications
- Databricks certification preferred
- SAFe® Practitioner certification preferred
- Any data analysis certification (SQL, Python)
- Any cloud certification (AWS or Azure)

Soft Skills:
- Strong analytical abilities to assess and improve master data processes and solutions
- Excellent verbal and written communication skills, with the ability to convey complex data concepts clearly to technical and non-technical stakeholders
- Effective problem-solving skills to address data-related issues and implement scalable solutions
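The taxonomy responsibilities above (broader/narrower concepts, gap analysis) can be sketched in miniature. This is an illustrative sketch only, not Amgen's actual tooling: the terms and the child-to-parent mapping are invented, and a plain dict stands in for an ontology store such as Semaphore.

```python
# Illustrative only: a tiny controlled vocabulary as a child -> parent map.
# A real taxonomy would live in an ontology editor (e.g. Semaphore) and be
# queried with SPARQL; the dict and the terms here are invented.

TAXONOMY = {
    "monoclonal antibody": "biologic",   # term: broader (parent) term
    "biologic": "therapeutic",
    "small molecule": "therapeutic",
    "therapeutic": None,                 # None marks a top concept
}

def broader_terms(term, taxonomy):
    """Return all broader (ancestor) terms of `term`, nearest first."""
    ancestors = []
    parent = taxonomy.get(term)
    while parent is not None:
        ancestors.append(parent)
        parent = taxonomy.get(parent)
    return ancestors

print(broader_terms("monoclonal antibody", TAXONOMY))
# ['biologic', 'therapeutic']
```

In SPARQL terms, this walk corresponds to following `skos:broader` links transitively.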
Ability to work effectively with global, virtual teams EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 2 months ago


1 - 4 years

2 - 6 Lacs

Hyderabad

Work from Office

Role Description
In this vital role as Specialist IS Engineer, you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. You will maintain scalable software solutions that address complex business needs, with a focus on building robust software applications, integrations, and reporting. This role ensures system performance and availability while minimizing downtime through automation and proactive incident management.

The ideal candidate will collaborate with product owners, architects, and engineers to design, implement and manage a next-generation metrics engine and data governance capability, including metadata and reference data management for analytics. Additional responsibilities include designing interfaces, workflows, and process models, and deploying integrations in development and production environments, ensuring compliance and operational excellence. This position is perfect for a collaborative, detail-oriented professional passionate about enhancing clinical operations through advanced engineering and integrations.

Roles & Responsibilities
- Collaborate closely with product owners, data architects, business SMEs and engineers to develop and deliver high-quality solutions, enhancing and maintaining integrations across clinical systems
- Design and architect the next-generation metrics engine on modern infrastructure to support operational analytics, leveraging cloud technologies
- Design and implement a new data governance capability incorporating essential features such as metadata and reference data management for analytics
- Take ownership of complex software projects from conception to deployment, managing scope, risk, and timelines
- Utilize rapid prototyping skills to quickly translate concepts into working solutions and code
- Leverage modern AI/ML technologies to enable predictive analytics and NLP/NLQ capabilities and enhance the overall data analytics process
- Analyze functional and technical requirements of applications, translating them into software architecture and design specifications
- Develop and execute unit tests, integration tests, and other testing strategies to ensure software quality and reliability
- Integrate systems and platforms to ensure seamless data flow, functionality, and interoperability
- Provide ongoing support and maintenance for applications, ensuring smooth and efficient operation
- Collaborate on building advanced analytics capabilities to empower data-driven decision-making and operational insights
- Provide technical guidance and mentorship to junior developers, fostering team growth and skill development

Basic Qualifications and Experience
Doctorate degree; OR Master's degree with 4-6 years of experience in Computer Science, IT or a related field; OR Bachelor's degree with 6-8 years of experience in Computer Science, IT or a related field; OR Diploma with 10-12 years of experience in Computer Science, IT or a related field

Functional Skills:
Must-Have Skills:
- Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes)
- Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
- Hands-on experience in programming (e.g., SQL, C++, JavaScript, XML)
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning of big data processing

Good-to-Have Skills:
- Experience with software DevOps CI/CD tools such as Git, Jenkins, Linux, and shell scripting
- Experience with Spark, Hive, Kafka, Kinesis, Spark Streaming, and Airflow
- Experience with data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines
- Experience working in an agile environment (i.e., user stories, iterative development, etc.)

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Strong presentation and public speaking skills
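As a concrete illustration of the hands-on SQL skills listed above, here is a minimal, portable sketch. The table, columns, and data are invented for the example, and sqlite3 stands in for Redshift/Postgres since the SQL shown is plain ANSI:

```python
# Hedged sketch: per-group metrics of the kind a metrics engine might compute.
# sqlite3 is only a stand-in here for a warehouse RDBMS such as Redshift.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (site TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("HYD", "ok"), ("HYD", "fail"), ("BLR", "ok"), ("BLR", "ok")],
)

# Per-site failure counts via a conditional aggregate.
rows = conn.execute(
    """
    SELECT site,
           SUM(CASE WHEN status = 'fail' THEN 1 ELSE 0 END) AS failures,
           COUNT(*) AS total
    FROM events
    GROUP BY site
    ORDER BY site
    """
).fetchall()
print(rows)  # [('BLR', 0, 2), ('HYD', 1, 2)]
```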

Posted 2 months ago


3 - 5 years

4 - 8 Lacs

Gurugram

Work from Office

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

Data Engineer (internally known as a Sr. Associate Technical Consultant)
AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools and architectures, as well as designing and integrating with existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets.
Responsibilities:
- Build, operationalize and monitor data processing systems
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging cloud-native toolsets
- Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases
- Engineer and support data structures, including but not limited to SQL and NoSQL databases
- Engineer and maintain ELT processes for loading data lakes (Snowflake, Cloud Storage, Hadoop)
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions
- Respond to customer/team inquiries and assist in troubleshooting and resolving challenges
- Work with other scrum team members to estimate and deliver work inside of a sprint
- Research data questions, identify root causes, and interact closely with business users and technical resources

Qualifications:
- 3+ years of professional technical experience
- 3+ years of hands-on data warehousing experience
- 3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, and Snowflake
- 2+ years of experience with programming languages such as Python
- 3+ years of experience working in cloud environments (Azure)
- 2 years of experience with Redshift
- Strong client-facing communication and facilitation skills

Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP

Why AHEAD:
Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between.
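The ingest-and-process responsibility above can be sketched as a minimal batch step. This is a hedged illustration: the records and field names are invented, and a plain list stands in for both the cloud-native source (e.g. S3 or Kinesis) and the analytical platform.

```python
# Hedged sketch of one "ingest -> transform -> load" batch step; in a real
# pipeline the RAW list would come from a source system and `loaded` would
# be written to a warehouse or data lake.

RAW = [
    {"id": "1", "amount": "10.5", "region": "us-east"},
    {"id": "2", "amount": "bad", "region": "us-east"},   # malformed row
    {"id": "3", "amount": "7.25", "region": "eu-west"},
]

def transform(record):
    """Cast fields to their target types, rejecting rows that fail."""
    try:
        return {"id": int(record["id"]),
                "amount": float(record["amount"]),
                "region": record["region"]}
    except (KeyError, ValueError):
        return None  # a real pipeline would route this to a dead-letter store

loaded = [r for r in (transform(rec) for rec in RAW) if r is not None]
print(len(loaded))  # 2 rows survive validation
```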
We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning. USA employment benefits include medical, dental, and vision insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; and more. See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.

Posted 2 months ago


3 - 5 years

4 - 9 Lacs

Gurugram

Work from Office

AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.

Data Engineer (internally known as a Technical Consultant)
AHEAD is looking for a Technical Consultant Data Engineer to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools and architectures, as well as designing and integrating with existing transactional processing systems. As a Data Engineer, you will implement data pipelines to enable analytics and machine learning on rich datasets.
Responsibilities:
- Build, operationalize and monitor data processing systems
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging cloud-native toolsets
- Implement custom applications using tools such as Kinesis, Lambda and other cloud-native tools as required to address streaming use cases
- Engineer and support data structures, including but not limited to SQL and NoSQL databases
- Engineer and maintain ELT processes for loading data lakes (Snowflake, Cloud Storage, Hadoop)
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions
- Respond to customer/team inquiries and assist in troubleshooting and resolving challenges
- Work with other scrum team members to estimate and deliver work inside of a sprint
- Research data questions, identify root causes, and interact closely with business users and technical resources

Qualifications:
- 3+ years of professional technical experience
- 3+ years of hands-on data warehousing experience
- 3+ years of experience building highly scalable data solutions using Hadoop, Spark, Databricks, and Snowflake
- 2+ years of experience with programming languages such as Python
- 3+ years of experience working in cloud environments (Azure)
- 2 years of experience with Redshift
- Strong client-facing communication and facilitation skills

Key Skills: Python, Azure Cloud, Redshift, NoSQL, Git, ETL/ELT, Spark, Hadoop, Data Warehouse, Data Lake, Data Engineering, Snowflake, SQL/RDBMS, OLAP

Why AHEAD:
Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between.
We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning. USA employment benefits include medical, dental, and vision insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; and more. See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.

Posted 2 months ago


5 - 10 years

10 - 14 Lacs

Noida

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Power Business Intelligence (BI)
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication among team members and stakeholders.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Ensure effective communication among team members and stakeholders
- Identify and address any issues or roadblocks in the development process

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Power Business Intelligence (BI)
- Strong understanding of data visualization tools such as Power BI
- Experience with implementing various BI solutions
- Hands-on experience in designing and configuring BI applications
- Solid grasp of data analysis and interpretation

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Power Business Intelligence (BI)
- This position is based at our Noida office
- 15 years of full-time education is required

Posted 2 months ago


5 - 10 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: Python (Programming Language), Data Building Tool
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your day will involve working on data solutions and collaborating with teams to optimize data processes.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Develop and maintain data pipelines
- Ensure data quality and integrity
- Implement ETL processes

Professional & Technical Skills:
- Must-have skills: Proficiency in Snowflake Data Warehouse
- Good-to-have skills: Experience with Data Building Tool
- Strong understanding of data architecture
- Proficiency in SQL and database management
- Experience with cloud data platforms
- Knowledge of data modeling

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse
- This position is based at our Bengaluru office
- 15 years of full-time education is required

Posted 2 months ago


7 - 12 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: Python (Programming Language), Data Building Tool
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As a Data Engineer, you will be responsible for designing, developing, and maintaining data solutions for data generation, collection, and processing. Your role involves creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Develop and maintain data solutions for data generation, collection, and processing
- Create data pipelines to ensure efficient data flow
- Implement ETL processes for data migration and deployment

Professional & Technical Skills:
- Must-have skills: Proficiency in Snowflake Data Warehouse
- Good-to-have skills: Experience with Data Building Tool, Python (Programming Language)
- Strong understanding of data architecture and data modeling
- Experience in developing and optimizing ETL processes
- Knowledge of cloud data platforms and services

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Snowflake Data Warehouse
- This position is based at our Bengaluru office
- 15 years of full-time education is required

Posted 2 months ago


5 - 10 years

7 - 12 Lacs

Kolkata

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Google BigQuery
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years or more of full-time education

Summary:
As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Google BigQuery. Your typical day will involve collaborating with cross-functional teams, analyzing business requirements, and developing scalable solutions to meet the needs of our clients.

Roles & Responsibilities:
- Design, build, and configure applications to meet business process and application requirements using Google BigQuery
- Collaborate with cross-functional teams to analyze business requirements and develop scalable solutions to meet the needs of our clients
- Develop and maintain technical documentation, including design documents, test plans, and user manuals
- Ensure the quality of deliverables by conducting thorough testing and debugging of applications

Professional & Technical Skills:
- Must-have skills: Proficiency in Google BigQuery
- Good-to-have skills: Experience with other cloud-based data warehousing solutions such as Amazon Redshift or Snowflake
- Strong understanding of SQL and database design principles
- Experience with ETL tools and processes
- Experience with programming languages such as Python or Java

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google BigQuery
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions
- This position is based at our Bengaluru office

Posted 2 months ago


3 - 8 years

9 - 13 Lacs

Ahmedabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Expected to perform independently and become an SME
- Active participation and contribution in team discussions is required
- Contribute to providing solutions to work-related problems
- Collaborate with cross-functional teams to design and implement data platform solutions
- Develop and maintain data pipelines for efficient data processing
- Implement data security and privacy measures to protect sensitive information
- Optimize data storage and retrieval processes for improved performance
- Conduct regular data platform performance monitoring and troubleshooting

Professional & Technical Skills:
- Must-have skills: Proficiency in Databricks Unified Data Analytics Platform
- Strong understanding of cloud-based data platforms
- Experience with data modeling and database design
- Hands-on experience with ETL processes and tools
- Knowledge of data governance and compliance standards

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform
- This position is based at our Ahmedabad office
- 15 years of full-time education is required

Posted 2 months ago


12 - 17 years

14 - 19 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering
Good-to-have skills: Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Redshift, Tableau
Minimum 12 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that the applications are developed according to the specified requirements and standards, and that they are delivered on time and within budget. Your typical day will involve collaborating with the team to design and develop applications, configuring and customizing applications based on business needs, and troubleshooting and resolving any issues that arise during the development process. You will also be involved in testing and deploying applications, as well as providing support and maintenance for existing applications.

Roles & Responsibilities:
- Excellent SQL skills, with experience building and interpreting complex queries, creating logical and physical data models, and advanced SQL programming
- Advanced working SQL knowledge and experience working with relational databases and query authoring
- Must have experience designing, coding, testing, and analyzing applications leveraging RDBMSs (Redshift, MySQL and MS SQL Server)
- Assist the Delivery and Operations teams with customization requests and technical feasibility responses to clients
- Expert experience with performance tuning, optimization, and stored procedures
- Work with BRMs directly on planning, solutioning, assessment, urgent issues and consultation; represent the BRM in meetings when there is a time conflict or the BRM is unavailable
- Resolve all blockers so that offshore operations run smoothly during offshore hours
- Give the offshore team better context, resolving conflicts and understanding gaps, and bridging cultural differences to make communication easier
- Take initiative, drive continuous improvement, and promote best practices that have worked well in the past
- Build bridges outside the project boundary, helping other vendors such as PWC, DK, and Beghou work together to achieve client deliverables
- Build standard operations processes and continuous improvements to help EISAI IT and the business make decisions

Professional & Technical Skills:
- The resource should have experience in Data Engineering, Data Quality, AWS Redshift, SQL, Tableau, Enterprise Data Warehouse, Jira, ServiceNow, Confluence, UNIX shell scripting, and Python
- Must-have skills: Proficiency in Data Engineering
- Good-to-have skills: Experience with Oracle Procedural Language Extensions to SQL (PL/SQL), AWS Redshift, Tableau
- Strong understanding of statistical analysis and machine learning algorithms
- Experience with data visualization tools such as Tableau or Power BI
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity

Functional/Industry Skills:
- Life sciences/pharma experience on commercial datasets

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Engineering
- This position is based at our Pune office
- 15 years of full-time education is required
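The performance-tuning skill this posting calls for can be illustrated with a small sketch. This is hedged: sqlite3 stands in for Redshift/MySQL/MS SQL Server, and the table and index names are invented; the point shown is simply that the same equality predicate goes from a full table scan to an index search once a suitable index exists.

```python
# Hedged sketch of index-driven query tuning; sqlite3 is only a stand-in
# for a production RDBMS, and `claims`/`idx_payer` are invented names.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (claim_id INTEGER, payer TEXT)")
conn.executemany("INSERT INTO claims VALUES (?, ?)",
                 [(i, "P%02d" % (i % 7)) for i in range(1000)])

def plan(sql):
    """Join the query-plan detail strings for a statement."""
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT COUNT(*) FROM claims WHERE payer = 'P03'"
before = plan(query)   # full table scan: no index is available yet
conn.execute("CREATE INDEX idx_payer ON claims (payer)")
after = plan(query)    # the planner now searches the index instead
print(before)
print(after)
```

The same idea applies in Redshift-class warehouses, though there the levers are sort keys and distribution styles rather than B-tree indexes.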

Posted 2 months ago


7 - 12 years

9 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: PySpark
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring seamless communication within the team and with stakeholders.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Lead the application development process
- Ensure effective communication within the team and with stakeholders

Professional & Technical Skills:
- Must-have skills: Proficiency in AWS Glue
- Good-to-have skills: Experience with PySpark
- Strong understanding of ETL processes
- Experience in data transformation and integration
- Knowledge of cloud computing services
- Ability to troubleshoot and optimize data pipelines

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in AWS Glue
- This position is based at our Bengaluru office
- 15 years of full-time education is required

Posted 2 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: AWS Redshift
Good to have skills: PySpark
Minimum 5 year(s) of experience is required. Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Lead the application development process. Coordinate with stakeholders to gather requirements. Ensure timely delivery of projects.
Professional & Technical Skills: Must Have Skills: Proficiency in AWS Redshift. Good To Have Skills: Experience with PySpark. Strong understanding of ETL processes. Experience in data transformation and integration. Knowledge of cloud computing platforms. Ability to troubleshoot and resolve technical issues.
Additional Information: The candidate should have a minimum of 5 years of experience in AWS Redshift. This position is based at our Bengaluru office. A 15 years full-time education is required.
Qualifications: 15 years full time education

Posted 2 months ago

Apply

2 - 3 years

5 - 8 Lacs

Hyderabad

Work from Office

Master Data Management (MDM): Design and implement MDM strategies to ensure data consistency, quality, and accuracy across multiple systems.
ETL Development: Develop, maintain, and optimize ETL pipelines to extract, transform, and load data from various sources into data warehouses or data lakes.
AWS Services: Utilize AWS cloud technologies (such as Redshift, S3, Lambda, and EC2) to support scalable and cost-effective data storage and processing solutions.
Data Analysis & Reporting: Develop interactive dashboards and reports in Tableau to visualize key business metrics and provide actionable insights to stakeholders.
Python & SQL Development: Write efficient Python scripts and SQL queries to process, manipulate, and analyze large datasets.
Data Integration: Integrate data from various internal and external systems, ensuring seamless data flows and supporting business needs.
Data Quality Assurance: Ensure high data quality standards are met by performing regular data validation, cleansing, and enrichment processes.
Collaboration: Work closely with cross-functional teams (data scientists, business analysts, IT) to understand business requirements and translate them into effective data solutions.
Performance Optimization: Continuously monitor and optimize data processing and reporting systems to improve efficiency and reduce processing time.
Documentation: Maintain comprehensive documentation for ETL processes, data pipelines, and dashboard designs.
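The data validation, cleansing, and enrichment work described above follows a common pattern. Below is a minimal, hypothetical sketch of such a pass in plain Python; the field names and rules (trimming whitespace, lowercasing emails, requiring key fields) are illustrative assumptions, not this employer's actual schema or process.

```python
# Hypothetical data-quality pass: cleanse records, then split valid from rejected.
# Field names (customer_id, email) and rules are illustrative assumptions.
from typing import Iterable

REQUIRED_FIELDS = ("customer_id", "email")  # assumed master-data key fields

def cleanse_record(record: dict) -> dict:
    """Trim whitespace and normalize email casing so duplicates can match."""
    cleaned = {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}
    if isinstance(cleaned.get("email"), str):
        cleaned["email"] = cleaned["email"].lower()
    return cleaned

def validate(records: Iterable[dict]) -> tuple[list[dict], list[dict]]:
    """Return (valid, rejected): a record is valid if every required field is non-empty."""
    valid, rejected = [], []
    for rec in map(cleanse_record, records):
        if all(rec.get(f) for f in REQUIRED_FIELDS):
            valid.append(rec)
        else:
            rejected.append(rec)
    return valid, rejected
```

In a production pipeline the rejected list would typically be written to a quarantine table for review rather than silently dropped.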

Posted 2 months ago

Apply

3 - 8 years

5 - 10 Lacs

Gurugram

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: AWS Glue
Good to have skills: Data Building Tool
Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for creating efficient and scalable applications that align with the organization's goals and objectives. Your typical day will involve collaborating with cross-functional teams, analyzing user requirements, developing software solutions, and ensuring the applications are optimized for performance and usability.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with cross-functional teams to gather and analyze user requirements. Design, develop, and test software applications using AWS Glue. Ensure the applications are optimized for performance and usability. Troubleshoot and debug issues in the applications. Provide technical guidance and support to junior developers.
Professional & Technical Skills: Must Have Skills: Proficiency in AWS Glue. Good To Have Skills: Experience with Data Building Tool. Strong understanding of software development principles and best practices. Experience with cloud-based technologies and services, particularly AWS. Knowledge of database systems and SQL. Familiarity with version control systems, such as Git. Ability to work in an Agile development environment. Excellent problem-solving and analytical skills.
Additional Information: The candidate should have a minimum of 3 years of experience in AWS Glue. This position is based at our Bengaluru office. A 15 years full-time education is required.
Qualifications: 15 years full time education

Posted 2 months ago

Apply

3 - 8 years

5 - 10 Lacs

Pune

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required. Educational Qualification: 15 years full time education
Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.
Roles & Responsibilities: Expected to perform independently and become an SME. Active participation/contribution in team discussions is required. Contribute to providing solutions to work-related problems. Collaborate with cross-functional teams to design and implement data platform solutions. Develop and maintain data pipelines for efficient data processing. Optimize data storage and retrieval processes for improved performance. Implement data security measures to protect sensitive information. Conduct regular data platform performance evaluations and make recommendations for improvements.
Professional & Technical Skills: Must Have Skills: Proficiency in Databricks Unified Data Analytics Platform. Strong understanding of cloud-based data platforms. Experience with data modeling and database design. Hands-on experience with ETL processes and tools. Knowledge of data governance and compliance standards.
Additional Information: The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. This position is based at our Pune office. A 15 years full time education is required.
Qualifications: 15 years full time education

Posted 2 months ago

Apply

1 - 4 years

8 - 13 Lacs

Pune

Work from Office

Associate - Platform Support
Pune, India | Enterprise IT - 22699
Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
What you'll do: Handle request escalations from multiple sources such as email, telephone, and cases. Input requests into the problem-tracking database and assign the Tier 2 group for resolution. Solicit user feedback on services to upgrade service quality. Troubleshoot IT-related issues for a user population of ZS employees and clients using ZS proprietary software, via telephone and remote access. Contribute to improving user support by actively responding to queries and handling complaints. Establish best practices through the entire Tier 1 support process. Work in a 24x7 environment. Take on IT team projects as assigned. Proactively report unusual and recurring issues to management. Complete administrative tasks, such as tracking emails/tickets and assisting with organizational efforts.
What you'll bring: Bachelor's degree required; Master's degree desirable. 1-2 years of relevant experience in customer support, or as an IT ServiceDesk Associate (preferred). Knowledge of AWS services / AWS certifications preferred. Knowledge of AWS fundamentals, IAM, Redshift, S3 and WorkSpaces. Knowledge of service catalogs, incident management, case management, and change management; should have experience managing Service Desk emails. Prior work experience in a similar role is preferred. Ability to approach problem-solving methodically and analytically. Strong oral and written communication skills. Strong customer service orientation. Ability to work varied hours, enabling support in a 24/7 environment.
Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections. Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. To Complete Your Application: Candidates must possess work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find out more at www.zs.com

Posted 2 months ago

Apply

4 - 9 years

7 - 11 Lacs

Bengaluru

Work from Office

Explore, analyze, and visualize our healthcare and insurance data to provide insights to stakeholders. Design reports and dashboards to monitor metrics and add value to the business. Identify pain points in existing processes and suggest improvements backed by data. Use existing frameworks, or build new ones, to develop and maintain ETL processes. Build data models to support product growth, when required.
What we are looking for: You have 4+ years of experience working in analytics and product-driven roles. You have advanced SQL skills. You've worked in cross-functional teams involving Product, Engineering, Design and Research, and can manage senior stakeholders. You're a self-starter who is comfortable working autonomously.
Analytics Stack:
Analytics: Python / R / SQL + Excel
Database: PostgreSQL, Amazon Redshift, FireStore
Warehouse: Amazon Redshift
ETL: Lots of Python + custom-made tooling
Business Intelligence/Visualisation: Python/R libraries (location data) + any BI tool
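The metric rollups that feed such reports and dashboards reduce to grouping and aggregation. Below is a minimal, hypothetical sketch in plain Python; in practice this would usually be a SQL GROUP BY against the Redshift warehouse, and the column names here (month, amount) are invented for illustration.

```python
# Hypothetical monthly rollup of claim amounts (column names are illustrative).
# Equivalent in spirit to: SELECT month, SUM(amount) FROM claims GROUP BY month;
from collections import defaultdict

def monthly_totals(claims):
    """claims: iterable of dicts with 'month' (YYYY-MM) and 'amount' -> {month: total}."""
    totals = defaultdict(float)
    for claim in claims:
        totals[claim["month"]] += float(claim["amount"])
    return dict(totals)
```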

Posted 2 months ago

Apply

8 - 13 years

13 - 18 Lacs

Pune

Work from Office

Position Summary: We are looking for a highly skilled and experienced Data Engineering Manager to lead our data engineering team. The ideal candidate will possess a strong technical background, strong project management abilities, and excellent client handling/stakeholder management skills. This role requires a strategic thinker who can drive the design, development and implementation of data solutions that meet our clients' needs while ensuring the highest standards of quality and efficiency.
Job Responsibilities:
Technology Leadership: Lead and guide the team, independently or with little support, to design, implement and deliver complex cloud-based data engineering / data warehousing project assignments.
Solution Architecture & Review: Expertise in conceptualizing solution architecture and low-level design in a range of data engineering (Matillion, Informatica, Talend, Python, dbt, Airflow, Apache Spark, Databricks, Redshift) and cloud hosting (AWS, Azure) technologies. Manage projects in a fast-paced agile ecosystem, ensuring quality deliverables within stringent timelines. Responsible for risk management, maintaining the risk documentation and mitigation plans. Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation and deployments.
Communication & Logical Thinking: Demonstrates strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment. Capable of effectively presenting and defending team viewpoints, while securing buy-in from both technical and client stakeholders.
Client Relationship: Manage client relationships and client expectations independently. Should be able to deliver results back to the client independently. Should have excellent communication skills.
Education: BE/B.Tech or Master of Computer Application.
Work Experience: Should have expertise and 8+ years of working experience in at least two ETL tools among Matillion, dbt, PySpark, Informatica, and Talend. Should have expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, and Oracle. Should have strong Data Warehousing, Data Integration and Data Modeling fundamentals such as Star Schema, Snowflake Schema, Dimension Tables and Fact Tables. Strong experience with SQL building blocks, creating complex SQL queries and procedures. Experience in AWS or Azure cloud and its service offerings. Aware of techniques such as Data Modelling, performance tuning and regression testing. Willingness to learn and take ownership of tasks. Excellent written/verbal communication and problem-solving skills. Understanding and working experience with Pharma commercial data sets like IQVIA, Veeva, Symphony, Liquid Hub, Cegedim etc. would be an advantage. Hands-on in Scrum methodology (sprint planning, execution and retrospection).
Behavioural Competencies: Teamwork & Leadership; Motivation to Learn and Grow; Ownership; Cultural Fit; Talent Management.
Technical Competencies: Problem Solving; Lifescience Knowledge; Communication; Agile; PySpark; Data Modelling; Matillion; Designing technical architecture; AWS Data Pipeline.

Posted 2 months ago

Apply

5 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office

Position Summary: Delivery Director - Big Data & Cloud Data Management. This position is part of the senior leadership in data warehousing and Business Intelligence areas; someone who can work on multiple project streams and clients for better business decision making, especially in the Lifesciences/Pharmaceutical domain.
Functional Domain: Life Sciences, focusing on the Pharma commercial aspect. Technology Domain: Big Data / Cloud Data Management (EDW / Data Lake / Big Data). Location: Noida/Gurgaon. Qualification: B.Tech. or equivalent degree is a minimum criterion.
Job Responsibilities: Lead the delivery of major engagements on a day-to-day basis, providing hands-on technical leadership to the delivery team. Global program planning and execution for Data Management programs across Enterprise DWH, Data Lake, Big Data, Business Intelligence & Reporting, and Master Data Management solutions. This includes setting up project plans, scopes, budgets and staffing resources, leading client workshops, creating and coordinating final deliverables, and providing high-quality, insightful advice to our clients on Cloud Data Management strategy, architecture, operating model etc. Drive data management business requirements across pharma commercial data sets in a workshop setting with client business teams and onshore counterparts. Be able to guide the team to deliver the logical and physical data models for Pharma Commercial Data Lakes and Warehouses. Maintain responsibility for risk management, quality and profitability on engagements, and liaise with the client lead and onshore delivery partners. Drive clear and timely communication across program stakeholders to ensure everyone is on the same page on weekly progress. Must maintain high standards of quality and thoroughness.
Should be able to set up adequate controls within the team to perform code reviews and monitor quality. Represent the Business Information Management (BIM) practice as an SME in the cloud data management space. Coach, mentor and develop middle and junior level staff, with performance management of all team members rolling up to the role. Develop the manager layer to be leaders of the future. Be known as a thought leader in a specific aspect of the Information Management technology spectrum or Pharma domain. Direct the training and skill enhancement of the team, in line with pipeline opportunities. Understand the wider product and service offerings of Axtria and coordinate these to ensure our clients get the best level of advice and support. Should be able to work on large Cloud data management deals within the Life Sciences domain. Work with Vertical/Onshore teams to drive business development and growth for the practice.
Education: BE/B.Tech or Master of Computer Application.
Work Experience: The candidate should have 14+ years of prior experience in delivering customer-focused Cloud Data Management solutions: Enterprise Data Warehouse, Enterprise Data Lake, Master Data Management systems, Business Intelligence & Reporting solutions, IT Architecture Consulting, Cloud Platforms (AWS/Azure), and SaaS/PaaS based solutions in the Life Science industry. Minimum of 5 years of relevant experience in the Pharma domain (must). Should have successfully program managed/solutioned 2-3 end-to-end DW Cloud implementations on AWS/Redshift/Cloud in Pharma or Life Sciences business domains (must). The candidate must have prior hands-on working experience with big data and ETL technologies. Tech stack exposure (hands-on or project management on a few of these) from amongst AWS, ETL, data modelling, Python, Qlik/Tableau/MicroStrategy, Dataiku, Databricks, Airflow, Graph DB, and full stack. Good to have working knowledge of at least 2 of the following tools: QlikView, Qlik Sense, Tableau, MicroStrategy, Spotfire, MS PBI.
Must have a very good understanding of the end-to-end pharma commercial landscape covering both enterprise and syndicated data sets. Decent exposure to business processes like alignment, market definition, segmentation, sales crediting and activity metrics calculation, and exposure to advanced analytics such as next-best-action implementation. Ability to handle large teams of IT professionals. Ability to lead large RFP responses. Ability to handle P&L. Ability to handle different stakeholders: customers, vendors, and internal teams. Strong project/program management capabilities. Strong written and verbal communication skills.
Behavioural Competencies: Project Management; Communication; Attention to P&L Impact; Teamwork & Leadership; Motivation to Learn and Grow; Lifescience Knowledge; Ownership; Cultural Fit; Scale of resources managed; Scale of revenues managed/delivered; Problem solving; Talent Management; Capability Building / Thought Leadership; Account management.
Technical Competencies: Delivery Management - BIM / Cloud Info Management; AWS EMR; Amazon Redshift; Business Intelligence (BI).

Posted 2 months ago

Apply

5 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office

Position Summary: This position is part of the technical leadership in data warehousing and Business Intelligence areas; someone who can work on multiple project streams and clients for better business decision making, especially in the Lifesciences/Pharmaceutical domain.
Job Responsibilities:
Technology Leadership: Lead and guide the team, independently or with little support, to design, implement and deliver complex reporting and BI project assignments.
Technical Portfolio: Expertise in a range of BI and hosting technologies like the AWS stack (Redshift, EC2), QlikView, QlikSense, Tableau, MicroStrategy, and Spotfire.
Project Management: Get accurate briefs from the client and translate them into tasks for team members with priorities and timeline plans. Must maintain high standards of quality and thoroughness, and should be able to monitor the accuracy and quality of others' work. Ability to think in advance about potential risks and mitigation plans.
Logical Thinking: Able to think analytically, using a systematic and logical approach to analyze data, problems, and situations. Must be able to guide team members in analysis.
Client Relationship: Manage client relationships and client expectations independently. Should be able to deliver results back to the client independently. Should have excellent communication skills.
Education: BE/B.Tech or Master of Computer Application.
Work Experience: Minimum of 5 years of relevant experience in the Pharma domain. Technical: should have 15 years of hands-on experience in the following tools; must have working knowledge of at least 2 of the following: QlikView, QlikSense, Tableau, MicroStrategy, Spotfire. Aware of techniques such as UI design, report modeling, performance tuning and regression testing. Basic expertise with MS Excel; advanced expertise with SQL. Functional: should have experience in the following concepts and technologies. Specifics: Pharma data sources like IMS, Veeva, Symphony, Cegedim etc.
Business processes like alignment, market definition, segmentation, sales crediting, and activity metrics calculation.
0-2 years of relevant experience in a large/midsize IT services/Consulting/Analytics Company
1-3 years of relevant experience in a large/midsize IT services/Consulting/Analytics Company
3-5 years of relevant experience in a large/midsize IT services/Consulting/Analytics Company
Behavioural Competencies: Project Management; Communication; Attention to P&L Impact; Teamwork & Leadership; Motivation to Learn and Grow; Lifescience Knowledge; Ownership; Cultural Fit; Scale of resources managed; Scale of revenues managed/delivered; Problem solving; Talent Management; Capability Building / Thought Leadership.
Technical Competencies

Posted 2 months ago

Apply

2 - 5 years

5 - 9 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are looking for a Data Scientist with a product engineering mindset to join our fast-paced, data-driven team. You'll work at the intersection of data science, software engineering, and product development, helping to design and deliver scalable, data-centric solutions that drive business value and enhance user experience, particularly in the sports domain. Collaborate with cross-functional teams (Product, Engineering, Design, and Business) to identify opportunities where data science can improve products and services. Design, develop, and deploy machine learning models and data pipelines in production environments using Python, R, and cloud-native services on AWS and Microsoft Azure. Translate complex data into clear product insights and actionable recommendations. Contribute to the product lifecycle by embedding data science into product discovery, experimentation, and iterative improvements. Maintain a strong focus on the user experience, ensuring data-driven solutions are practical and aligned with product goals. Communicate findings effectively with both technical and non-technical stakeholders. Location: Delhi, Mumbai, Bengaluru, Hyderabad, Kolkata, Pune, Chennai, Remote

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Pune

Work from Office

As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Design and develop data solutions; design and implement efficient data processing pipelines using AWS services like AWS Glue, AWS Lambda, Amazon S3, and Amazon Redshift. Develop and manage ETL (Extract, Transform, Load) workflows to clean, transform, and load data into structured and unstructured storage systems. Build scalable data models and storage solutions in Amazon Redshift, DynamoDB, and other AWS services. Data Integration: Integrate data from multiple sources including relational databases, third-party APIs, and internal systems to create a unified data ecosystem. Work with data engineers to optimize data workflows and ensure data consistency, reliability, and performance. Automation and Optimization: Automate data pipeline processes to ensure efficiency. Preferred technical and professional experience: Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products, platform, and customer-facing systems.
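The extract-transform-load workflow described above has the same shape regardless of which AWS service runs it. Below is a minimal local sketch of that pattern in plain Python; the record fields (order_id, status, amount) are hypothetical, and in a real job the extract and load steps would call Glue, S3, or Redshift APIs rather than work on in-memory data.

```python
# Minimal ETL sketch (hypothetical fields; AWS service calls replaced by
# in-memory stand-ins so the pattern itself is visible and runnable).
import json

def extract(raw_lines):
    """Parse newline-delimited JSON, skipping malformed rows."""
    for line in raw_lines:
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            continue  # a real job would route these to a dead-letter store

def transform(rows):
    """Keep completed orders and derive an integer total_cents column."""
    for row in rows:
        if row.get("status") == "completed":
            yield {"order_id": row["order_id"],
                   "total_cents": int(round(float(row["amount"]) * 100))}

def load(rows):
    """Stand-in for a COPY into Redshift: collect the rows into a list."""
    return list(rows)
```

Keeping the three stages as separate generators mirrors how managed ETL services stage data, and makes each step testable in isolation.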

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Pune

Work from Office

Design, develop, and manage our data infrastructure on AWS, with a focus on data warehousing solutions. Write efficient, complex SQL queries for data extraction, transformation, and loading. Utilize dbt for data modelling and transformation. Use Python for data engineering tasks, demonstrating strong work experience in this area. Implement scheduling tools like Airflow, Control-M, or shell scripting to automate data processes and workflows. Participate in an Agile environment, adapting quickly to changing priorities and requirements. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Proven expertise in AWS technologies, with a strong understanding of AWS services; experience in Redshift is optional. Experience in data warehousing with a solid grasp of SQL, including the ability to write complex queries. Proficiency in Python, demonstrating good work experience in data engineering tasks. Familiarity with scheduling tools like Airflow, Control-M, or shell scripting. Excellent communication skills and a willing attitude towards learning. Preferred technical and professional experience: Knowledge of dbt for data modelling and transformation is a plus. Experience with PySpark or Spark is highly desirable. Familiarity with DevOps, CI/CD, and Airflow is beneficial. Experience in Agile environments is a nice-to-have.
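Schedulers like Airflow and Control-M fundamentally execute tasks in dependency order over a directed acyclic graph. The core idea can be sketched with a plain topological sort from the Python standard library; the task names below are made up for illustration, mirroring how an Airflow DAG might wire `extract >> transform >> load`.

```python
# Hypothetical pipeline DAG: each task maps to the set of tasks it depends on.
# graphlib.TopologicalSorter (stdlib, Python 3.9+) yields a valid run order.
from graphlib import TopologicalSorter

deps = {
    "load": {"transform"},
    "transform": {"extract", "validate"},
    "validate": {"extract"},
    "extract": set(),
}

order = list(TopologicalSorter(deps).static_order())
```

A real scheduler adds retries, backfills, and parallel execution of independent tasks on top of exactly this ordering guarantee.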

Posted 2 months ago

Apply

4 - 9 years

12 - 16 Lacs

Hyderabad

Work from Office

As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform. Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process data with Spark, Python, PySpark, Scala, and Hive, HBase or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala, and Big Data technologies for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on AWS; exposure to streaming solutions and message brokers like Kafka. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB. Good to excellent SQL skills. Preferred technical and professional experience: Certification in AWS, and Databricks or Cloudera Spark certified developers. AWS S3, Redshift, and EMR for data storage and distributed processing. AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.
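The Spark-style transformations described above (filter, then aggregate by key) follow the same map/filter/reduce shape whether they run on a cluster or locally. This local sketch uses illustrative event data; with PySpark the equivalent would be roughly `rdd.filter(...).map(...).reduceByKey(add)`, but here a plain Counter keeps it runnable without a cluster.

```python
# Local sketch of a Spark-style keyed aggregation (event data is illustrative).
from collections import Counter

def count_logins_by_user(events):
    """events: iterable of (user_id, event_type) pairs -> Counter of logins per user."""
    logins = (user for user, etype in events if etype == "login")
    return Counter(logins)
```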

Posted 2 months ago

Apply

7 - 12 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Talend ETL
Good to have skills: NA
Minimum 7.5 year(s) of experience is required. Educational Qualification: 15 years full time education
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, addressing any challenges that arise, and providing guidance to your team members. You will also engage in strategic discussions to align project goals with organizational objectives, ensuring that the applications developed meet the highest standards of quality and functionality. Your role will require you to balance technical expertise with effective communication, fostering a collaborative environment that encourages innovation and problem-solving.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Facilitate knowledge sharing sessions to enhance team capabilities. Monitor project progress and implement necessary adjustments to meet deadlines.
Professional & Technical Skills: Must Have Skills: Proficiency in Talend ETL. Strong understanding of data integration processes and methodologies. Experience with data warehousing concepts and practices. Familiarity with SQL and database management systems. Ability to troubleshoot and optimize ETL processes for performance.
Additional Information: The candidate should have a minimum of 7.5 years of experience in Talend ETL. This position is based at our Bengaluru office. A 15 years full time education is required.
Qualification: 15 years full time education

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies