8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad. Contract Duration: 6 Months. Experience Required: 8+ years (overall), 5+ years (relevant).
Primary Skills: Python, Spark (PySpark), SQL, Delta Lake.
Key Responsibilities & Skills: Strong understanding of Spark core: RDDs, DataFrames, Datasets, Spark SQL, Spark Streaming. Proficient in Delta Lake features: time travel, schema evolution, data partitioning. Experience designing and building data pipelines using Spark and Delta Lake. Solid experience in Python, Scala, or Java for Spark development. Knowledge of data ingestion from files, APIs, and databases. Familiarity with data validation and quality best practices. Working knowledge of data warehouse concepts and data modeling. Hands-on with Git for code versioning. Exposure to CI/CD pipelines and containerization tools.
Nice to have: experience with ETL tools such as DataStage, Prophecy, Informatica, or Ab Initio.
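For concreteness, here is a brief, hedged sketch of two of the Delta Lake features this listing names, schema evolution and time travel, in PySpark. The table path and columns are invented for the example, and the delta-spark package (with its Spark JARs) is assumed to be available.

```python
# Minimal PySpark + Delta Lake sketch: schema evolution and time travel.
# Assumes the delta-spark package and its JARs are installed; the table
# path and column names are illustrative, not from any specific pipeline.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.delta.catalog.DeltaCatalog",
    )
    .getOrCreate()
)

path = "/tmp/delta/orders"  # hypothetical table location

# Version 0: initial write.
spark.createDataFrame([(1, "created")], ["order_id", "status"]) \
    .write.format("delta").mode("overwrite").save(path)

# Schema evolution: mergeSchema lets this append add a new 'country' column.
spark.createDataFrame([(2, "shipped", "IN")], ["order_id", "status", "country"]) \
    .write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as it looked at version 0.
spark.read.format("delta").option("versionAsOf", 0).load(path).show()
```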
Posted 4 days ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Sanofi: We are an innovative global healthcare company, driven by one purpose: we chase the miracles of science to improve people's lives. Our team, across some 100 countries, is dedicated to transforming the practice of medicine by working to turn the impossible into the possible. We provide potentially life-changing treatment options and life-saving vaccine protection to millions of people globally, while putting sustainability and social responsibility at the center of our ambitions. Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions that will accelerate Manufacturing & Supply performance and help bring drugs and vaccines to patients faster, to improve health and save lives.
Who You Are: You are a dynamic Data Engineer interested in challenging the status quo to design and develop the globally scalable solutions needed by Sanofi's advanced analytics, AI, and ML initiatives for the betterment of our global patients and customers. You are a valued influencer and leader who has contributed to making key datasets available to data scientists, analysts, and consumers throughout the enterprise to meet vital business needs. You have a keen eye for improvement opportunities while continuing to fully comply with all data quality, security, and governance standards.
Our Vision for Digital, Data Analytics and AI: Join us on our journey to enable Sanofi's digital transformation by becoming an AI-first organization. This means: AI Factory - versatile teams operating in cross-functional pods, utilizing digital and data resources to develop AI products, bringing data management, AI, and product development skills to products, programs, and projects to create an agile, fulfilling, and meaningful work environment. Leading-Edge Tech Stack - experience building products that will be deployed globally on a leading-edge tech stack. World-Class Mentorship and Training - working with renowned leaders and academics in machine learning to further develop your skill set. There are multiple vacancies across our Digital profiles and the NA region. Further assessments will be completed to determine the specific function and level of hired candidates.
Job Highlights: Propose and establish technical designs to meet business and technical requirements. Develop and maintain data engineering solutions based on requirements and design specifications using appropriate tools and technologies. Create data pipelines / ETL pipelines and optimize their performance. Test and validate developed solutions to ensure they meet requirements. Create design and development documentation based on standards for knowledge transfer, training, and maintenance. Work with business and product teams to understand requirements and translate them into technical needs. Adhere to and promote best practices and standards for code management, automated testing, and deployments. Leverage existing standard data pipelines within Sanofi, or create new ones, to bring value through business use cases. Develop automated tests for CI/CD pipelines. Gather and organize large, complex data assets and perform relevant analysis. Conduct peer reviews for quality, consistency, and rigor of production-level solutions. Actively contribute to the Data Engineering community and define leading practices and frameworks. Communicate results and findings in a clear, structured manner to stakeholders. Remain up to date on the company's standards, industry practices, and emerging technologies.
Key Functional Requirements & Qualifications: Experience working with cross-functional teams to solve complex data architecture and engineering problems. Demonstrated ability to learn new data and software engineering technologies in a short amount of time. Good understanding of agile/scrum development processes and concepts. Able to work in a fast-paced, constantly evolving environment and manage multiple priorities. Strong technical analysis and problem-solving skills related to data and technology solutions. Excellent written, verbal, and interpersonal skills, with the ability to communicate ideas, concepts, and solutions to peers and leaders. Pragmatic and capable of solving complex issues, with technical intuition and attention to detail. Service-oriented, flexible, and approachable team player. Fluent in English (other languages a plus).
Key Technical Requirements & Qualifications: Bachelor's degree or equivalent in Computer Science, Engineering, or a relevant field. 4 to 5+ years of experience in data engineering, integration, data warehousing, business intelligence, business analytics, or a comparable role with relevant technologies and tools, such as Spark/Scala, Informatica/IICS/dbt. Understanding of data structures and algorithms. Working knowledge of scripting languages (Python, shell scripting). Experience with cloud-based data platforms (Snowflake is a plus). Experience with job scheduling and orchestration (Airflow is a plus; a sketch of a simple Airflow pipeline follows below). Good knowledge of SQL and relational database technologies/concepts. Experience working with data models and query tuning.
Nice to Haves: Experience working in the life sciences/pharmaceutical industry is a plus. Familiarity with data ingestion through batch, near-real-time, and streaming environments. Familiarity with data warehouse concepts and architectures (data mesh a plus). Familiarity with source code management tools (GitHub a plus).
Pursue Progress. Discover Extraordinary. Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people: people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let's be those people.
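As a hedged illustration of the scheduled pipeline work this posting describes, here is a minimal Airflow sketch (Airflow is named above as a plus). The DAG id, task functions, and schedule are invented for the example, not taken from Sanofi's stack, and it assumes Airflow 2.x.

```python
# A minimal, assumed Airflow 2.x DAG: extract then transform, daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull rows from a hypothetical source system.
    return [{"id": 1, "value": 10.0}, {"id": 2, "value": -1.0}]


def transform(ti):
    # Pull the extract task's output via XCom and apply a quality filter.
    rows = ti.xcom_pull(task_ids="extract")
    return [r for r in rows if r["value"] > 0]


with DAG(
    dag_id="example_etl",              # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```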
Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
Posted 4 days ago
5.0 - 10.0 years
0 Lacs
Bhopal, Madhya Pradesh, India
On-site
Role: Data Engineers (5-10 Years of Experience). Experience: 5-10 years. Location: Gurgaon, Pune, Bangalore, Chennai, Jaipur, and Bhopal. Skills: Python/Scala, SQL, ETL, Big Data (Spark, Kafka, Hive), Cloud (AWS/Azure/GCP), Data Warehousing.
Responsibilities: Build and maintain robust, scalable data pipelines and systems. Design and implement ETL processes to support analytics and reporting. Optimize data workflows for performance and scalability. Collaborate with data scientists, analysts, and engineering teams. Ensure data quality, governance, and security compliance.
Required Skills: Strong experience with Python/Scala, SQL, and ETL tools. Hands-on with Big Data technologies (Hadoop, Spark, Kafka, Hive, etc.). Proficiency in cloud platforms (AWS/GCP/Azure). Experience with data warehousing (e.g., Redshift, Snowflake, BigQuery). Familiarity with CI/CD pipelines and version control systems.
Nice to Have: Experience with Airflow, Databricks, or dbt. Knowledge of real-time data processing (a short streaming-ingestion sketch follows below). (ref:hirist.tech)
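A hedged sketch of the Spark-plus-Kafka combination listed above: a Structured Streaming job that reads JSON events from a Kafka topic and lands them as Parquet. The broker address, topic, schema, and paths are all placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Assumed streaming ingestion sketch: Kafka -> parse JSON -> Parquet sink.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Hypothetical event schema for the example.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/events")              # placeholder sink path
    .option("checkpointLocation", "/chk/events")  # required for recovery
    .start()
)
query.awaitTermination()
```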
Posted 4 days ago
10.0 years
0 Lacs
Itanagar, Arunachal Pradesh, India
Remote
Our Company: We're Hitachi Digital, a company at the forefront of digital transformation and the fastest growing division of Hitachi Group. We're crucial to the company's strategy and ambition to become a premier global player in the massive and fast-moving digital transformation market. Our group companies, including GlobalLogic, Hitachi Digital Services, Hitachi Vantara, and more, offer comprehensive services that span the entire digital lifecycle, from initial idea to full-scale operation and the infrastructure to run it on. Hitachi Digital represents One Hitachi, integrating domain knowledge and digital capabilities, and harnessing the power of the entire portfolio of services, technologies, and partnerships, to accelerate synergy creation and make real-world impact for our customers and society as a whole. Imagine the sheer breadth of talent it takes to unleash a digital future. We don't expect you to "fit" every requirement; your life experience, character, perspective, and passion for achieving great things in the world are equally important to us. Preferred job location: Bengaluru, Hyderabad, Pune, New Delhi, or Remote.
The Team: Hitachi Digital is a leader in digital transformation, leveraging advanced AI and data technologies to drive innovation and efficiency across various operational companies (OpCos) and departments. We are seeking a highly experienced Lead Data Engineer to join our dynamic team and contribute to the development of robust data solutions and applications.
The Role: Lead the design, development, and implementation of data engineering solutions with a focus on Google BigQuery. Develop and optimize complex SQL queries and data pipelines in BigQuery. Implement and integrate VectorAI and Agent Workspace for Google Gemini into data solutions. Lead the development of high-performance data ingestion processes using modern ETL/ELT practices. Collaborate with engineers to establish best practices for data system creation, ensuring data quality, integrity, and proper documentation. Continuously improve reporting and analysis by automating processes and streamlining workflows. Conduct research and stay updated on the latest advancements in data engineering and technologies. Troubleshoot and resolve complex issues related to data systems and applications. Document development processes, methodologies, and best practices. Mentor junior developers and participate in code reviews, providing constructive feedback to team members. Provide strategic direction and leadership in data engineering and technology adoption.
What You'll Bring: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field. 10+ years of experience in data technologies. 5+ years of extensive experience in migrating data workloads to BigQuery on GCP. Strong programming skills in languages such as Python, Java, or SQL. Technical proficiency in BigQuery and other related tools on GCP. GCP certifications in the data space. Knowledge of cloud platforms, particularly Google Cloud Platform (GCP). Experience with VectorAI and Agent Workspace for Google Gemini. Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills and the ability to convey complex technical concepts to non-technical stakeholders. Proven leadership skills and experience in guiding development projects from conception to deployment. Preferred Qualifications: Familiarity with data engineering tools and techniques.
Previous experience in a similar role within a tech-driven company.
About Us: We're a global, 1000-strong, diverse team of professional experts, promoting and delivering Social Innovation through our One Hitachi initiative (OT x IT x Product) and working on projects that have a real-world impact. We're curious, passionate, and empowered, blending our legacy of 110 years of innovation with shaping our future. Here you're not just another employee; you're part of a tradition of excellence and a community working towards creating a digital future.
Championing diversity, equity, and inclusion: Diversity, equity, and inclusion (DEI) are integral to our culture and identity. Diverse thinking, a commitment to allyship, and a culture of empowerment help us achieve powerful results. We want you to be you, with all the ideas, lived experience, and fresh perspective that brings. We support your uniqueness and encourage people from all backgrounds to apply and realize their full potential as part of our team.
How We Look After You: We help take care of your today and tomorrow with industry-leading benefits, support, and services that look after your holistic health and wellbeing. We're also champions of life balance and offer flexible arrangements that work for you (role and location dependent). We're always looking for new ways of working that bring out our best, which leads to unexpected ideas. So here, you'll experience a sense of belonging, and discover autonomy, freedom, and ownership as you work alongside talented people you enjoy sharing knowledge with. We're proud to say we're an equal opportunity employer and welcome all applicants for employment without attention to race, colour, religion, sex, sexual orientation, gender identity, national origin, veteran status, age, disability status, or any other protected characteristic. Should you need reasonable accommodations during the recruitment process, please let us know so that we can do our best to set you up for success.
Posted 4 days ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title: Data Analyst - Insurance Domain. Duration: 6 Months. Employment Type: Contractual. Work Location: Gurugram & Bangalore (priority) | Chennai, Pune, Mumbai (secondary).
Job Description: We are looking for an experienced Data Analyst with strong domain knowledge in insurance and expertise in handling end-to-end data workflows. The ideal candidate should have hands-on experience in data modelling, data analysis, data architecture, and data visualization, along with advanced skills in modern data tools such as Azure, Python, Spark, and PySpark, and is expected to deliver insights that support strategic business decisions in the insurance sector.
Key Responsibilities: Analyze large, complex datasets to identify trends, patterns, and insights relevant to the insurance business. Design and implement data models to support analytical and operational reporting needs. Build and maintain scalable data architectures using cloud platforms such as Azure. Develop efficient data pipelines and ETL processes using Python, Spark, and PySpark. Apply domain expertise to validate and ensure data accuracy, relevance, and usability. Create clear and insightful data visualizations and dashboards using open-source or enterprise tools (excluding Power BI). Collaborate with stakeholders to translate business requirements into analytical solutions. Ensure best practices in data governance, security, and documentation.
Key Skills Required: 6+ years of experience as a Data Analyst. 3 to 4 years of hands-on experience in the insurance domain. Expertise in data modelling, data analysis, and data architecture. Proficiency in Azure, Python, Spark, and PySpark. Strong SQL skills for data extraction, transformation, and analysis. Experience with data visualization tools (excluding Power BI). Excellent communication and stakeholder management skills. Strong analytical thinking and problem-solving abilities. (ref:hirist.tech)
Posted 4 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
You Lead the Way. We've Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you, with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact: every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard, and feels like they belong. Join Team Amex and let's lead the way together. American Express has embarked on an exciting transformation driven by an energetic new team of high performers. This is a great opportunity to join the Customer Marketing organization within American Express Technologies and become a driver of this exciting journey. We are looking for a highly skilled and experienced Senior Engineer with a history of building Big Data, GCP cloud, Python, and Spark applications. The Senior Engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders. Joining the Enterprise Marketing team, this role will be focused on the delivery of innovative solutions to satisfy the needs of our business. As an agile team we work closely with our business partners to understand what they require, and we strive to continuously improve as a team. We pride ourselves on a culture of kindness and positivity, and a continuous focus on supporting colleague development to help you achieve your career goals. We lead with integrity, and we emphasize work/life balance for all of our teammates. How will you make an impact in this role? There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing: As a part of our team, you will be developing innovative, high quality, and robust operational engineering capabilities. Develop software in our technology stack, which is constantly evolving but currently includes Big Data, Spark, Python, Scala, GCP, and the Adobe suite (such as Customer Journey Analytics). Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps. Create technical solution designs to meet business requirements. Define best practices to be followed by the team. Take your place as a core member of an agile team driving the latest development practices. Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods. Suggest and recommend solution architecture to resolve business problems. Perform peer code review and participate in technical discussions with the team on the best solutions possible. As part of our diverse tech team, you can architect, code, and ship software that makes us an essential part of our customers' digital lives.
Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology of #TeamAmex.
Minimum Qualifications: BS or MS degree in computer science, computer engineering, or another technical discipline, or equivalent work experience. 8+ years of hands-on software development experience with Big Data & analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP BigQuery, Bigtable, Airflow. Working knowledge of the Adobe suite, such as Adobe Experience Platform and Adobe Customer Journey Analytics. Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability. Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable. Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies. Certification in a cloud platform (GCP Professional Data Engineer) is a plus. Understanding of distributed (multi-tiered) systems, data structures, algorithms, and design patterns. Strong object-oriented programming skills and design patterns. Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven). Good knowledge of and experience with configuration management tools like GitHub. Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively. Looks proactively beyond the obvious for continuous improvement opportunities. Communicates effectively with product and cross-functional teams. Willingness to learn new technologies and leverage them to their optimal potential. Understanding of various SDLC methodologies; familiarity with agile and scrum ceremonies.
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities. American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law.
Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 4 days ago
8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Role: Seeking a highly skilled Senior Data Engineer with 8 years of experience to join our dynamic team.
Requirements: Experienced in architecting, building, and maintaining end-to-end data pipelines using Python and Spark in Databricks. Proficient in designing and implementing scalable data lake and data warehouse solutions on Azure, including Azure Data Lake, Data Factory, Synapse, and Azure SQL. Hands-on experience leading the integration of complex data sources and the development of efficient ETL processes. Champions best practices in data governance, data quality, and data security across the organization. Adept at collaborating closely with data scientists, analysts, and business stakeholders to deliver high-impact data solutions.
Posted 4 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Design, develop, and maintain ETL (Extract, Transform, Load) pipelines using AWS services such as Glue, Lambda, Step Functions, and Data Pipeline. Automate data ingestion from various sources such as databases, APIs, logs, and streaming data. Optimize data processing for performance and cost efficiency. Work with Amazon Redshift, Athena, or Snowflake to build and optimize data warehouses for analytics. Optimize queries, indexing, and partitioning for performance improvements. Work with data scientists, analysts, and software engineers to deliver data solutions. Understand business requirements and translate them into scalable data solutions. Maintain documentation and ensure best practices for data engineering in AWS. Strong SQL skills required. A sketch of a Glue-style job follows below.
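As a hedged illustration of the Glue-based ETL work described above, here is a minimal job script: it reads a table from the Glue Data Catalog, applies simple cleanup, and writes Parquet to S3. The database, table, column, and bucket names are placeholders, and the script assumes the AWS Glue job runtime, which provides the awsglue library.

```python
# Assumed AWS Glue ETL sketch: catalog read -> cleanup -> Parquet on S3.
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame

glue = GlueContext(SparkContext.getOrCreate())

# Read a source table registered in the Glue Data Catalog (names invented).
src = glue.create_dynamic_frame.from_catalog(
    database="analytics",      # placeholder database
    table_name="raw_orders",   # placeholder table
)

# Simple cleanup via the underlying Spark DataFrame: drop null/duplicate keys.
cleaned = src.toDF().dropna(subset=["order_id"]).dropDuplicates(["order_id"])

# Write the curated output back to S3 as Parquet (bucket is a placeholder).
glue.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(cleaned, glue, "cleaned"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/orders/"},
    format="parquet",
)
```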
Posted 4 days ago
5.0 years
0 Lacs
Delhi, India
On-site
Job Summary: With over 5 years of experience, you possess significant knowledge of developing technology solutions and a track record of collaborating effectively within development teams, preferably using agile development techniques. You have extensive experience with, and understanding of, the communications sector and the need for digital disruption, and a proven track record of successful design and implementation of customer projects, preferably enterprise CRM implementations in the communications domain. Hands-on experience with Salesforce Communications Cloud (OmniStudio, Product Catalog/EPC, CPQ, ESM, Order Management, and Digital Commerce). Experience leading teams in the analysis of complex problems, and the design and implementation of related solutions. Salesforce.com integration experience, including between different business systems as well as working with integration tools, with end-to-end implementation experience in building CRM solutions. A detailed understanding of Web Services, data modeling, and enterprise application integration concepts, including experience with enterprise integration tools (ESBs and/or ETL tools) and common integration design patterns with enterprise systems (e.g., CMS, ERP, HRIS, DWH/DM, SAP). Strong experience with configuration, customization, and programming with Apex APIs, Apex triggers, Apex classes, Apex web services, APIs, AppExchange deployment, Salesforce.com s-controls, and implementing new instances of Salesforce.com from scratch. Additional Salesforce.com experience includes Workflow Alerts and Actions, Approval Workflow, Process Builders, and Lightning Flow. Strong practical deployment knowledge of Lightning, Visualforce, Flex, LWC, OmniScripts, and FlexCards. Ability to define the system landscape, identify gaps between current and desired end-states, and deliver a CRM solution. Understanding of DevOps and release management for large-scale transformation projects. A self-starter, adept at picking up new skills and technologies, and eager to break new ground. Excellent communication skills to communicate with customers, partners, and the internal team.
Skills: Hands-on experience in Salesforce Communications Cloud: OmniStudio, EPC, CPQ, ESM, CLM, Digital Commerce, OM, Salesforce/Apex, Apex design patterns, triggers, Workflow Alerts and Actions, Process Builders, Visualforce, Lightning, LWC. Data modeling, process modeling tools, and best practices. Application design and development background. Platform security, identity and access management, sharing and transparency. Data architecture and management for large, mission-critical volumes. Architectural design patterns. DevOps and release management for large transformation projects. Understanding of mobile and Lightning style frameworks and channels. Familiarity with modern web and mobile technologies (HTML, CSS, JavaScript, Web Components, and others). Project management tools and best practices. (ref:hirist.tech)
Posted 4 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Description: We are seeking a talented and experienced professional for the position of Business Systems Management, Data Engineering (DX-DSS-DEN) at Google Cloud. In this role, you will be instrumental in driving business insights through advanced data engineering practices while managing complex systems to support our cloud operations. You will leverage Google Cloud technologies to ensure data integrity, optimize workflows, and develop scalable solutions that enhance our service offerings. Collaborating closely with cross-functional teams, you will drive initiatives that improve data accessibility and usability, transforming raw data into meaningful insights that influence strategic decisions. As part of a fast-paced environment, you will be responsible for the design, development, and implementation of robust data pipelines, extracting value from vast datasets. You will also focus on creating documentation and providing training to stakeholders, ensuring that our systems are user-friendly and aligned with business requirements. If you are passionate about data management and systems optimization, and you are looking to contribute to cutting-edge technology solutions at a global leader in cloud computing, we invite you to join us and make a significant impact.
Responsibilities: Design, develop, and maintain scalable data pipelines using Google Cloud technologies. Collaborate with cross-functional teams to gather requirements and translate them into technical specifications. Ensure data quality and integrity by developing validation checks and monitoring processes. Implement best practices for data governance and security across all data engineering initiatives. Optimize existing data systems for improved performance and reduced costs. Create comprehensive documentation of data systems and processes for future reference and training. Provide technical support and training to stakeholders on data access and visualization.
Requirements: Bachelor's degree in Computer Science, Data Engineering, or a related field. Proven experience in data engineering, with a focus on cloud platforms, particularly Google Cloud. Strong proficiency in SQL and experience with big data technologies such as Hadoop, BigQuery, or Dataflow. Solid understanding of data modeling, ETL processes, and data warehousing concepts. Experience with programming languages such as Python, Java, or Go. Familiarity with data visualization tools like Tableau or Data Studio. Excellent problem-solving skills and ability to work in a fast-paced, dynamic environment. (ref:hirist.tech)
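To make the BigQuery work named above concrete, here is a small hedged sketch using the google-cloud-bigquery Python client. The project, dataset, and query are invented for illustration, and credentials are assumed to come from the default GCP environment.

```python
# Assumed BigQuery sketch: run an aggregate query and print the rows.
from google.cloud import bigquery

client = bigquery.Client()  # picks up default GCP credentials

query = """
    SELECT DATE(created_at) AS day, COUNT(*) AS events
    FROM `my-project.analytics.events`  -- placeholder project/dataset/table
    GROUP BY day
    ORDER BY day
"""

# result() blocks until the job completes and yields Row objects.
for row in client.query(query).result():
    print(row.day, row.events)
```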
Posted 4 days ago
1.0 - 3.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Machine Learning Engineer
Key Responsibilities: Collaborate with data scientists to support end-to-end ML model development, including data preparation, feature engineering, training, and evaluation. Build and maintain automated pipelines for data ingestion, transformation, and model scoring using Python and SQL. Assist in model deployment using CI/CD pipelines (e.g., Jenkins) and ensure smooth integration with production systems. Develop tools and scripts to support model monitoring, logging, and retraining workflows. Work with data from relational databases (RDS, Redshift) and preprocess it for model consumption. Analyze pipeline performance and model behavior; identify opportunities for optimization and refactoring. Contribute to the development of a feature store and standardized processes to support reproducible data science.
Required Skills & Experience: 1-3 years of hands-on experience in Python programming for data science or ML engineering tasks. Solid understanding of machine learning workflows, including model training, validation, deployment, and monitoring. Proficient in SQL and working with structured data from sources like Redshift and RDS. Familiarity with ETL pipelines and data transformation best practices. Basic understanding of ML model deployment strategies and CI/CD tools like Jenkins. Strong analytical mindset with the ability to interpret and debug data/model issues.
Preferred Qualifications: Exposure to frameworks like scikit-learn, XGBoost, LightGBM, or similar. Knowledge of ML lifecycle tools (e.g., MLflow, Ray). Familiarity with cloud platforms (AWS preferred) and scalable infrastructure. (ref:hirist.tech)
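As a hedged sketch of the train/validate/persist loop this role supports, here is a minimal scikit-learn example (scikit-learn is named in the preferred qualifications). The dataset is synthetic and the artifact name is illustrative.

```python
# Minimal assumed train/validate/persist loop with scikit-learn.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real feature table.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Validate before promoting the artifact.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"validation AUC: {auc:.3f}")

# Persisted artifact a CI/CD job (e.g., Jenkins) could pick up and deploy.
joblib.dump(model, "model.joblib")
```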
Posted 4 days ago
7.0 - 15.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Data Architecture Design: Develop and maintain a comprehensive data architecture strategy that aligns with the business objectives and technology landscape. Data Modeling: Create and manage logical, physical, and conceptual data models to support various business applications and analytics. Database Design: Design and implement database solutions, including data warehouses, data lakes, and operational databases. Data Integration: Oversee the integration of data from disparate sources into unified, accessible systems using ETL/ELT processes. Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, consistency, and security. Technology Evaluation: Evaluate and recommend data management tools, technologies, and best practices to improve data infrastructure and processes. Collaboration: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver effective solutions. Documentation: Create and maintain documentation related to data architecture, data flows, data dictionaries, and system interfaces. Performance Tuning: Optimize database performance through tuning, indexing, and query optimization. Security: Ensure data security and privacy by implementing best practices for data encryption, access controls, and compliance with relevant regulations (e.g., GDPR, CCPA).
Requirements: Helping project teams with solutions architecture, troubleshooting, and technical implementation assistance. Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, Oracle, SQL Server). Minimum 7 to 15 years of experience in data architecture or related roles. Experience with big data technologies (e.g., Hadoop, Spark, Kafka, Airflow). Expertise with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services. Knowledge of data integration tools (e.g., Informatica, Talend, Fivetran, Meltano). Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, Synapse, BigQuery). Experience with data governance frameworks and tools.
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: Associate Director, Data Engineering. Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Lead an organization driven by digital technology and data-backed approaches that supports a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be one of the leaders who have a passion for using data, analytics, and insights to drive decision-making, which will allow us to tackle some of the world's greatest health threats. Our technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, tech centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each tech center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the tech centers.
Role Overview: As the Associate Director, Data Engineering, your role will focus on business intelligence to enhance data-driven decision-making across the organization. This role is crucial for transforming data into valuable insights that drive business performance, support strategic initiatives, and ultimately contribute to our company's mission to use science to improve and save lives around the world.
What Will You Do In This Role: You will develop and ensure that business intelligence activities are efficient and effective, enabling timely access to accurate data for informed decision-making, with a focus on automation, controls, and data quality. Design, develop, and maintain data pipelines to extract data from a variety of sources and populate the data lake and data warehouse. Collaborate with data analysts, data scientists, and machine learning engineers to identify and transform data for ingestion, exploration, and modeling. Work with the data governance team to implement data quality checks and maintain data catalogs. Use orchestration, logging, and monitoring tools to build resilient pipelines. Use test-driven development methodology when building ELT/ETL pipelines. Understand and apply concepts like data lake, data warehouse, lakehouse, data mesh, and data fabric where relevant. Develop data models for cloud data warehouses like Redshift and Snowflake. Develop pipelines to ingest data into cloud data warehouses. You will investigate enterprise data requirements where there is some complexity and ambiguity, and plan your own data modeling and design activities, selecting appropriate techniques and the correct level of detail for meeting assigned objectives. You will define and implement data engineering strategies that align with organizational goals and data governance standards. You will play a lead role in agile engineering and consulting, providing guidance on complex data and unplanned data challenges.
You will collaborate in the formulation of analytics policies, standards, and best practices to ensure consistency and compliance across the organization, and encourage a culture of continuous learning, constructive collaboration, and innovation within the team.
What Should You Have: Bachelor's degree in Computer Science/Engineering, Data Sciences, Bioinformatics, Biostatistics, or any other computational quantitative science. Minimum of 5-7 years of experience developing data pipelines and data infrastructure, ideally within a drug development or life sciences context. Expert in software/data engineering practices (including versioning, release management, deployment of datasets, agile, and related software tools). Strong software development skills in R, Python, SQL, and PySpark. Working knowledge of agile. Strong working knowledge of at least one large-scale data processing technology (e.g., high-performance computing, distributed computing), databases, and underlying technology (cloud or on-prem environments, containerization, distributed storage, and databases). Strong interpersonal and communication skills (verbal and written), effectively bridging scientific and business needs; experience working in a matrix environment. Proven record of delivering high-quality results in quantitative sciences and/or a solid publication track record. Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.
Who We Are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.
What We Look For: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.
Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.
Employee Status: Regular. Flexible Work Arrangements: Not Applicable. Required Skills: Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Software Development, Software Development Life Cycle (SDLC), System Designs. Job Posting End Date: 07/14/2025. A job posting is effective until 11:59:59 PM on the day before the listed job posting end date; please ensure you apply no later than the day before the job posting end date. Requisition ID: R336586
Posted 4 days ago
4.0 - 6.0 years
0 Lacs
Kapra, Telangana, India
On-site
Note: The candidate should be ready to go to Johannesburg (South Africa) for an initial period of 3-4 months, and must have a valid passport.
Job Description: We are seeking a skilled Data Masking Engineer with 4-6 years of experience in SQL Server and Redgate tools to design, implement, and manage data masking solutions. The ideal candidate will ensure sensitive data is protected while maintaining database usability for development, testing, and analytics.
Key Responsibilities: Design and implement data masking strategies for SQL Server databases to comply with security and privacy regulations (GDPR, HIPAA, etc.). Use Redgate Data Masker and other tools to anonymize sensitive data while preserving referential integrity. Develop and maintain masking rules, scripts, and automation workflows for efficient data obfuscation. Collaborate with DBAs, developers, and security teams to identify sensitive data fields and define masking policies. Validate masked data to ensure consistency, usability, and compliance with business requirements. Troubleshoot and optimize masking processes to minimize performance impact on production and non-production environments. Document masking procedures, policies, and best practices for internal teams. Stay updated with Redgate tool updates, SQL Server features, and data security trends.
Required Skills & Qualifications: 4-6 years of hands-on experience with SQL Server databases. Strong expertise in Redgate Data Masker or similar data masking tools (e.g., Delphix, Informatica). Proficiency in T-SQL, PowerShell, or Python for scripting and automation. Knowledge of data privacy laws (GDPR, CCPA) and secure data handling practices. Experience with SQL Server security features (Dynamic Data Masking, Always Encrypted, etc.) is a plus. Familiarity with DevOps/CI-CD pipelines for automated masking in development/test environments. Strong analytical skills to ensure masked data remains realistic for testing.
Preferred Qualifications: Redgate or Microsoft SQL Server certifications. Experience with SQL Server Integration Services (SSIS) or ETL processes. Knowledge of cloud databases (Azure SQL, AWS RDS) and their masking solutions. (ref:hirist.tech)
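For illustration, here is a hedged Python sketch of one idea behind masking with referential integrity preserved: deterministic pseudonymization, where the same input always maps to the same token, so foreign-key joins still hold across masked tables. The key and token format are invented, and Redgate Data Masker itself is a rules-driven product that works differently.

```python
# Assumed sketch: keyed, deterministic pseudonymization of sensitive values.
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # placeholder; keep in a secrets vault in practice


def mask_value(value: str) -> str:
    """Map a sensitive value to a stable, irreversible token.

    HMAC-SHA256 with a secret key makes the mapping deterministic (same
    input -> same token, so joins survive) but infeasible to invert or
    recompute without the key.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return "CUST_" + digest.hexdigest()[:12].upper()


# The same customer key masks identically in every table, so FK joins hold.
assert mask_value("jane@example.com") == mask_value("jane@example.com")
print(mask_value("jane@example.com"))
```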
Posted 4 days ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
About This Role: Want to elevate your career by being a part of the world's largest asset manager? Do you thrive in an environment that fosters positive relationships and recognizes stellar service? Are analyzing complex problems and identifying solutions your passion? Look no further. BlackRock is currently seeking a candidate to become part of our Global Investment Operations Data Engineering team. We recognize that strength comes from diversity, and will embrace your rare skills, eagerness, and passion while giving you the opportunity to grow professionally and as an individual. We know you want to feel valued every single day and be recognized for your contribution. At BlackRock, we strive to empower our employees and actively engage your involvement in our success. With over USD $11.5 trillion of assets under management, we have an extraordinary responsibility: our technology and services empower millions of investors to save for retirement, pay for college, buy a home, and improve their financial well-being. Come join our team and experience what it feels like to be part of an organization that makes a difference.
Technology & Operations: Technology & Operations (T&O) is responsible for the firm's worldwide operations across all asset classes and geographies. The operational functions are aligned with clients, products, fund structures, and our third-party provider networks. Within T&O, Global Investment Operations (GIO) is responsible for the development of the firm's operating infrastructure to support BlackRock's investment businesses worldwide. GIO spans Trading & Market Documentation, Transaction Management, Collateral Management & Payments, Asset Servicing including Corporate Actions and Cash & Asset Operations, and Securities Lending Operations. GIO provides operational service to BlackRock's Portfolio Managers and Traders globally, as well as industry-leading service to our end clients.
GIO Engineering: Working in close partnership with GIO business users and other technology teams throughout BlackRock, GIO Engineering is responsible for developing and providing data and software solutions that support GIO business processes globally. GIO Engineering solutions combine technology, data, and domain expertise to drive exception-based, function-agnostic, service-orientated workflows, data pipelines, and management dashboards.
The Role: GIO Engineering Data Lead. Work to date has been focused on building out robust data pipelines and lakes relevant to specific business functions, along with associated pools and Tableau / Power BI dashboards for internal BlackRock clients. The next stage in the project involves Azure / Snowflake integration and commercializing the offering so BlackRock's 150+ Aladdin clients can leverage the same curated data products and dashboards that are available internally. The successful candidate will contribute to the technical design and delivery of a curated line of data products, related pipelines, and visualizations in collaboration with SMEs across GIO, Technology and Operations, and the Aladdin business.
Responsibilities: Specifically, we expect the role to involve the following core responsibilities, and would expect a successful candidate to be able to demonstrate the following (not in order of priority): Design, develop, and maintain a data analytics infrastructure. Work with a project manager, or drive the project management of, team deliverables. Work with subject matter experts and users to understand the business and their requirements.
Help determine the optimal dataset and structure to deliver on those user requirements. Work within a standard data/technology deployment workflow to ensure that all deliverables and enhancements are provided in a disciplined, repeatable, and robust manner. Work with the team lead to understand and help prioritize the team's queue of work. Automate periodic (daily/weekly/monthly/quarterly or other) reporting processes to minimize or eliminate the associated developer BAU activities. Leverage industry-standard and internal tooling whenever possible in order to reduce the amount of custom code that requires maintenance.
Experience: 3+ years of experience in writing ETL, data curation, and analytical jobs using Hadoop-based distributed computing technologies: Spark/PySpark, Hive, etc. 3+ years of knowledge of, and experience working with, large enterprise databases, preferably cloud-based databases/data warehouses like Snowflake on an Azure or AWS setup. Knowledge of and experience working with data science / machine learning / generative AI frameworks in Python (e.g., Azure OpenAI, Meta, etc.). Knowledge of and experience building reporting and dashboards using BI tools: Tableau, MS Power BI, etc. Prior experience working with source code version management tools like GitHub. Prior experience working with and following agile-based workflow paths and ticket-based development cycles. Prior experience setting up infrastructure and working on big data analytics. Strong analytical skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy. Experience working with SMEs/business analysts, and working with stakeholders for sign-off.
Our Benefits: To help you stay energized, engaged, and inspired, we offer a wide range of benefits including a strong retirement plan, tuition reimbursement, comprehensive healthcare, support for working parents, and Flexible Time Off (FTO) so you can relax, recharge, and be there for the people you care about.
Our Hybrid Work Model: BlackRock's hybrid work model is designed to enable a culture of collaboration and apprenticeship that enriches the experience of our employees, while supporting flexibility for all. Employees are currently required to work at least 4 days in the office per week, with the flexibility to work from home 1 day a week. Some business groups may require more time in the office due to their roles and responsibilities. We remain focused on increasing the impactful moments that arise when we work together in person, aligned with our commitment to performance and innovation. As a new joiner, you can count on this hybrid model to accelerate your learning and onboarding experience here at BlackRock.
About BlackRock: At BlackRock, we are all connected by one mission: to help more and more people experience financial well-being. Our clients, and the people they serve, are saving for retirement, paying for their children's educations, buying homes, and starting businesses. Their investments also help to strengthen the global economy: support businesses small and large; finance infrastructure projects that connect and power cities; and facilitate innovations that drive progress. This mission would not be possible without our smartest investment, the one we make in our employees. It's why we're dedicated to creating an environment where our colleagues feel welcomed, valued, and supported with networks, benefits, and development opportunities to help them thrive.
For additional information on BlackRock, please visit Twitter: @blackrock | LinkedIn: www.linkedin.com/company/blackrock
BlackRock is proud to be an Equal Opportunity Employer. We evaluate qualified applicants without regard to age, disability, family status, gender identity, race, religion, sex, sexual orientation and other protected attributes at law.
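To make the Spark / PySpark requirement in the Experience section concrete, here is a minimal sketch of the kind of curation pipeline the role describes. It is not BlackRock's actual code: the paths, table names, and columns are hypothetical, invented for illustration.

```python
# Minimal PySpark curation job: ingest raw files, apply a basic
# data-quality gate, and publish a partitioned dataset for BI tools.
# All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-trades-demo").getOrCreate()

# Extract: raw CSV drops from an upstream operations system
raw = spark.read.csv("/data/raw/trades/", header=True, inferSchema=True)

# Transform: de-duplicate, filter invalid records, derive a daily aggregate
clean = raw.dropDuplicates(["trade_id"]).filter(F.col("notional") > 0)
daily = (
    clean.groupBy("trade_date", "desk")
         .agg(F.count("*").alias("trade_count"),
              F.sum("notional").alias("total_notional"))
)

# Load: write a partitioned, query-friendly output for dashboards
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "/data/curated/daily_trades/"
)
```

In practice, a job like this would sit inside the standard deployment workflow the posting mentions, with the data-quality rules and aggregation logic driven by the requirements gathered from SMEs.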
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow: people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.
Job Description
Job Title: Senior Business Analyst
Experience Range: 8-12 Years
Location: Chennai, Hybrid
Employment Type: Full-Time
About UPS
UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.
About The Role
We are seeking an experienced Senior Business Analyst to join our project team EDI-OS. The primary function of the EDI-OS application is to translate application data to and from EDI standard format. EDI-OS uses IBM Transformation Extender (ITX) 9.0/10 as the data transformation engine, and ITX Advanced (ITXA) 9.0 for enveloping and de-enveloping functionality. EDI-OS translates primarily Small Package data for multiple business functions including Billing, Visibility, PLD, Brokerage/Customs and Finance. EDI (Electronic Data Interchange) is the systematic exchange of data between internal UPS applications and external customers and vendors using standard data formats such as X12 and EDIFACT (a toy X12 illustration follows this posting).
The Senior Business Analyst will play a pivotal role in bridging the gap between business stakeholders, development teams, and data engineering teams. This role involves eliciting and analyzing requirements, defining business processes, and ensuring alignment of project objectives with strategic goals. The candidate will also work closely with architects, developers, and testers to ensure comprehensive requirements coverage and successful project delivery.
Key Responsibilities
Requirements Elicitation and Analysis: Gather and document business and technical requirements through stakeholder interviews, workshops, and document analysis. Analyze complex data flows and business processes to define clear and concise requirements. Create detailed requirement specifications, user stories, and acceptance criteria for both web application and data engineering components.
Business Process Design and Improvement: Define and document business processes, workflows, and data models. Identify areas for process optimization and automation within web and data solutions. Collaborate with stakeholders to design solutions that align with business objectives.
Stakeholder Communication and Collaboration: Serve as a liaison between business stakeholders, development teams, and data engineering teams. Facilitate communication and collaboration to ensure stakeholder alignment and understanding. Conduct requirement walkthroughs, design reviews, and user acceptance testing sessions.
Solution Validation and Quality Assurance: Ensure requirements traceability throughout the project lifecycle. Validate and test solutions to ensure they meet business needs and objectives. Collaborate with QA teams to define testing strategies and acceptance criteria.
Primary Skills
Business Analysis: Requirement gathering, process modeling, and gap analysis.
EDI Mapping Documentation: User stories, functional specifications, and acceptance criteria.
Agile Methodologies: Experience in Agile/Scrum environments.
Mainframe Environment: Conversant with the mainframe environment; able to log in, examine the file layout, and analyze the EDI layout mapping.
Stakeholder Management: Effective communication and collaboration with cross-functional teams.
Data Analysis: Ability to analyze and interpret complex data flows and business processes.
Secondary Skills
Data Engineering: Understanding of data pipelines in Azure DevOps, ETL processes, and data modeling.
Database: DB2
Query Languages: SQL, PL/SQL
Soft Skills
Excellent communication skills, both verbal and written, for stakeholder engagement.
Strong problem-solving abilities and attention to detail.
Effective time management and organizational capabilities.
Ability to work independently and within a collaborative team environment.
Strong interpersonal skills to engage with cross-functional teams, build relationships, and prioritize work.
Educational And Preferred Qualifications
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Relevant certifications such as:
- Certified Business Analysis Professional (CBAP)
- PMI Professional in Business Analysis (PMI-PBA)
About The Team
As a Senior Business Analyst, you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications.
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
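The posting names the X12 format but does not show it, so here is a toy illustration of its structure: segments terminated by "~", elements separated by "*". The sample interchange below is invented for illustration and has nothing to do with UPS's actual data or the ITX engine.

```python
# Toy X12 walk-through: split an interchange into segments, then split
# each segment into elements. The sample data is entirely made up.
SAMPLE = (
    "ISA*00*          *00*          *ZZ*SENDERID       *ZZ*RECEIVERID     "
    "*240101*1200*U*00401*000000001*0*P*>~"
    "GS*IN*SENDER*RECEIVER*20240101*1200*1*X*004010~"
    "ST*810*0001~"
    "SE*2*0001~GE*1*1~IEA*1*000000001~"
)

def parse_x12(data: str):
    """Split an interchange into segments, then each segment into elements."""
    segments = [s for s in data.split("~") if s]
    return [seg.split("*") for seg in segments]

for elements in parse_x12(SAMPLE):
    # First element is the segment ID (ISA, GS, ST, ...)
    print(elements[0], "->", len(elements) - 1, "elements")
```

Real EDI translation, as the role describes, maps between such standard segments and internal application formats; a production parser would also validate envelope counts and the fixed-width ISA fields.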
Posted 4 days ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Amazon is looking for an enthusiastic, hard-working, and creative candidate to join the Fee Strategy Operations team as a Business Analyst. This position offers an exciting introduction to the Amazon Marketplace and provides a training ground for success. You will be responsible for supporting Fee Strategy within key workstreams such as go-to-market and fee incentives, driving reporting and solving challenging business goals. You will utilize data and develop creative processes to improve your team's performance. You will work directly with Fee Strategy, Product, Tech, Operations, and stakeholder teams to develop scalable, long-term solutions which will have a significant impact on Selling Partners and the respective support teams. Our environment is fast-paced, and requires someone who is a self-starter, detail-oriented, analytical, and comfortable working with multiple teams, partners, and management.
The candidate should have a track record of delivering results, experience processing large amounts of data, and experience with report generation and management. The candidate should be responsible for converting data into actionable business insights. They should be analysis experts who leverage various data platforms and analytical tools to provide timely, meaningful, and consumable information. They build deep contextual and domain knowledge and ensure data quality while building scalable tools. They communicate findings with the most effective and influential methods.
Key job responsibilities
Define the analytical approach; review and vet the analytical approach with stakeholders
Proactively and independently work with stakeholders to construct use cases and associated standardized outputs
Scale data processes and reports; write queries that clients can update themselves; lead work with data engineering for full-scale automation
Have a working knowledge of the data available or needed by the wider business for more complex or comparative analysis
Solve ambiguous analyses with less well-defined inputs and outputs; drive to the heart of the problem and identify root causes
Have the capability to handle large data sets in analysis through the use of additional tools
Derive recommendations from analysis that significantly impact a department, create new processes, or change existing processes
Identify and implement optimal communication mechanisms based on the data set and the stakeholders involved
Actively manage the timeline and deliverables of projects, focusing on interactions in the team
A day in the life
Please refer to Key Job Responsibilities.
About The Team
Please refer to Key Job Responsibilities.
Basic Qualifications
1+ years of tax, finance or a related analytical field experience
2+ years of experience writing complex Excel VBA macros
Bachelor's degree or equivalent
Experience creating complex SQL queries joining multiple datasets, plus familiarity with ETL and data warehouse concepts (a minimal sketch follows this posting)
2+ years of Excel or Tableau (data manipulation, macros, charts and pivot tables) experience
Experience demonstrating problem solving and root cause analysis
Experience with reporting and data visualization tools such as QuickSight / Tableau / Power BI or other BI packages
Experience defining requirements and using data and metrics to draw business insights
Preferred Qualifications
Experience writing complex Excel VBA macros
Experience scripting for automation (e.g., Python, Perl, Ruby)
Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company: ADCI HYD 13 SEZ
Job ID: A2992053
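To illustrate the "complex SQL queries joining multiple datasets" qualification, here is a self-contained sketch using Python's standard-library sqlite3 driver. The tables, columns, and fee data are invented for illustration and do not reflect Amazon's actual schemas.

```python
# Join two datasets and aggregate fee amounts per region and fee type.
# All table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE sellers (seller_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fees (seller_id INTEGER, fee_type TEXT, amount REAL);
    INSERT INTO sellers VALUES (1, 'IN'), (2, 'US');
    INSERT INTO fees VALUES (1, 'referral', 10.0), (1, 'fba', 4.5),
                            (2, 'referral', 7.0);
""")

rows = con.execute("""
    SELECT s.region, f.fee_type,
           COUNT(*)      AS fee_events,
           SUM(f.amount) AS total_amount
    FROM fees f
    JOIN sellers s ON s.seller_id = f.seller_id
    GROUP BY s.region, f.fee_type
    ORDER BY total_amount DESC
""").fetchall()

for region, fee_type, n, total in rows:
    print(f"{region} {fee_type}: {n} events, {total:.2f}")
```

The same join-then-aggregate shape scales up to the warehouse queries the role describes, where the output would typically feed a QuickSight or Tableau dashboard rather than a print loop.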
Posted 4 days ago
The ETL (Extract, Transform, Load) job market in India is thriving with numerous opportunities for job seekers. ETL professionals play a crucial role in managing and analyzing data effectively for organizations across various industries. If you are considering a career in ETL, this article will provide you with valuable insights into the job market in India.
Major tech hubs such as Bengaluru, Hyderabad, Pune, and Chennai are known for their thriving tech industries and often have a high demand for ETL professionals.
The average salary range for ETL professionals in India varies based on experience levels. Entry-level positions typically start at around ₹3-5 lakhs per annum, while experienced professionals can earn upwards of ₹10-15 lakhs per annum.
In the ETL field, a typical career path may include roles such as:
- Junior ETL Developer
- ETL Developer
- Senior ETL Developer
- ETL Tech Lead
- ETL Architect
As you gain experience and expertise, you can progress to higher-level roles within the ETL domain.
Alongside ETL, professionals in this field are often expected to have skills in:
- SQL
- Data Warehousing
- Data Modeling
- ETL Tools (e.g., Informatica, Talend)
- Database Management Systems (e.g., Oracle, SQL Server)
Having a strong foundation in these related skills can enhance your capabilities as an ETL professional.
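To make the ETL concept itself concrete, here is a minimal end-to-end extract-transform-load loop in plain Python. The data, table, and column names are invented for illustration; real pipelines would use a dedicated ETL tool or framework rather than in-memory SQLite.

```python
# Minimal ETL loop: extract from a CSV source, transform (filter bad rows,
# cast types), and load into a SQL store. All data here is made up.
import csv
import io
import sqlite3

RAW = "order_id,amount,currency\n1,100,INR\n2,,INR\n3,250,INR\n"

# Extract: read the raw source into records
rows = list(csv.DictReader(io.StringIO(RAW)))

# Transform: drop rows with missing amounts, cast fields to proper types
clean = [
    {"order_id": int(r["order_id"]), "amount": float(r["amount"])}
    for r in rows if r["amount"]
]

# Load: insert the curated records into the target table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
con.executemany("INSERT INTO orders VALUES (:order_id, :amount)", clean)
print(con.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
```

Every ETL tool mentioned above, from Informatica to Talend, implements this same extract-transform-load pattern at enterprise scale, adding scheduling, lineage, and error handling on top.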
As you explore ETL jobs in India, remember to showcase your skills and expertise confidently during interviews. With the right preparation and a solid understanding of ETL concepts, you can embark on a rewarding career in this dynamic field. Good luck with your job search!