8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are looking for a Cloud Data Engineer to build cloud-based data pipelines and analytics platforms.

Key Responsibilities:
- Develop ETL workflows using cloud data services.
- Manage data storage, lakes, and warehouses.
- Ensure data quality and pipeline reliability.

Required Skills & Qualifications:
- Experience with BigQuery, Redshift, or Azure Synapse.
- Proficiency in SQL, Python, or Spark.
- Familiarity with data lake architecture and batch/streaming.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
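For context, a minimal sketch of the kind of batch ETL workflow this role describes, written in PySpark; the bucket paths and column names are hypothetical illustrations, not part of the posting:

```python
# Minimal batch ETL sketch: extract raw files, cleanse, load curated output.
# All paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw files landed in cloud storage.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: basic cleansing and a derived column as a data-quality step.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_total").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream warehouse ingestion.
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```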
Posted 1 month ago
3 - 6 years
4 - 8 Lacs
Bengaluru
Work from Office
Location: IN - Bangalore | Posted today | Application end date: May 22, 2025 (5 days left to apply) | Job requisition ID: R140300

Company Overview
A.P. Moller - Maersk is an integrated container logistics company and member of the A.P. Moller Group, connecting and simplifying trade to help our customers grow and thrive. With a dedicated team of over 95,000 employees operating in 130 countries, we go all the way to enable global trade for a growing world. From the farm to your refrigerator, or the factory to your wardrobe, A.P. Moller - Maersk is developing solutions that meet customer needs from one end of the supply chain to the other.

About the Team
At Maersk, the Global Ocean Manifest team is at the heart of global trade compliance and automation. We build intelligent, high-scale systems that seamlessly integrate customs regulations across 100+ countries, ensuring smooth cross-border movement of cargo by ocean, rail, and other transport modes. Our mission is to digitally transform customs documentation, reducing friction, optimizing workflows, and automating compliance for a complex web of regulatory bodies, ports, and customs authorities. We deal with real-time data ingestion, document generation, regulatory rule engines, and multi-format data exchange, all while ensuring resilience and security at scale.

Key Responsibilities
- Work with large, complex datasets and ensure efficient data processing and transformation.
- Collaborate with cross-functional teams to gather and understand data requirements.
- Ensure data quality, integrity, and security across all processes.
- Implement data validation, lineage, and governance strategies to ensure data accuracy and reliability.
- Build, optimize, and maintain ETL pipelines for structured and unstructured data, ensuring high throughput, low latency, and cost efficiency.
- Build scalable, distributed data pipelines for processing real-time and historical data.
- Contribute to the architecture and design of data systems and solutions.
- Write and optimize SQL queries for data extraction, transformation, and loading (ETL).
- Advise Product Owners on identifying and managing risks, debt, issues, and opportunities for technical improvement.
- Provide continuous-improvement suggestions for internal code frameworks, best practices, and guidelines.
- Contribute to engineering innovations that fuel Maersk's vision and mission.

Required Skills & Qualifications
- 4+ years of experience in data engineering or a related field.
- Strong problem-solving and analytical skills.
- Experience with Java and the Spring framework.
- Experience building data processing pipelines using Apache Flink and Spark.
- Experience with distributed data lake environments (Dremio, Databricks, Google BigQuery, etc.).
- Experience with Apache Kafka and Kafka Streams.
- Experience working with databases, PostgreSQL preferred, with solid experience writing and optimizing SQL queries.
- Hands-on experience in cloud environments such as Azure (preferred), AWS, or Google Cloud.
- Experience with data warehousing and ETL processes.
- Experience designing and integrating data APIs (REST/GraphQL) for real-time and batch processing.
- Knowledge of Great Expectations, Apache Atlas, or DataHub would be a plus.
- Knowledge of RBAC, encryption, and GDPR compliance would be a plus.

Business skills
- Excellent communication and collaboration skills.
- Ability to translate between technical language and business language, and to communicate with different target groups.
- Ability to understand complex designs.
- Ability to balance competing forces and opinions within the development team.

Personal profile
- Fact-based and result-oriented.
- Ability to work independently and guide the team.
- Excellent verbal and written communication.

Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements. We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or perform a job, please contact us by email.
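As context for the streaming-pipeline skills listed above, a minimal sketch of Kafka-to-data-lake ingestion with Spark Structured Streaming; the broker, topic, and lake paths are hypothetical, and the Kafka connector is assumed to be on the Spark classpath:

```python
# Illustrative real-time ingestion sketch; names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("manifest_stream").getOrCreate()

# Source: subscribe to a Kafka topic carrying manifest events.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "ocean-manifest-events")
         .load()
)

# Kafka delivers bytes; cast the payload before parsing downstream.
payload = events.select(F.col("value").cast("string").alias("json_body"))

# Sink: append to a data-lake path with checkpointing for fault tolerance.
query = (
    payload.writeStream.format("parquet")
           .option("path", "/lake/manifest/events/")
           .option("checkpointLocation", "/lake/manifest/_checkpoints/")
           .start()
)
query.awaitTermination()
```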
Posted 1 month ago
5 - 10 years
7 - 11 Lacs
Mumbai, Hyderabad
Work from Office
EDS Specialist - NAV02KL
Company: Worley | Primary Location: IND-MM-Navi Mumbai | Job: Engineering Design Systems (EDS) | Schedule: Full-time | Employment Type: Employee | Job Level: Experienced | Job Posting: Apr 7, 2025 | Unposting Date: May 30, 2025 | Reporting Manager Title: Manager

We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied and challenging role.

Building on our past. Ready for the future.
Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.

The Role
As an EDS Specialist with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience.

Duties and responsibilities
- The AVEVA Engineering Senior Administrator is responsible for project set-up, maintenance, and support of the system, and shall ensure the set-up, configuration, and deliverables are in line with organization/project/client standards.
- Gain a full understanding of the scope, overall schedule, deliverables, milestones, and coordination procedure.
- Understand, document, and manage the functional requirements (business scope) for an AVEVA Engineering implementation.
- Perform AVEVA Engineering support tasks.
- Perform project implementations including configurations, reports, and gateway.
- Suggest how to improve AVEVA Engineering or optimize its implementation.
- Provide advanced support and troubleshooting.
- Continually seek opportunities to increase end-user satisfaction.
- Promote the use of AVEVA Engineering and the value it brings to projects within the organization.

Qualifications
- Bachelor's degree in Engineering with at least 10 years of experience.
- 5+ years of relevant experience in AVEVA Engineering.
- 5+ years of relevant experience in AVEVA PDMS/E3D administration.
- In-depth working knowledge of configuration and management of AVEVA Engineering, including project administration.
- Fully proficient with the management of Dabacon databases.
- Knowledge of engineering workflow in an EPC environment.
- Strong analytical and problem-solving skills.
- Ability to work in a fast-paced environment.
- Effective oral and written communication skills.
- Experience setting up integration between AVEVA Engineering and other AVEVA and Hexagon design applications.
- Good understanding of the engineering data flow between various engineering applications is a plus.
- Proficient in PML programming.

Good to have:
- Knowledge of writing PML1/2/.NET and C# programs, and Visual Basic .NET.
- Previous experience with AVEVA NET.
- Previous experience with AVEVA CAT/SPEC and ERM.

Moving forward together
We're committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard.
We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
Posted 1 month ago
5 - 8 years
6 - 10 Lacs
Bengaluru
Work from Office
About The Role
Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications which need to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Big Data Developer - Spark, Scala, PySpark
Coding & scripting | Years of Experience: 5 to 12 years | Location: Bangalore | Notice Period: 0 to 30 days

Key Skills:
- Proficient in Spark, Scala, and PySpark coding and scripting
- Fluent in big data engineering development using the Hadoop/Spark ecosystem
- Hands-on experience in Big Data
- Good knowledge of the Hadoop ecosystem
- Knowledge of AWS cloud architecture
- Data ingestion and integration into the Data Lake using Hadoop ecosystem tools such as Sqoop, Spark, Impala, Hive, Oozie, Airflow, etc.
- Fluency in Python and/or Scala
- Strong communication skills

2. Perform coding and ensure optimal software/module development
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Analyze information to recommend and plan the installation of new systems or modifications to an existing system.
- Ensure that code is error-free and has no bugs or test failures.
- Prepare reports on programming project specifications, activities, and status.
- Ensure all issues are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns.
- Compile timely, comprehensive, and accurate documentation and reports as requested.
- Coordinate with the team on daily project status and progress, and document it.
- Provide feedback on usability and serviceability, trace results to quality risk, and report to the concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution
- Capture all requirements and clarifications from the client for better-quality work.
- Take feedback on a regular basis to ensure smooth and on-time delivery.
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Document necessary details and reports formally for a proper understanding of the software, from client proposal to implementation.
- Ensure good quality of interaction with the customer with respect to e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally.

Deliverables (No. | Performance Parameter | Measure):
1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation.
3. MIS & Reporting: 100% on-time MIS & report generation.

Mandatory Skills: Python for Insights. Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
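As a small illustration of the Hadoop/Hive ecosystem work named in the key skills above, a PySpark sketch that assumes a Hive-enabled Spark session; the database and table names are hypothetical:

```python
# Hive-ecosystem sketch: query a Hive table, persist an aggregate for reporting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive_ingest")
    .enableHiveSupport()  # requires a Hive metastore to be configured
    .getOrCreate()
)

# Query an existing Hive table and stage an aggregate for reporting.
daily = spark.sql("""
    SELECT txn_date, COUNT(*) AS txn_count
    FROM sales_db.transactions
    GROUP BY txn_date
""")

# Persist the result as a managed Hive table for downstream consumers.
daily.write.mode("overwrite").saveAsTable("sales_db.daily_txn_summary")
```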
Posted 1 month ago
4 - 9 years
4 - 8 Lacs
Pune
Work from Office
About The Role
Strong experience in the Informatica PowerCenter ETL tool and relational databases. 0.6 to 9 years of experience with the technical analysis, design, development, and implementation of data warehousing and data lake solutions. Strong SQL programming skills. Experience with Teradata is a big plus. Strong UNIX shell scripting experience to support data warehousing solutions. Good experience in the Hadoop Hive ecosystem.

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance.
3. Their work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. Builds the skills and expertise of their software engineering discipline to reach standard software engineer skill expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Skills (competencies): Verbal Communication
Posted 1 month ago
6 - 11 years
15 - 30 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Position: Senior AWS Data Engineer
Experience: 6+ Years
Locations: Pune, Hyderabad, Gurugram, Bangalore
Notice Period: Immediate to 30 Days Preferred

Job Description:
We are hiring a Senior AWS Data Engineer to join our growing team. The ideal candidate will have deep expertise in AWS data services, strong ETL experience, and a passion for solving complex data problems at scale.

Key Responsibilities:
- Design and develop scalable, high-performance data pipelines in AWS
- Work with services like Glue, Redshift, S3, EMR, Lambda, and Athena
- Build and optimize ETL processes for both structured and unstructured data
- Collaborate with cross-functional teams to deliver actionable data solutions
- Implement best practices for data quality, security, and cost-efficiency

Required Skills:
- 6+ years in Data Engineering
- 3+ years working with AWS (Glue, S3, Redshift, Lambda, EMR, etc.)
- Proficient in Python or Scala for data transformation
- Strong SQL skills and experience in performance tuning
- Hands-on experience with Spark or PySpark
- Knowledge of data lake and DWH architecture

Nice to Have:
- Familiarity with Kafka, Kinesis, or real-time data streaming
- Exposure to Terraform or CloudFormation
- Experience with CI/CD tools like Git and Jenkins

How to Apply: Interested candidates can send their resumes to heena.ruchwani@gspann.com
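For context, a hedged sketch of a Glue-based pipeline of the kind this role involves; it assumes execution inside an AWS Glue job (where the awsglue libraries are provided by the runtime), and the catalog database, table, and bucket names are hypothetical:

```python
# AWS Glue job skeleton: catalog read -> curated Parquet on S3.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (names are placeholders).
source = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="clickstream"
)

# Land curated Parquet on S3 for Athena/Redshift consumption.
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/clickstream/"},
    format="parquet",
)
job.commit()
```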
Posted 1 month ago
2 - 7 years
6 - 10 Lacs
Bengaluru
Work from Office
Hello Talented Techie!

We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations.

We are looking for a Data Engineer (AWS, Confluent & SnapLogic):
- Data Integration: Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching.
- Data Processing: Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading.
- Data Storage: Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval.
- Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
- Data Products: Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making.
- Workflow Management: Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing (see the sketch below).
- Real-time Data Streaming: Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing.
- ETL Processes: Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading.
- Monitoring and Logging: Use Splunk for monitoring and logging data processes, ensuring system reliability and performance.

You'd describe yourself as:
- Experience: 3+ years of relevant experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow.
- Technical Skills: Proficiency in AWS services, particularly AWS Glue. Experience with Iceberg tables and Snowflake. Knowledge of Confluent Kafka for real-time data streaming. Familiarity with SnapLogic for ETL processes. Experience with Apache Airflow for workflow management. Understanding of Splunk for monitoring and logging.
- Programming Skills: Proficiency in Python, SQL, and other relevant programming languages.
- Data Modeling: Experience with data modeling and database design.
- Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.

Preferred Qualities:
- Attention to Detail: Meticulous attention to detail, ensuring data accuracy and quality.
- Communication Skills: Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
- Adaptability: Ability to adapt to changing technologies and work in a fast-paced environment.
- Team Player: Strong team player with a collaborative mindset.
- Continuous Learning: Eagerness to learn and stay updated with the latest trends and technologies in data engineering.

Create a better #TomorrowWithUs!

This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide.
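The Airflow orchestration duty above might look something like this minimal DAG; it assumes Apache Airflow 2.4+, and the task bodies, DAG id, and schedule are placeholder assumptions rather than anything from the posting:

```python
# Minimal Airflow DAG sketch: extract -> transform -> load, run daily.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("fetch source data")       # placeholder for a SnapLogic/Kafka pull

def transform():
    print("apply transformations")   # placeholder for a Glue/Spark step

def load():
    print("load into the data lake") # placeholder for an Iceberg/Snowflake write

with DAG(
    dag_id="daily_data_factory_sync",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain
```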
We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
Posted 1 month ago
3 - 5 years
6 - 10 Lacs
Bengaluru
Work from Office
Hello Talented Techie!

We provide support in Project Services and Transformation, Digital Solutions and Delivery Management. We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations.

We are looking for a Data Engineer: a skilled Data Architect/Engineer with strong expertise in AWS and data lake solutions. If you're passionate about building scalable data platforms, this role is for you.

Your responsibilities will include:
- Architect & Design: Build scalable and efficient data solutions using AWS services like Glue, Redshift, S3, Kinesis (Apache Kafka), DynamoDB, Lambda, Glue Streaming ETL, and EMR.
- Real-Time Data Integration: Integrate real-time data from multiple Siemens orgs into our central data lake.
- Data Lake Management: Design and manage large-scale data lakes using S3, Glue, and Lake Formation.
- Data Transformation: Apply transformations to ensure high-quality, analysis-ready data.
- Snowflake Integration: Build and manage pipelines for Snowflake, using Iceberg tables for best performance and flexibility.
- Performance Tuning: Optimize pipelines for speed, scalability, and cost-effectiveness.
- Security & Compliance: Ensure all data solutions meet security standards and compliance guidelines.
- Team Collaboration: Work closely with data engineers, scientists, and app developers to deliver full-stack data solutions.
- Monitoring & Troubleshooting: Set up monitoring tools and quickly resolve pipeline issues when needed.

You'd describe yourself as:
- Experience: 3+ years of experience in data engineering or cloud solutioning, with a focus on AWS services.
- Technical Skills: Proficiency in AWS services such as the AWS API, AWS Glue, Amazon Redshift, S3, Apache Kafka, and Lake Formation. Experience with real-time data processing and streaming architectures.
- Big Data Querying Tools: Solid understanding of big data querying tools (e.g., Hive, PySpark).
- Programming: Strong programming skills in languages such as Python, Java, or Scala for building and maintaining scalable systems.
- Problem-Solving: Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Communication: Strong communication skills, with the ability to work effectively with both technical and non-technical stakeholders.
- Certifications: AWS certifications are a plus.

Create a better #TomorrowWithUs!

This role, based in Bangalore, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Find out more about Siemens careers at: www.siemens.com/careers
Posted 1 month ago
4 - 9 years
14 - 18 Lacs
Noida
Work from Office
Who We Are
Build a brighter future while learning and growing with a Siemens company at the intersection of technology, community and sustainability. Our global team of innovators is always looking to create meaningful solutions to some of the toughest challenges facing our world. Find out how far your passion can take you.

What you need
* BS in an Engineering or Science discipline, or equivalent experience
* 7+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 5 years' experience in a data-focused role
* Experience in data integration (ETL/ELT) development using multiple languages (e.g., Java, Scala, Python, PySpark, SparkSQL)
* Experience building and maintaining data pipelines supporting a variety of integration patterns (batch, replication/CDC, event streaming) and data lake/warehouse in production environments
* Experience with AWS-based data services technologies (e.g., Kinesis, Glue, RDS, Athena, etc.) and Snowflake CDW
* Experience working on larger initiatives building and rationalizing large-scale data environments with a large variety of data pipelines, possibly with internal and external partner integrations, would be a plus
* Willingness to experiment and learn new approaches and technology applications
* Knowledge and experience with various relational databases and demonstrable proficiency in SQL, supporting analytics uses and users
* Knowledge of software engineering and agile development best practices
* Excellent written and verbal communication skills

The Brightly culture
We're guided by a vision of community that serves the ambitions and wellbeing of all people, and our professional communities are no exception. We model that ideal every day by being supportive, collaborative partners to one another, conscientiously making space for our colleagues to grow and thrive. Our passionate team is driven to create a future where smarter infrastructure protects the environments that shape and connect us all. That brighter future starts with us.
Posted 1 month ago
3 - 5 years
11 - 15 Lacs
Hyderabad
Work from Office
Overview
As Senior Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premise data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
- Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other cloud data warehousing technologies.
- Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
- Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.
- Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools.
- Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
- Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework (see the sketch below).
- Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
- Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations.
- Drive collaborative reviews of design, code, data, and security feature implementations performed by data engineers to drive data product development.
- Assist with data planning, sourcing, collection, profiling, and transformation.
- Create source-to-target mappings for ETL and BI developers.
- Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data str/cleansing.
- Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
- Support data lineage and mapping of source system data to canonical data stores for research, analysis, and productization.

Qualifications
- 8+ years of overall technology experience, including at least 4+ years of data modeling and systems architecture.
- 3+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 4+ years of experience developing enterprise data models.
- Experience building solutions in the retail or supply chain space.
- Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
- Experience with integration of multi-cloud services (Azure) with on-premises technologies.
- Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.
- Experience with metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).
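As a concrete, deliberately simplified example of the extensible dimensional modeling this role centers on, a star-schema sketch; sqlite3 is used only to keep the sketch self-contained and runnable, and all entity and column names are hypothetical:

```python
# Star-schema sketch: one conformed dimension plus a fact table at a
# declared grain. Names are illustrative, not from any PepsiCo model.
import sqlite3

conn = sqlite3.connect(":memory:")

# Conformed dimension reused across subject areas (extensibility by design).
conn.execute("""
    CREATE TABLE dim_product (
        product_key INTEGER PRIMARY KEY,
        product_code TEXT NOT NULL,
        category TEXT,
        effective_from TEXT,   -- slowly-changing-dimension tracking
        effective_to TEXT
    )
""")

# Fact table at the grain of one shipment line per product per day.
conn.execute("""
    CREATE TABLE fact_shipments (
        date_key INTEGER NOT NULL,
        product_key INTEGER NOT NULL REFERENCES dim_product(product_key),
        units_shipped INTEGER,
        net_revenue REAL
    )
""")
conn.commit()
```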
Posted 1 month ago
4 - 8 years
10 - 18 Lacs
Kochi, Chennai, Bengaluru
Hybrid
Data warehouse developer
Experience: 3-8 years
Location: Chennai/Kochi/Bangalore

Responsibilities:
- Design, build, and maintain scalable and robust data engineering pipelines using Microsoft Azure technologies such as SQL Azure, Azure Data Factory, and Azure Databricks.
- Develop and optimize data solutions using Azure SQL, PySpark, and PySQL to handle complex data transformation and processing tasks.
- Implement and manage data storage solutions in OneLake and Azure SQL, ensuring data integrity and accessibility.
- Work closely with stakeholders to design and build effective reporting and analytics solutions using Power BI and other analytical tools.
- Collaborate with IT and security teams to integrate solutions within Azure AD and ensure compliance with data security and privacy standards.
- Contribute to the architectural design of database and lakehouse structures, optimizing for performance and scalability.
- Utilize .NET frameworks, where applicable, to enhance data processing and integration capabilities.
- Design and implement OLAP and data warehousing solutions, adhering to best practices in data warehouse design concepts.
- Perform database and query performance tuning and optimization to ensure high performance and reliability.
- Stay updated with the latest technologies and trends in big data, proposing and implementing new tools and technologies to improve data systems and processes.
- Implement unit testing and automation strategies to ensure the reliability and performance of the full-stack application.
- Conduct thorough code reviews, providing constructive feedback to team members and ensuring adherence to coding standards and best practices.
- Collaborate with QA engineers to implement and maintain automated testing procedures, including API testing.
- Work in an Agile environment, participating in sprint planning, daily stand-ups, and retrospective meetings to ensure timely and iterative project delivery.
- Stay abreast of industry trends and emerging technologies to continuously improve skills and contribute innovative ideas.

Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3-8 years of professional experience in data engineering or a related field.
- Profound expertise in SQL, T-SQL, database design, and data warehousing principles.
- Strong experience with Microsoft Azure tools including MS Fabric, SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake.
- Proficient in Python, PySpark, and PySQL for data processing and analytics tasks.
- Experience with Power BI and other reporting and analytics tools.
- Demonstrated knowledge of OLAP, data warehouse design concepts, and performance optimization in database and query processing.
- Knowledge of .NET frameworks is highly preferred.
- Excellent problem-solving, analytical, and communication skills.

Interested candidates can share their resumes at megha.chattopadhyay@aspiresys.com
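To ground the Azure Databricks and lakehouse items above, a minimal PySpark sketch assuming a Databricks cluster with ADLS Gen2 access already configured (where the Delta format is available); the storage account, containers, and columns are hypothetical:

```python
# Databricks/ADLS sketch: raw Parquet in, aggregated Delta out.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw data from an ADLS Gen2 path (abfss scheme; names are placeholders).
sales = spark.read.parquet(
    "abfss://raw@examplelake.dfs.core.windows.net/sales/"
)

# A typical warehouse-style transformation: aggregate to reporting grain.
monthly = (
    sales.withColumn("month", F.date_trunc("month", F.col("sale_date")))
         .groupBy("month", "region")
         .agg(F.sum("amount").alias("total_sales"))
)

# Write Delta output for Power BI / lakehouse consumption.
monthly.write.format("delta").mode("overwrite").save(
    "abfss://curated@examplelake.dfs.core.windows.net/monthly_sales/"
)
```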
Posted 1 month ago
2 - 7 years
9 - 13 Lacs
Kochi
Work from Office
We are looking for a highly skilled and experienced Azure Data Engineer with 2 to 7 years of experience to join our team. The ideal candidate should have expertise in Azure Synapse Analytics, PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL, and other relevant technologies.

### Roles and Responsibilities
Design, develop, and implement data pipelines using Azure Data Factory or Azure Synapse Analytics. Develop and maintain data warehouses or data lakes using various tools and technologies. Work with various types of data sources including flat files, JSON, and databases. Build workflows and pipelines in Azure Synapse Analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Ensure data quality and integrity by implementing data validation and testing procedures.

### Job Requirements
Hands-on experience in Azure Data Factory or Azure Synapse Analytics. Experience in data warehouse or data lake development. Strong knowledge of Spark, Python, and DWH concepts. Ability to build workflows and pipelines in Azure Synapse Analytics. Fair knowledge of Microsoft Fabric & OneLake, SSIS, ADO, and other relevant technologies. Strong analytical, interpersonal, and collaboration skills.

Must have: Azure Synapse Analytics with PySpark, Azure Data Factory, ADLS Gen2, SQL DW, T-SQL. Good to have: Azure Databricks, Microsoft Fabric & OneLake, SSIS, ADO.
Posted 1 month ago
5 - 10 years
13 - 17 Lacs
Kochi
Work from Office
We are looking for a highly skilled and experienced Data Engineering Lead to join our team. The ideal candidate will have 5-10 years of experience in designing and implementing scalable data lake architecture and data pipelines.

### Roles and Responsibility
Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro.

Must-have skills: Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, Azure DevOps, experience developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming (see the sketch below), and integration with business intelligence tools such as Power BI. Good-to-have skills: big data technologies (e.g., Hadoop, Spark), data security.

General skills: Experience with Agile and DevOps methodologies and the software development lifecycle; proactive and responsible for deliverables; escalates dependencies and risks; works with most DevOps tools with limited supervision; completes assigned tasks on time and provides regular status reports; trains new team members; and builds strong relationships with project stakeholders.

### Job Requirements
Minimum 5 years of experience in designing and implementing scalable data lake architecture and data pipelines. Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, Python (PySpark, NumPy, etc.), SQL, ETL, data warehousing, and Azure DevOps. Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming. Familiarity with big data file formats like Parquet and Avro. Ability to work with multi-cultural global teams, including virtually. Knowledge of cloud solutions such as Azure or AWS with DevOps/cloud certifications is desired. Proactive and responsible for deliverables; escalates dependencies and risks. Works with most DevOps tools with limited supervision. Completes assigned tasks on time and provides regular status reports. Trains new team members and builds strong relationships with project stakeholders.
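One possible shape of the Azure Event Hub streaming work listed above, sketched with the azure-eventhub Python SDK; the connection string, hub name, and handler logic are hypothetical placeholders:

```python
# Event Hub consumer sketch: receive events and checkpoint progress.
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # In a real pipeline this would parse and forward to the lake/stream job.
    print(f"partition {partition_context.partition_id}: {event.body_as_str()}")
    partition_context.update_checkpoint(event)

client = EventHubConsumerClient.from_connection_string(
    conn_str="Endpoint=sb://example.servicebus.windows.net/;...",  # placeholder
    consumer_group="$Default",
    eventhub_name="telemetry",
)

with client:
    # starting_position="-1" reads from the beginning of each partition.
    client.receive(on_event=on_event, starting_position="-1")
```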
Posted 1 month ago
8 - 10 years
13 - 17 Lacs
Kochi
Work from Office
We are looking for a skilled Data Engineering Lead with 8 to 10 years of experience, based in Bengaluru. The ideal candidate will have a strong background in designing and implementing scalable data lake architecture and data pipelines.

### Roles and Responsibility
Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Experience in developing streaming pipelines using Azure Event Hub, Azure Stream Analytics, and Spark Streaming. Experience in integrating with business intelligence tools such as Power BI.

### Job Requirements
Strong knowledge of Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, and Azure Databricks. Proficiency in Python (PySpark, NumPy), SQL, ETL, and data warehousing. Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables; escalates dependencies and risks. Works with most DevOps tools with limited supervision, completes assigned tasks on time, and provides regular status reports. Ability to train new team members and build strong relationships with project stakeholders. Knowledge of cloud solutions such as Azure or AWS with DevOps/cloud certifications is desired. Ability to work with multi-cultural global teams virtually.
Posted 1 month ago
2 - 5 years
10 - 14 Lacs
Kochi
Work from Office
We are looking for a skilled ETL Developer with 2 to 5 years of experience to join our team in Bengaluru. The ideal candidate will have hands-on experience in developing data integration routines using Azure Data Factory, Azure Databricks, Scala/PySpark notebooks, Azure PaaS SQL, and Azure BLOB Storage.

### Roles and Responsibility
Convert business and technical requirements into appropriate technical solutions and implement features using Azure Data Factory, Databricks, and Azure Data Lake Store. Implement data integration features using Azure Data Factory, Azure Databricks, and Scala/PySpark notebooks. Set up and maintain Azure PaaS SQL databases and database objects, including Azure BLOB Storage. Create complex queries, including dynamic queries, for data ingestion. Own project tasks and ensure timely completion. Maintain effective communication within the team, with peers, leadership teams, and other IT groups.

### Job Requirements
Bachelor's degree in Computer Science or equivalent. Minimum 2-5 years of experience as a software developer. Hands-on experience in developing data integration routines using Azure Data Factory, Azure Databricks, Scala/PySpark notebooks, Azure PaaS SQL, and Azure BLOB Storage. Experience/knowledge in Azure Data Lake and related services. Ability to take accountability for quality technical deliverables to agreed schedules and estimates. Strong verbal and written communication skills. Must be an outstanding team player. Ability to manage and prioritize workload. Quick learner with a 'can-do' attitude. Flexible and able to quickly adapt to change.
Posted 1 month ago
10 - 15 years
20 - 25 Lacs
Kolkata
Work from Office
We are looking for a skilled Solution Architect with 10 to 15 years of experience to join our team in Bengaluru. The role involves designing and implementing scalable, reliable, and high-performing data architecture solutions.

### Roles and Responsibility
Design and develop data architecture solutions that meet business requirements. Collaborate with stakeholders to identify needs and translate them into technical data solutions. Provide technical leadership and support to software development teams. Define and implement data management policies, procedures, and standards. Ensure data quality and integrity through data cleansing and validation. Develop and implement data security and privacy policies, ensuring compliance with regulations like GDPR and HIPAA. Design and implement data migration plans from legacy systems to the cloud. Build data pipelines and workflows using Azure services such as Azure Data Factory, Azure Databricks, and Azure Stream Analytics. Develop and maintain data models and database schemas aligned with business requirements. Evaluate and select appropriate data storage technologies including Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage. Troubleshoot data-related issues and provide technical support to data users. Stay updated on the latest trends and developments in data architecture and recommend improvements. Coordinate and interact with multiple teams for smooth operations.

### Job Requirements
Proven experience as a Technical/Data Architect with over 10 years of product/solutions development experience. Hands-on experience with software/product architecture, design, development, testing, and implementation. Excellent communication skills, problem-solving aptitude, and organizational and leadership skills. Experience with Agile development methodology and strategic development/deployment methodologies. Understanding of source control (Git/VSTS), continuous integration/continuous deployment, and information security. Hands-on experience with cloud-based (Azure) product/platform development and implementation. Good experience in designing and working with data lakes, data warehouses, and ETL tools (Azure-based). Expertise in Azure Data Analytics with a thorough understanding of Azure Data Platform tools. Hands-on experience and a good understanding of Azure services like Data Factory, Databricks, Synapse, Data Lake Gen2, Stream Analytics, Azure Spark, Azure ML, SQL Server DB, and Cosmos DB. Hands-on experience in information management and business intelligence projects, handling huge client data sets with functions including transfer, ingestion, processing, analysis, and visualization. Excellent communication and problem-solving skills, and the ability to work effectively in a team environment.
Posted 1 month ago
3 - 7 years
9 - 14 Lacs
Kochi
Work from Office
We are looking for a highly skilled and experienced Senior Data Engineer to join our team, with 3-7 years of experience in modern data ecosystems. The ideal candidate will have hands-on proficiency in Informatica CDI, Azure Data Factory (ADF), Azure Data Lake (ADLS), and Databricks.

### Roles and Responsibility
Provide daily Application Management Support for the full data stack, addressing service requests, incidents, enhancements, and changes. Lead and coordinate resolution of complex data integration and analytics issues through thorough root cause analysis. Collaborate with technical and business stakeholders to support and optimize data pipelines, models, and dashboards. Maintain detailed documentation including architecture diagrams, troubleshooting guides, and test cases. Remain flexible for shift-based work or on-call duties depending on client needs and critical business periods. Ensure seamless operation of data platforms and timely resolution of incidents.

### Job Requirements
Bachelor's degree in Computer Science, Engineering, Data Analytics, or a related field, or equivalent work experience. Strong understanding of data governance, performance tuning, and cloud-based data architecture best practices. Excellent stakeholder collaboration skills to translate business needs into scalable technical solutions. Solid understanding of data pipeline management and optimization techniques. Experience integrating data from various sources including ERP, CRM, POS, and third-party APIs. Familiarity with DevOps/CI-CD pipelines in a data engineering context. Certifications such as Informatica Certified Developer, Microsoft Certified: Azure Data Engineer Associate, and Databricks Certified Data Engineer are preferred. We are looking for passionate, proactive problem solvers with a strong client orientation, and professionals eager to learn and grow in a fast-paced, global delivery environment. This is a chance to work alongside a world-class, multidisciplinary team delivering data excellence to global businesses.
Posted 1 month ago
5 - 7 years
9 - 14 Lacs
Noida
Work from Office
Reports to: Program Manager - Analytics BI

Position summary: A Specialist shall work with the development team and be responsible for development tasks as an individual contributor. He/she should be able to mentor the team and help resolve issues, should be technically sound, and should be able to communicate with the client effectively.

Key duties & responsibilities
- Work as Lead Developer on a data engineering project for E2E analytics.
- Ensure project delivery on time; mentor other teammates and guide them.
- Take requirements from the client and communicate with them directly.
- Ensure timely creation of documents for the knowledge base, user guides, and other communication systems.
- Ensure delivery against business needs, team goals, and objectives, i.e., meeting commitments and coordinating the overall schedule.
- Work with large datasets in various formats, perform integrity/QA checks, and handle reconciliation for accounting systems.
- Lead efforts to troubleshoot and solve process- or system-related issues.
- Understand, support, enforce, and comply with company policies, procedures, and Standards of Business Ethics and Conduct.
- Experience working with Agile methodology.

Experience, Skills and Knowledge:
- Bachelor's degree in Computer Science or equivalent experience is required; B.Tech/MCA preferable.
- Minimum 5-7 years' experience.
- Excellent communication skills and a strong commitment to delivering the highest level of service.

Technical Skills
- Expert knowledge and experience working with Spark and Scala.
- Experience with Azure Data Factory, Azure Databricks, and Data Lake.
- Experience working with SQL and Snowflake (see the sketch below).
- Experience with data integration tools such as SSIS and ADF.
- Experience with programming languages such as Python.
- Expert in Astronomer Airflow.
- Experience or exposure to Microsoft Azure Data Fundamentals.

Key competency profile:
- Own your development by implementing and sharing your learnings.
- Motivate each other to perform at our highest level.
- Work the right way by acting with integrity and living our values every day.
- Succeed by proactively identifying problems and solutions for yourself and others.
- Communicate effectively if there is any challenge.
- Demonstrate accountability and responsibility.
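A minimal sketch of the Snowflake work referenced above, using the snowflake-connector-python package; the account, credentials, and reconciliation query are hypothetical placeholders:

```python
# Snowflake sketch: run a reconciliation-style row-count check per load batch.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder
    user="etl_service",          # placeholder
    password="***",              # placeholder; use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="FINANCE",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Integrity/QA check of the kind the duties above describe.
    cur.execute(
        "SELECT load_batch_id, COUNT(*) FROM ledger_entries GROUP BY load_batch_id"
    )
    for batch_id, row_count in cur.fetchall():
        print(batch_id, row_count)
finally:
    conn.close()
```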
Posted 1 month ago
3 - 5 years
5 - 9 Lacs
Bengaluru
Work from Office
The Core AI, BI & Data Platforms team has been established to create, operate, and run the enterprise AI, BI, and data platforms that shorten time to market for reporting, analytics, and data science teams to run experiments, train models, and generate insights, and to evolve and run the CoCounsel application and its shared capability, the CoCounsel AI Assistant. The Enterprise Data Platform aims to provide self-service capabilities for fast and secure ingestion and consumption of data across TR.

At Thomson Reuters, we are recruiting a team of motivated Cloud professionals to transform how we build, manage, and leverage our data assets. The Data Platform team in Bangalore is seeking an experienced Software Engineer with a passion for engineering cloud-based data platform systems. Join our dynamic team as a Software Engineer and take a pivotal role in shaping the future of our Enterprise Data Platform. You will develop and implement data processing applications and frameworks on cloud-based infrastructure, ensuring the efficiency, scalability, and reliability of our systems.

About the Role
In this opportunity as the Software Engineer, you will:
- Develop data processing applications and frameworks on cloud-based infrastructure in partnership with Data Analysts and Architects, with guidance from the Lead Software Engineer.
- Innovate with new approaches to meet data management requirements.
- Make recommendations about platform adoption, including technology integrations, application servers, libraries, and AWS frameworks, documentation, and usability by stakeholders.
- Contribute to improving the customer experience.
- Participate in code reviews to maintain a high-quality codebase.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work closely with product owners, designers, and other developers to understand requirements and deliver solutions.
- Effectively communicate and liaise across the data platform & management teams.
- Stay updated on emerging trends and technologies in cloud computing.

About You
You're a fit for the role of Software Engineer if you meet all or most of these criteria:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of relevant experience in implementing data lakes and data management technologies for large-scale organizations.
- Experience building and maintaining data pipelines with excellent run-time characteristics such as low latency, fault tolerance, and high availability.
- Proficiency in the Python programming language.
- Experience in AWS services and management, including serverless, container, queueing, and monitoring services like Lambda, ECS, API Gateway, RDS, DynamoDB, Glue, S3, IAM, Step Functions, CloudWatch, SQS, and SNS (see the sketch after this posting).
- Good knowledge of consuming and building APIs.
- Business intelligence tools like Power BI.
- Fluency in querying languages such as SQL.
- Solid understanding of software development practices such as version control via Git, CI/CD, and release management.
- Agile development cadence.
- Good critical thinking, communication, documentation, troubleshooting, and collaborative skills.

What's in it for you?
Join us to inform the way forward with the latest AI solutions and address real-world challenges in legal, tax, compliance, and news. Backed by our commitment to continuous learning and market-leading benefits, you'll be prepared to grow, lead, and thrive in an AI-enabled future.
This includes:
- Industry-Leading Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, and a hybrid model, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Culture: Globally recognized and award-winning reputation for inclusion, innovation, and customer focus. Our eleven business resource groups nurture our culture of belonging across the diverse backgrounds and experiences represented across our global footprint.
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
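Purely as illustration of the serverless AWS services this role lists (Lambda, S3, SQS), a hedged sketch of an event-driven ingestion handler; the bucket, queue URL, and payload shape are hypothetical:

```python
# Lambda sketch: on S3 object creation, register the object for downstream ETL.
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/ingest-events"  # placeholder

def handler(event, context):
    """Triggered by an S3 put; enqueues the new object for downstream ETL."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        head = s3.head_object(Bucket=bucket, Key=key)  # fetch object metadata
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps(
                {"bucket": bucket, "key": key, "bytes": head["ContentLength"]}
            ),
        )
    return {"statusCode": 200}
```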
Posted 1 month ago
10 - 15 years
37 - 45 Lacs
Bengaluru
Work from Office
The Thomson Reuters Financial Transformation team is instrumental in implementing and delivering solutions relating to digital finance strategies, enterprise performance strategies, and technology solutions. This position will play a key role in Performance Management projects, including tech-driven transformation with tools like OneStream.

About the Role:
In this opportunity as EPM Architect (OneStream), you will bring:
- 10-15 years of working experience with Enterprise Performance Management solutions implementation and delivery.
- Hands-on experience in EPM tools: OneStream, Hyperion.
- Experience in end-to-end implementation of the OneStream platform, with significant exposure to managing OneStream infrastructure.
- Design and architecture of optimal and scalable solutions.
- Responsibility for managing OneStream infrastructure (environment management, application performance).
- Work with the internal team to ensure OneStream compliance with TR security standards (VPN connection, encryption standards, security dashboards, etc.).
- Application governance across OneStream environments, such as code management and artifact management.
- Driving automation initiatives related to the above-mentioned areas.
- Experience with data integration methodologies for connecting the OneStream platform with other systems like Data Lake, SQL Server, S/4HANA, Power BI, etc.
- Exceptional analytical skills and a passion for the insights that result from those analyses, together with a strong understanding of the data and collection processes needed to fuel that analysis.
- A passion for serving others; works well in a team, is self-motivated, and is a problem-solver.
- Hands-on experience with planning, forecasting, and month-end processes.
- Good to have: Gen AI and Sensible ML knowledge; Power BI and other reporting experience.

About you:
You're a fit for the role of EPM Architect (OneStream) if your background includes:
- Leading Financial Planning and Performance Management projects, including tech-driven transformation with tools like OneStream and Oracle EPM.
- Leading solution design and the development team.
- Leading ongoing management and optimization of the OneStream platform's infrastructure as business requirements evolve.
- Working with the core OneStream project team during implementation of various processes on the platform.
- Providing technical knowledge and expertise in the areas of security, system integration, and application performance management.
- Leading admin activities for OneStream upgrades, patches, and hotfixes.
Posted 1 month ago
10 - 18 years
12 - 22 Lacs
Pune, Bengaluru
Hybrid
Hi, we are hiring for the role of AWS Data Engineer with one of the leading organizations for Bangalore & Pune.
Experience - 10+ years
Location - Bangalore & Pune
CTC - Best in the industry
Job Description / Technical Skills:
PySpark coding skills
Proficiency in AWS data engineering services
Experience in designing data pipelines and data lakes
If interested, kindly share your resume at nupur.tyagi@mounttalent.com
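For candidates gauging the PySpark proficiency this role expects, here is a minimal, hypothetical sketch of the kind of pipeline work described above: ingesting raw files, applying a light transformation, and writing to a curated data lake layer. All paths, schemas, and column names are illustrative assumptions, not details from the posting.

# Minimal PySpark pipeline sketch. Paths and column names are hypothetical
# illustrations only, not details taken from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-pipeline-sketch").getOrCreate()

# Ingest raw CSV files from a (hypothetical) landing zone on S3.
orders = (
    spark.read.option("header", True)
    .csv("s3://example-landing-zone/orders/")
)

# Basic cleansing and enrichment: drop rows missing keys, derive a date column.
cleaned = (
    orders.dropna(subset=["order_id", "customer_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write partitioned Parquet into the curated layer of the data lake.
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-zone/orders/")
)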
Posted 1 month ago
8 - 13 years
25 - 30 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Work from Office
Strong programming skills in Python or R, with good knowledge of data manipulation, analysis, and visualization libraries (pandas, NumPy, Matplotlib, seaborn). Knowledge of machine learning techniques and algorithms. Experience in the FMCG industry is preferred.
Required Candidate profile:
Hands-on knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Proficiency in SQL for data extraction, integration, and manipulation to analyze large datasets.
Perks and benefits: To be disclosed post interview.
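As a rough illustration of the pandas and scikit-learn fluency this role calls for, a minimal model-training sketch might look as follows. The dataset, file name, features, and target are hypothetical placeholders, not details from the posting.

# Minimal pandas + scikit-learn sketch. The CSV file, feature names, and
# target column are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("sales_history.csv")          # hypothetical FMCG sales data
X = df[["price", "promo_flag", "store_size"]]  # illustrative features
y = df["high_demand"]                          # illustrative binary target

# Hold out 20% of rows to estimate out-of-sample accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))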
Posted 1 month ago
2 - 6 years
3 - 6 Lacs
Hyderabad
Work from Office
Role Description
The R&D Data Catalyst Team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and speed to market of advanced precision medications.
The Sr. Data Engineer will be responsible for the end-to-end development of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and will be exceptionally skilled in data analysis and profiling. You will collaborate closely with stakeholders, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.
Roles & Responsibilities
Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
Break down features into work that aligns with the architectural direction runway.
Participate hands-on in pilots and proofs-of-concept for new patterns.
Create robust documentation from data analysis and profiling, along with proposed designs and data logic.
Develop advanced SQL queries to profile and unify data (an illustrative sketch follows this listing).
Develop data processing code in SQL, along with semantic views to prepare data for reporting.
Develop Power BI models and reporting packages.
Design robust data models and processing layers that support both analytical processing and operational reporting needs.
Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments.
Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
Collaborate with stakeholders to define data requirements, functional specifications, and project goals.
Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.
Basic Qualifications and Experience
Master's degree with 4 to 6 years of experience in Product Owner / Platform Owner / Service Owner roles, OR Bachelor's degree with 8 to 10 years of experience in Product Owner / Platform Owner / Service Owner roles.
Functional Skills: Must-Have Skills
Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 6 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
Deep understanding of Power BI, including model design, DAX, and Power Query.
Proven experience designing and implementing data mastering solutions and data governance frameworks.
Expertise in cloud platforms (AWS), data lakes, and data warehouses.
Strong knowledge of ETL processes, data pipelines, and integration technologies.
Strong communication and collaboration skills to work with cross-functional teams and senior leadership.
Ability to assess business needs and design solutions that align with organizational goals.
Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
Success in mentoring and training team members.
Good-to-Have Skills:
Experience in developing differentiated and deliverable solutions.
Experience with human data, ideally human healthcare data.
Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.
Professional Certifications:
ITIL Foundation or other relevant certifications (preferred)
SAFe Agile Practitioner (6.0)
Microsoft Certified: Data Analyst Associate (Power BI) or related certification
Databricks Certified Professional or similar certification
Soft Skills:
Excellent analytical and troubleshooting skills
Deep intellectual curiosity
Highest degree of initiative and self-motivation
Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences
Confident technical leader
Ability to work effectively with global, virtual teams, including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources
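The responsibilities above center on profiling and unifying data on Databricks before it reaches reporting. As a minimal, hypothetical sketch of that kind of work, the snippet below computes per-column null rates on one source table and joins it with a second repository on a shared key; all table and column names are invented for illustration.

# Hypothetical data-profiling sketch on Databricks (PySpark).
# Table and column names are invented for illustration only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

subjects = spark.table("research.study_subjects")  # hypothetical source table

# Profile: null rate per column, as a quick data-quality snapshot.
total = subjects.count()
profile = subjects.select(
    *[
        F.round(F.sum(F.col(c).isNull().cast("int")) / total, 4).alias(c + "_null_rate")
        for c in subjects.columns
    ]
)
profile.show(truncate=False)

# Unify two hypothetical repositories on a shared subject identifier.
unified = subjects.join(
    spark.table("clinical.visit_records"),
    on="subject_id",
    how="outer",
)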
Posted 1 month ago
4 - 6 years
10 - 14 Lacs
Hyderabad
Work from Office
ABOUT THE ROLE
Role Description
The R&D Data Catalyst Team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and speed to market of advanced precision medications.
The Data Architect will be responsible for the end-to-end architecture of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have proven experience creating and surfacing large unified repositories of human data, based on integrations from multiple sources and solutions. You will collaborate closely with stakeholders across departments, including data engineering, business intelligence, and IT teams, to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.
Roles & Responsibilities
Architect scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
Support development planning by breaking down features into work that aligns with the architectural direction runway.
Participate hands-on in pilots and proofs-of-concept for new patterns.
Create robust documentation of architectural direction, patterns, and standards.
Present and train engineers and cross-team collaborators on architecture strategy and patterns.
Collaborate with data engineers to build and optimize ETL pipelines, ensuring efficient data ingestion and processing from multiple sources.
Design robust data models and processing layers that support both analytical processing and operational reporting needs.
Develop and implement best practices for data governance, security, and compliance within Databricks and Power BI environments.
Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
Provide thought leadership and strategic guidance on data architecture, advanced analytics, and data mastering best practices.
Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
Serve as a subject matter expert on Power BI and Databricks, providing technical leadership and mentoring to other teams.
Collaborate with stakeholders to define data requirements, architecture specifications, and project goals.
Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.
Basic Qualifications and Experience
Master's degree with 4 to 6 years of experience in data management and data architecture, OR Bachelor's degree with 6 to 8 years of experience in data management and data architecture.
Functional Skills: Must-Have Skills
Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 7 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management (an illustrative CDC sketch follows this listing).
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
Deep understanding of Power BI, including model design, DAX, and Power Query.
Proven experience designing and implementing data mastering solutions and data governance frameworks.
Expertise in cloud platforms (AWS), data lakes, and data warehouses.
Strong knowledge of ETL processes, data pipelines, and integration technologies.
Strong communication and collaboration skills to work with cross-functional teams and senior leadership.
Ability to assess business needs and design solutions that align with organizational goals.
Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
Success in mentoring and training team members.
Good-to-Have Skills:
Experience in developing differentiated and deliverable solutions.
Experience with human data, ideally human healthcare data.
Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.
Professional Certifications:
ITIL Foundation or other relevant certifications (preferred)
SAFe Agile Practitioner (6.0)
Microsoft Certified: Data Analyst Associate (Power BI) or related certification
Databricks Certified Professional or similar certification
Soft Skills:
Excellent analytical and troubleshooting skills
Deep intellectual curiosity
Highest degree of initiative and self-motivation
Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences
Confident technical leader
Ability to work effectively with global, virtual teams, including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources
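Since the must-have skills emphasize change-data-capture pipelines on Databricks, here is a minimal, hypothetical sketch of applying one CDC batch to a curated table with a Delta Lake MERGE. It assumes a Delta-enabled Spark session; the table names, key, and change-operation column are illustrative assumptions, not details from the posting.

# Hypothetical CDC upsert sketch using Delta Lake on Databricks.
# Assumes a Delta-enabled Spark session; names are illustrative only.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

changes = spark.table("staging.subject_changes")  # hypothetical CDC batch
target = DeltaTable.forName(spark, "curated.subjects")

# Apply deletes, updates, and inserts from the change feed in one MERGE.
(
    target.alias("t")
    .merge(changes.alias("s"), "t.subject_id = s.subject_id")
    .whenMatchedDelete(condition="s.op = 'DELETE'")
    .whenMatchedUpdateAll(condition="s.op = 'UPDATE'")
    .whenNotMatchedInsertAll(condition="s.op = 'INSERT'")
    .execute()
)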
Posted 1 month ago
2 - 6 years
11 - 15 Lacs
Hyderabad
Work from Office
Amgen’s Precision Medicine technology team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility into Amgen’s wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These data include multiomics data (genomics, transcriptomics, proteomics, etc.), clinical study subject measurement and outcome data, images, and specimen inventory data. Our PMED data management, standardization, surfacing, and processing capabilities are pivotal tools in Amgen’s goal to accelerate the speed of discovery and speed to market of advanced precision medications.
The Solution and Data Architect will be responsible for the end-to-end architecture of an enterprise analytics and data mastering solution leveraging Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and high-performing enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions. You will collaborate closely with stakeholders across departments, including data engineering, business intelligence, and IT teams, to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a strong background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.
Roles & Responsibilities
Architect scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation (an illustrative semantic-view sketch follows this listing).
Support development planning by breaking down features into work that aligns with the architectural direction runway.
Participate hands-on in pilots and proofs-of-concept for new patterns.
Create robust documentation of architectural direction, patterns, and standards.
Present and train engineers and cross-team collaborators on architecture strategy and patterns.
Collaborate with data engineers to build and optimize ETL pipelines, ensuring efficient data ingestion and processing from multiple sources.
Design robust data models and processing layers that support both analytical processing and operational reporting needs.
Develop and implement best practices for data governance, security, and compliance within Databricks and Power BI environments.
Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
Provide thought leadership and strategic guidance on data architecture, advanced analytics, and data mastering best practices.
Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
Serve as a subject matter expert on Power BI and Databricks, providing technical leadership and mentoring to other teams.
Collaborate with stakeholders to define data requirements, architecture specifications, and project goals.
Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.
Basic Qualifications and Experience
Master's degree with 6 to 8 years of experience in data management and data solution architecture, OR Bachelor's degree with 8 to 10 years of experience in data management and data solution architecture, OR Diploma with 10 to 12 years of experience in data management and data solution architecture.
Functional Skills: Must-Have Skills
Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
Minimum of 7 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
Deep understanding of Power BI, including model design, DAX, and Power Query.
Proven experience designing and implementing data mastering solutions and data governance frameworks.
Expertise in cloud platforms (AWS), data lakes, and data warehouses.
Strong knowledge of ETL processes, data pipelines, and integration technologies.
Strong communication and collaboration skills to work with cross-functional teams and senior leadership.
Ability to assess business needs and design solutions that align with organizational goals.
Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
Success in mentoring and training team members.
Good-to-Have Skills:
Experience in developing differentiated and deliverable solutions.
Experience with human data, ideally human healthcare data.
Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.
Professional Certifications:
ITIL Foundation or other relevant certifications (preferred)
SAFe Agile Practitioner (6.0)
Microsoft Certified: Data Analyst Associate (Power BI) or related certification
Databricks Certified Professional or similar certification
Soft Skills:
Excellent analytical and troubleshooting skills
Deep intellectual curiosity
Highest degree of initiative and self-motivation
Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences
Confident technical leader
Ability to work effectively with global, virtual teams, including leveraging tools and artifacts to ensure clear and efficient collaboration across time zones
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources
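This listing, like the two before it, asks for semantic layers that prepare unified data for Power BI reporting. A minimal, hypothetical sketch of registering such a semantic view on Databricks follows; every table, view, and column name is invented for illustration.

# Hypothetical sketch: expose a curated semantic view for BI reporting
# on Databricks. All table, view, and column names are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A reporting-friendly view that aggregates visit activity per subject,
# so the Power BI model can query one stable, pre-unified shape.
spark.sql("""
    CREATE OR REPLACE VIEW reporting.v_subject_summary AS
    SELECT
        s.subject_id,
        s.study_id,
        COUNT(v.visit_id) AS visit_count,
        MAX(v.visit_date) AS last_visit_date
    FROM curated.subjects s
    LEFT JOIN curated.visit_records v
        ON v.subject_id = s.subject_id
    GROUP BY s.subject_id, s.study_id
""")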
Posted 1 month ago