Home
Jobs

13,457 ETL Jobs - Page 46

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Role: Manager / Sr Manager - MDM
Experience: 7-12 years
Job Location: Gurgaon/Noida/Bangalore/Hyderabad

Your responsibilities include, but are not limited to: Participate in the overall architecture, capacity planning, development, and implementation of Master Data Management (MDM) solutions. Use MDM technologies and tools across the enterprise to enable the management and integration of master data. Understand the technical landscape, both current and desired future state. Assess the current-state architecture and understand current business processes for managing MDM solutions. Assess the functional and non-functional requirements of the desired future-state MDM solution. Prepare the to-be architecture, including data ingestion, data quality rules, data model, match/merge, workflows, UI, batch integration and real-time services. Extensive hands-on experience in installation and configuration of core Informatica MDM Hub components such as Hub Console, Hub Store, Hub Server, Cleanse/Match Server and Cleanse Adaptor. Ability to deliver full-lifecycle MDM projects for clients, including data modeling, metadata management, design and configuration of matching and merging rules, and design and configuration of standardization, cleansing and deduplication rules. Create design documents and data models addressing business needs for the client MDM environment, and contribute to reusable assets and accelerators for MDM platforms. Will also be involved in integration/transfer of data across multiple systems, streamlining data processes and providing access to MDM data across the enterprise. Make technology decisions related to the client MDM environment, interpret requirements and architect MDM solutions. Provide subject matter expertise on data architecture and data integration implementations across various downstream systems. Coordinate with project managers and participate in project planning and recurring meetings. Collaborate with other team members to review prototypes and develop iterative revisions.

Must-have skills: 5-12 years of experience, with hands-on experience working on MDM projects, hands-on experience in one or more MDM tools such as Informatica or Reltio, and expertise in defining matching/merging and survivorship rules. Strong commercial knowledge of key business processes and compliance requirements within the pharma industry across multiple master data domains such as Physician and Product. Hands-on experience with industry data quality tools such as Informatica IDQ or IBM Data Quality. Must be proficient in reading and understanding data models, with experience working with data and databases. Strong technical experience in Master Data Management, metadata management, data quality, data governance, data integration (ETL) and data security. Experience with all stages of the MDM SDLC: planning, designing, building, deploying and maintaining scalable, highly available, mission-critical enterprise-wide applications for large enterprises. Experience integrating MDM with data warehouses and data lakes. Excellent query-writing skills with working knowledge of Oracle, SQL Server, and other major databases. Good knowledge of SOA/real-time integration, the pub-sub model, and data integration with CRM systems such as Veeva and Siebel. Expertise in engaging with business users to understand business requirements and articulate the value proposition. Experience working with third-party data providers such as IQVIA, SHS and Veeva. Solid experience configuring third-party address standardization tools such as Address Doctor or Loqate. Excellent written and verbal communication skills and innovative presentation skills.

Education: BE/B.Tech, MCA, M.Sc., M.Tech, MBA with 60%+.

Why Axtria: Axtria (www.Axtria.com) is a new-age software product unicorn, the first of its kind to provide cloud software and data analytics to the Life Sciences industry globally. We help Life Sciences companies transform the product commercialization journey to drive sales growth and improve healthcare outcomes for patients. We are acutely aware that our work impacts millions of patients, and we work passionately to improve their lives. Since our founding in 2010, technology innovation has been our winning differentiation, and we continue to leapfrog the competition with platforms that deploy Artificial Intelligence and Machine Learning. Our cloud-based platforms - Axtria DataMAX™, Axtria InsightsMAX™, Axtria SALESIQ™, Axtria CUSTOMERIQ™ and Axtria MarketingIQ - enable customers to efficiently manage data, leverage data science to deliver insights for sales and marketing planning, and manage end-to-end commercial operations. With customers in over 20 countries, Axtria is one of the biggest global commercial solutions providers in the Life Sciences industry. We continue to win industry recognition for growth and are featured in some of the most aspirational lists - INC 5000, Deloitte FAST 500, NJBiz FAST 50, SmartCEO Future 50, Red Herring 100, and several other growth and technology awards. Axtria is looking for exceptional talent to join our rapidly growing global team. People are our biggest perk! Our transparent and collaborative culture offers a chance to work with some of the brightest minds in the industry. Our data analytics and software platforms support data science, commercial operations, and cloud information management. We enable commercial excellence through our cloud-based sales planning and operations platform, and we are leaders in managing data using the latest cloud information management and big data technologies. Axtria Institute, our in-house university, offers the best training in the industry and an opportunity to learn in a structured environment. A customized career progression plan ensures every associate is set up for success and able to do meaningful work in a fun environment. We want our legacy to be the leaders we produce for the industry. 3500+ employees worldwide and growing rapidly, strengthening our product engineering team; we expect to almost double our India headcount in the coming year.
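Purely as an illustration of the match/merge and survivorship concepts this role calls for (not Axtria's or Informatica MDM's actual implementation), here is a minimal PySpark sketch. The column names, the email-based match key and the CRM-first source ranking are assumptions made for the example; a production MDM hub would use fuzzy matching and configurable trust scores rather than a single window rule.

```python
# Hypothetical illustration of MDM-style match/merge survivorship in PySpark.
# Column names, source rankings, and the match key are assumptions for the sketch.
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.appName("mdm_survivorship_sketch").getOrCreate()

physicians = spark.createDataFrame(
    [
        ("Dr. A. Rao", "rao@example.com", "CRM", "2024-01-10"),
        ("Dr Anil Rao", "rao@example.com", "Claims", "2024-03-02"),
        ("Dr. B. Shah", "shah@example.com", "CRM", "2023-11-20"),
    ],
    ["name", "email", "source_system", "last_updated"],
)

# Standardize, then build a simple match key (real MDM tools use fuzzy matching).
standardized = physicians.withColumn("match_key", F.lower(F.trim(F.col("email"))))

# Survivorship: prefer the most trusted source, then the most recent record.
source_rank = F.when(F.col("source_system") == "CRM", 1).otherwise(2)
w = Window.partitionBy("match_key").orderBy(source_rank, F.col("last_updated").desc())

golden = (
    standardized.withColumn("rn", F.row_number().over(w))
    .filter(F.col("rn") == 1)
    .drop("rn")
)
golden.show(truncate=False)
```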

Posted 4 days ago

Apply

5.0 years

0 Lacs

Vadodara, Gujarat, India

Remote

Source: LinkedIn

Internal Job Title: Data & Analytics Development Lead
Business: Lucy Electric
Location: Halol, Vadodara, Gujarat
Job Reference Number: 4076

Job Purpose: Primary point of contact for data engineering, analysis, reporting, and management information from ERP systems and other sources. Maintain and enhance KPIs, metrics, and dashboards delivering actionable insights into business operations to drive continuous improvement. Support multiple business units by enabling comparisons and identifying opportunities for process enhancement. Engage a wide range of stakeholders to lead activities using the Microsoft Power Platform, with a focus on Power BI, to ensure business requirements are met. Contribute to the functional roadmap to align data, reporting, AI and analytics capabilities in the short, medium, and long term.

Job Context: Work closely with the Data & Analytics Solutions Architect and cross-functional teams to ensure a coordinated approach to Business Intelligence delivery in alignment with business priorities and goals. Act as the Data Platform Subject Matter Expert to support the team in advancing processes for agile development, metadata definition, business logic coding, data modelling, unit testing and data product delivery in line with the functional roadmap.

Job Dimensions: This is a hybrid role, with flexible attendance at our office in Vadodara, India, to support business engagement. There is an occasional need to visit other sites and business partners at their premises, globally, to build stakeholder relationships or to attend specific industry events.

Key Accountabilities: These will include: Analyzing complex data sets to uncover trends, patterns, and actionable insights that drive business effectiveness and operational efficiency. Collaborating remotely with cross-functional stakeholders across different countries to confirm business requirements and translate them into analytical solutions. Overseeing the end-to-end data lifecycle, including data collection, cleaning, validation and warehousing, ensuring high data quality and integrity. Carrying out agile backlog management (CI/CD) and coordinating design reviews against best-practice guidelines, with change control and user acceptance testing (UAT). Collaborating with the wider business to promote appropriate use of data & analytics tools through coordinated communications. Delivering training and coaching sessions to enhance data literacy and empower business users to make data-driven decisions. Leading activities according to the analytics roadmap: resolving issues, identifying opportunities, and defining clear success metrics. Supporting the Solutions Architects to foster a strong data culture and ensuring analytics input is embedded in the evaluation and prioritisation of new initiatives. Troubleshooting production issues and coordinating with others to resolve incidents and complete tasks using IT Service Management (ITSM) tools.

Qualifications, Experience & Skills: A bachelor's degree (or equivalent professional qualifications and experience) in a relevant stream. Effective communication skills in the global business language, English. 5+ years' experience in a business analytics or data-driven role using BI tools, preferably Power BI/Fabric, with at least 2 years in a leadership capacity demonstrating strong team management skills. Capability to deconstruct existing reports, validate data, and guide a small team to design and implement BI solutions. Good understanding of handling multiple data sources, such as MS SQL, Dataverse, M365 and Azure data services. Familiarity with Microsoft Dynamics 365 applications or equivalent enterprise-level finance, supply chain operations, customer service and sales business software. A keen investigative mindset for identifying process improvement opportunities through data analysis, providing recommendations for automation and optimisation. Experience in creating well-formed supporting documentation. A proactive approach to meeting service levels for Business as Usual (BAU) support and ad-hoc reporting needs, while working on projects and agile workstreams at the same time. A general understanding of a company's 'value chain' and basic manufacturing industry terminology.

Good-to-Have Skills: ETL/ELT toolsets, Data Lake / OneLake, DAX, Python, T-SQL, C#, REST APIs. Azure DevOps with multistage pipelines, source/version control, Git. Microsoft Power Platform and Fabric administration. Dynamics 365 accreditation or similar ERP functional qualification. Data governance tools and principles. General AI understanding, Microsoft Copilot, Machine Learning (ML) frameworks, near-time and real-time data processing with large datasets.

Behavioural Competencies: Capable people and performance manager, with excellent communication and interpersonal skills. Process change adopter, through positive stakeholder relationship management with internal and external parties. Customer-oriented problem solver, with a desire to share knowledge and support others, demonstrating active listening and empathy towards their views and concerns. Business-focused innovative thinker, able to adapt and achieve collaborative outcomes in a global culture, working with remote support teams.

Lucy Group Ltd is the parent company of all Lucy Group companies. Since its origins in Oxford, UK, over 200 years ago, the Group has grown and diversified. The Group's businesses help to advance the transition to a carbon-free world with infrastructure that enables renewable energy, electric vehicles, smart city management and sustainable living. Today we employ in excess of 1,600 people worldwide, with operations in the UK, Saudi Arabia, UAE, India, South Africa, Brazil, Thailand, Malaysia and East Africa. Lucy Electric is an international leader in intelligent secondary power distribution products and solutions, with remote operation and monitoring. Linking energy generation to consumption, the business specialises in high-performance medium- and low-voltage switchgear for utility, industrial and commercial applications. Key products include Ring Main Units and package substations.

Does this sound interesting? We would love to hear from you. Our application process is quick and easy. Apply today!
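As a small, hedged illustration of the Power BI focus described in the role above, the sketch below queues a dataset refresh through the Power BI REST API from Python. The workspace and dataset IDs are placeholders, and the Azure AD access token would normally be acquired with MSAL; this is not Lucy Electric's actual tooling.

```python
# Hypothetical sketch: trigger a Power BI dataset refresh via the REST API.
# Workspace/dataset IDs and token acquisition are placeholders for the example.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"   # acquired via MSAL in a real setup
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"

url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/refreshes"
)
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"notifyOption": "MailOnFailure"},
    timeout=30,
)
resp.raise_for_status()  # 202 Accepted means the refresh was queued
print("Refresh queued:", resp.status_code)
```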

Posted 4 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description
R3 - Senior Manager, Data Quality Engineer

The Opportunity: Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections and share best practices across the Tech Centers.

Role Overview: As a Data Quality Engineer you will play a pivotal role in ensuring the quality of our data across all domains, which will directly influence patients who use our life-saving products. Key tasks include data integration, ETL (Extract, Transform, Load) processes, and building and managing data quality routines. If you are passionate about data governance and want to make a significant impact, we encourage you to apply.

What Will You Do In This Role: As part of the enterprise Data Quality platform team, you will contribute to our success in the following areas: Work with our divisional partners to onboard their data to our data quality platform and help drive adoption within their teams. Understand divisional requirements and codify them within the data quality platform. Create and maintain data quality rules and checks. Review data quality reports and communicate findings with divisional stakeholders. Train users on the platform, promoting consistent use. Contribute to the development and documentation of standards for platform usage. Perform product engineering and develop automation utilities.

What Should You Have: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience. Hands-on professional who has been in the technology industry for a minimum of 7-11 years as a Data Engineer. Strong SQL skills are a must. Knowledge of data transformation (ETL/ELT) routines. Strong understanding of REST APIs and how to use them programmatically. Experience with Collibra Data Quality and with data visualization tools is an advantage. Knowledge of GitHub and Python is an advantage. Some experience with Spark/PySpark would be good. Good standard of professional communication and building working relationships with customers. Good time-management skills and the ability to work independently. Innovative mindset, willingness to learn new areas and adapt to change. Strong work documentation habits with attention to detail and accuracy. Critical analytical thinking and a problem-solving attitude. Keen sense of urgency and customer focus. Team player spirit.

Who We Are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Data Engineering, Data Visualization, Design Applications, Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Designs, Systems Integration, Testing
Preferred Skills:
Job Posting End Date: 07/12/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.
Requisition ID: R346609
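For illustration of the kind of rule-based data quality checks this role describes (not the Collibra Data Quality configuration itself), here is a minimal Python-plus-SQL sketch. The table, columns and pass/fail logic are invented for the example, and SQLite is used only so the snippet is self-contained and runnable.

```python
# Illustrative only: simple data quality rules expressed as SQL, run from Python.
# Table and column names are made up; a DQ platform would manage rules declaratively.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patients (patient_id TEXT, country TEXT, birth_date TEXT);
    INSERT INTO patients VALUES ('P1', 'IN', '1980-02-01'),
                                ('P2', NULL, '1975-07-09'),
                                ('P2', 'US', '1975-07-09');
""")

rules = {
    "null_country_rate": "SELECT 1.0 * SUM(country IS NULL) / COUNT(*) FROM patients",
    "duplicate_patient_ids": """
        SELECT COUNT(*) FROM (
            SELECT patient_id FROM patients GROUP BY patient_id HAVING COUNT(*) > 1
        ) AS dups
    """,
}

for name, sql in rules.items():
    value = conn.execute(sql).fetchone()[0]
    status = "PASS" if value == 0 else "REVIEW"   # zero findings means the rule passes
    print(f"{name}: {value} -> {status}")
```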

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position Summary: Senior Analyst - SharePoint Admin - HYD

The Microsoft 365 Consultant will be part of a cross-functional team supporting and enhancing the features and functionality of Microsoft 365 and SharePoint on-premises (2019 and Subscription Edition). The role will support all areas of Microsoft 365 including Teams, SharePoint Online, PowerShell, PowerApps, Power Automate, etc. The role will provide third-line support (3LS) to the business in an agile manner and ensure lessons learnt are documented and communicated to second-line support (2LS).

Work you'll do: Providing 3rd-line support for Microsoft 365 and SharePoint services, operating under a best-practice, Agile framework. Designing solutions based on the product backlog and user stories to deliver value to business users. Responding to service outages which affect Deloitte's business operation and reputation, including out-of-hours escalations as part of a 24x7 on-call rota. Solving problems quickly and completely, and communicating them clearly to peers, customers, and management. Working with change management, developers, project teams, business teams and vendors to provide guidance and assist in the enhancement of the Microsoft 365 and SharePoint services. Proactive input into the definition of Microsoft 365 and SharePoint Server technical standards. Working closely with practitioners and vendors to provide technical/application support and assistance for problems related to Microsoft 365, SharePoint and associated solutions. Monitoring and maintaining the performance, availability and security of Microsoft 365 and SharePoint services, with a focus on continuous service improvement. Assisting in maintaining documentation, technology compliance standards and governance.

Responsibilities: Explore ideas and build prototypes. Work collaboratively with teams and departments outside of the POD. Deal with incoming tickets, problems, and requests; liaise with business users, 1LS, 2LS and the wider POD team. Estimate the size of backlog items. Translate backlog items into engineering design and logical units of work (tasks). Evaluate technical feasibility. Implement backlog items. Apply product support and enhancement best practices. Work with relevant teams to evaluate designs to be able to deliver application-specific training material and workshops. Validate requirements for user stories and technical solutions across all platforms/services/solutions. Engage with global, local and extended teams to ensure operational alignment.

The Team: The Group develops custom products, applications and services for Deloitte professionals globally. As a team we are here to delight customers by embracing design thinking, agility, innovation, and a customer-first focus.

Location: Hyderabad
Work Shift Timings: 11 AM to 8 PM

Qualifications: Bachelor of Engineering / Bachelor of Technology

Essential: Extensive SharePoint Server 2019 and SharePoint Online experience in a highly regulated enterprise configuration. Development, governance, testing and deployment of Power Platform features including PowerApps, Power Automate and Power BI. At least 3 years' experience in developing/supporting capabilities on SharePoint using industry-standard tools, including but not limited to SharePoint Designer, InfoPath, Visual Studio and PowerShell. Significant experience in service management, proactive monitoring, issue resolution, continuous improvement and collaboration.
Microsoft 365 SharePoint Online management, configuration and governance Microsoft 365 Groups configuration, architecture and support Advanced PowerShell configuration and development Preferred SPFX framework design and development Microsoft 365 tenant administration and governance Microsoft Graph development and integration Content migration including data cleansing, ETL, data mapping and metadata Front end development technology. Angular, REST API, JSON, JavaScript, CSS, etc. Microsoft 365 Security principles including DLP, AIP, eDiscovery, GDPR Understanding of UI and UX principles Principles To work collaboratively to produce a solution that meets the needs of stakeholders given the resource constraints Collaborate extensively within your team including those outside your specialty Collaborate extensively with colleagues outside of the team within ITS and the wider business. To share by default, all information including “work in progress” To coach others in your skills and experience To continuously expand your knowledge and skills outside your specialty To validate your work as early as possible, working with others to do so To attend co-ordination meetings in person or available technology that enhances collaboration. To proactively look for ways to improve team performance, efficiency and productivity Work within iteration guidelines and seek clarification or provide input where required. Develop standardised support documentation and processes for all deliverables within the supported platforms/application/solutions Input and maintenance of internal service catalogue Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300877 Show more Show less
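As a hedged example of the Microsoft Graph integration work listed under the preferred skills above, the sketch below enumerates SharePoint Online sites from Python. It assumes an access token has already been obtained (for example via MSAL) and uses the standard /sites?search=* Graph endpoint; the token value is a placeholder.

```python
# Hypothetical sketch: list SharePoint Online sites via Microsoft Graph.
# Token acquisition (MSAL, app registration, permissions) is out of scope here.
import requests

ACCESS_TOKEN = "<graph-access-token>"  # placeholder; obtain via MSAL in practice

resp = requests.get(
    "https://graph.microsoft.com/v1.0/sites?search=*",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()

# Each entry describes a site collection visible to the caller.
for site in resp.json().get("value", []):
    print(site.get("displayName"), "->", site.get("webUrl"))
```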

Posted 4 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Company Overview Viraaj HR Solutions is a dynamic HR consultancy dedicated to connecting talent with opportunity. Our mission is to enhance workforce efficiency and support organizations in achieving their goals through strategic recruitment and talent management. Our team values integrity, collaboration, and innovation, and we work diligently to align the right talent with the right role, ensuring a great fit both for clients and candidates. Role Responsibilities Develop and manage data pipelines on the Databricks platform. Optimize and maintain data processing workflows using Apache Spark. Implement ETL processes to integrate data from various sources. Collaborate with data engineers and analysts to design data models. Write optimized SQL queries for data retrieval and analysis. Utilize Python for scripting and automation tasks. Monitor and troubleshoot data processing jobs for performance issues. Work with cloud technologies (Azure/AWS) to enhance data solutions. Conduct data analytics to derive actionable insights. Implement version control mechanisms for code management. Participate in code reviews and contribute to documentation. Stay updated with the latest features and best practices of Databricks. Provide technical support and guidance to team members. Engage in collaborative projects to enhance data quality. Participate in strategy meetings to align data projects with business goals. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of experience in data engineering or development roles. Strong proficiency in the Databricks platform. Experience with Apache Spark and its components. Solid understanding of database management systems and SQL. Knowledge of Python for data manipulation and analytics. Familiarity with ETL tools and data integration techniques. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills and attention to detail. Strong communication skills, both verbal and written. Prior experience in data analysis and visualization is a plus. Understanding of data governance and security best practices. A proactive approach to learning new technologies. Experience in using version control software like Git. Skills: python scripting,databricks,version control (git),sql,cloud technologies,data governance and security,digital : databricks,cloud technologies (azure/aws),etl,apache spark,python,version control,data analytics,data integration Show more Show less
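A minimal sketch of the kind of Databricks/PySpark ETL pipeline this role describes: read raw files, clean and aggregate them, and write a Delta table. The storage path, column names and target table are assumptions for the example, not a real pipeline.

```python
# Minimal Databricks-style PySpark ETL sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

raw = (
    spark.read.option("header", "true")
    .csv("abfss://landing@<storage-account>.dfs.core.windows.net/orders/")
)

cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)
)

daily = cleaned.groupBy(F.to_date("order_ts").alias("order_date")).agg(
    F.sum("amount").alias("revenue"), F.countDistinct("order_id").alias("orders")
)

# On Databricks the natural sink is a Delta table registered in the metastore.
daily.write.format("delta").mode("overwrite").saveAsTable("analytics.daily_orders")
```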

Posted 4 days ago

Apply

2.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Are you a data-driven professional with a knack for business intelligence and sales analytics? Do you excel at transforming complex datasets into actionable insights that drive business success? If yes, Marconix is looking for a Business Analyst (Sales Analyst) to join our team!

Location: Hyderabad
Salary: Up to ₹3 LPA (CTC)
Work Mode: Work from Office

Why Join Us? Work with a fast-growing and innovative sales solutions company. Hands-on experience in business intelligence and sales analytics. Opportunity to work with top-tier clients and industry leaders.

Sales Data Management & Reporting: Transform raw sales data into valuable business insights using BI tools (Tableau, Power BI, etc.). Develop and deploy robust reporting dashboards for tracking performance metrics. Manage ETL (Extract, Transform, Load) processes to streamline data flow. Analyze large datasets to identify market trends and business growth opportunities.

Business Intelligence & Analytics: Develop data-driven strategies to optimize sales performance. Build predictive models to forecast sales trends and customer behavior. Conduct deep-dive analysis on business performance and suggest data-backed improvements. Work closely with stakeholders to understand their requirements and provide customized analytical solutions.

Client & Team Management: Act as the primary liaison between business and technical teams. Gather and analyze business requirements to enhance operational efficiency. Provide strategic recommendations to clients and internal teams based on data insights.

What We Expect from You:
Educational Background: B.Tech / BE / BCA / BSc in Computer Science, Engineering, or a related field.
Experience: 2+ years as a Business Analyst focusing on sales reporting & analytics.
Must-Have Skills: Strong expertise in BI tools (Tableau, Power BI, Oracle BI). Hands-on experience in ETL processes (Informatica, Talend, Teradata, Jasper, etc.). Solid understanding of data modeling, data analytics, and business reporting. Excellent client management & stakeholder communication skills. Strong analytical and problem-solving mindset.
Bonus Skills (Preferred but Not Mandatory): Experience in sales process automation & CRM analytics. Exposure to AI & Machine Learning in sales analytics.
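As a rough illustration of the sales reporting work described above, a small pandas sketch that aggregates raw sales data into monthly KPIs for a BI dashboard to consume. The file and column names are assumptions made for the example.

```python
# Illustrative pandas sketch of sales KPI preparation; file/columns are assumed.
import pandas as pd

sales = pd.read_csv("sales.csv", parse_dates=["invoice_date"])

# Monthly net sales per region.
monthly = (
    sales.assign(month=sales["invoice_date"].dt.to_period("M"))
    .groupby(["region", "month"], as_index=False)["net_value"]
    .sum()
)

# Month-over-month growth within each region.
monthly["mom_growth_pct"] = (
    monthly.sort_values("month").groupby("region")["net_value"].pct_change() * 100
)

# Export for a BI tool (Power BI/Tableau) to pick up.
monthly.to_csv("monthly_sales_kpis.csv", index=False)
print(monthly.head())
```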

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Company Overview Viraaj HR Solutions is a dynamic recruitment agency dedicated to connecting organizations with top-tier talent across various industries. We pride ourselves on our commitment to understanding our clients' unique needs and delivering tailored solutions that drive their success. Our mission is to cultivate a culture of excellence and integrity, and we seek talented individuals who are passionate about contributing to organizational growth. Job Title Informatica CDI (Cloud Data Integration) Location: On-site - India Role Responsibilities Develop and implement Informatica Cloud Data Integration solutions. Design ETL processes to facilitate effective data movement. Collaborate with stakeholders to gather requirements and create data integration strategies. Monitor data quality and performance of integration jobs. Troubleshoot and resolve ETL issues and data discrepancies. Create and maintain documentation for data integration processes. Perform data modeling to enhance data access and usability. Utilize SQL to extract, transform, and load data as needed. Develop dashboards and reports to provide insights from integrated data. Ensure data governance and compliance with industry standards. Work on data migration projects from various sources to the cloud. Assist in the configuration and maintenance of cloud infrastructure. Participate in code reviews and contribute to best practices. Train end-users on data integration tools and processes. Stay updated with the latest trends in cloud data integration technologies. Qualifications Bachelor's degree in Computer Science, Information Technology, or related field. Minimum 3 years of experience in Informatica CDI or related roles. Proficiency in Cloud Data Integration and ETL tools. Strong SQL skills and experience in database management. Knowledge of data modeling techniques and data governance principles. Experience with cloud platforms like AWS, Azure, or Google Cloud. Solid troubleshooting skills and problem-solving abilities. Strong analytical skills and attention to detail. Excellent communication and collaboration skills. Ability to work independently and manage multiple tasks simultaneously. Certifications in Informatica or related technologies are a plus. Familiarity with Agile methodologies is preferred. Understanding of data privacy regulations and best practices. Experience with scripting languages like Python or Shell scripting is beneficial. Demonstrated ability to learn new technologies quickly. Strong commitment to professional development and continuous learning. Skills: troubleshooting,data quality,cloud platforms (aws, azure, google cloud),collaboration,problem-solving,etl processes,agile methodologies,data modeling,sql,analytical skills,cloud infrastructure,sql proficiency,informatica cloud data integration,data governance,scripting languages (python, shell scripting),data integration,communication,cloud data integration Show more Show less

Posted 4 days ago

Apply

6.0 years

0 Lacs

India

Remote

Source: LinkedIn

Job Title: Senior Enterprise Data Architect
📍 Location: Remote (India Only)
🕒 Employment Type: Full-Time

About the Role: We are seeking a highly experienced and strategic Enterprise Data Architect to lead and manage data architecture initiatives aligned with our client's Data & Analytics strategy. In this role, you will be instrumental in defining, governing, and optimizing enterprise data assets to support advanced analytics, secure data usage, and improved decision-making across the organization. You will be responsible for designing scalable data models, supporting ETL/data teams, and driving architectural best practices across systems. This is a hands-on leadership role ideal for someone with deep technical knowledge and strong stakeholder engagement capabilities.

Key Responsibilities: Influence and align with the client's Data & Analytics strategy. Design and implement scalable data models and architectures, including dimensional models, data vaults, star schemas, and snowflake schemas. Create and present data mapping documents to ETL and testing teams for implementation. Utilize Visio, Erwin, or similar tools for logical and physical data modeling. Collaborate with data architects, analysts, and stakeholders to gather and validate requirements. Lead and manage multiple concurrent projects and initiatives from start to completion. Conduct enterprise data modeling and contribute to data governance and architecture practices. Optimize DataHub performance, ensure data integrity, and enhance data security and compliance. Support integration with business intelligence and reporting tools to surface actionable insights.

Required Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field. 6 to 10 years of total IT experience, with at least 3+ years in data architecture or enterprise data modeling roles. Proven expertise in database design, including transactional modeling, dimensional modeling, and data vault architectures. Strong proficiency in data modeling tools (e.g., Erwin, Visio). Experience working with cross-functional teams and managing data initiatives end-to-end.

Preferred Skills & Experience: Hands-on experience implementing data governance and analytics programs. Familiarity with cloud data platforms (AWS, Azure, or GCP) is a plus. Knowledge of Oracle Analytics or other enterprise reporting platforms is desirable.

Soft Skills: Strategic and analytical thinker with strong problem-solving abilities. Effective communicator with the ability to influence stakeholders at all levels. High emotional intelligence, adaptability, and a proactive mindset. Committed to delivery excellence, collaboration, and continuous improvement.

Why Join Us? Join a globally respected team and contribute to transforming enterprise data into actionable insights. You'll play a key role in building scalable data solutions that drive business performance while working in a remote-friendly and innovation-driven environment.
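Purely for illustration of the dimensional modelling this role involves, here is a toy star schema (one fact table, two dimensions) expressed as SQL DDL and created in SQLite so the snippet runs standalone. A real design would target the enterprise warehouse platform and be maintained in a modelling tool such as Erwin; the table and column names are assumptions.

```python
# Toy star schema for illustration only; created in SQLite so it runs anywhere.
import sqlite3

ddl = """
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_id  TEXT NOT NULL,
    segment      TEXT
);
CREATE TABLE dim_date (
    date_key  INTEGER PRIMARY KEY,   -- e.g. 20240131
    full_date TEXT NOT NULL,
    month     INTEGER,
    year      INTEGER
);
CREATE TABLE fact_sales (
    sales_key    INTEGER PRIMARY KEY,
    date_key     INTEGER REFERENCES dim_date(date_key),
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    quantity     INTEGER,
    net_amount   REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print([r[0] for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")])
```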

Posted 4 days ago

Apply

5.0 - 7.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Source: LinkedIn

Role Description
Job Overview: UST is seeking a skilled Snowflake Engineer with 5 to 7 years of experience to join our team. The ideal candidate will play a key role in the development, implementation, and optimization of data solutions on the Snowflake cloud platform. The candidate should have strong expertise in Snowflake, PySpark, and a solid understanding of ETL processes, along with proficiency in data engineering and data processing technologies. This role is essential for designing and maintaining high-performance data pipelines and data warehouses, focusing on scalability and efficient data storage, with a specific emphasis on transforming data using PySpark.

Key Responsibilities:
Snowflake Data Warehouse Development: Design, implement, and optimize data warehouses on the Snowflake cloud platform. Ensure the effective utilization of Snowflake's features for scalable, efficient, and high-performance data storage and processing.
Data Pipeline Development: Develop, implement, and optimize end-to-end data pipelines on the Snowflake platform. Design and maintain ETL workflows to enable seamless data processing across systems.
Data Transformation with PySpark: Leverage PySpark for data transformations within the Snowflake environment. Implement complex data cleansing, enrichment, and validation processes using PySpark to ensure the highest data quality.
Collaboration: Work closely with cross-functional teams to design data solutions aligned with business requirements. Engage with stakeholders to understand business needs and translate them into technical solutions.
Optimization: Continuously monitor and optimize data storage, processing, and retrieval performance in Snowflake. Leverage Snowflake's capabilities for scalable data storage and data processing to ensure efficient performance.

Required Qualifications:
Experience: 5 to 7 years of experience as a Data Engineer, with a strong emphasis on Snowflake. Proven experience in designing, implementing, and optimizing data warehouses on the Snowflake platform. Expertise in PySpark for data processing and analytics.
Technical Skills: Snowflake: strong knowledge of Snowflake architecture, features, and best practices for data storage and performance optimization. PySpark: proficiency in PySpark for data transformation, cleansing, and processing within the Snowflake environment. ETL: experience with ETL processes to extract, transform, and load data into Snowflake. Programming languages: proficiency in Python, SQL, or Scala for data processing and transformations. Data modeling: experience with data modeling techniques and designing efficient data schemas for optimal performance in Snowflake.

Skills: Snowflake, PySpark, SQL, ETL
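As a hedged sketch of the ELT side of this role, the snippet below pushes a MERGE transformation down to Snowflake using the snowflake-connector-python package. The account, credentials and object names are placeholders, and in practice heavier transformations would be done in PySpark as the posting describes.

```python
# Hedged sketch (requires the snowflake-connector-python package and real
# credentials): run an ELT-style MERGE inside Snowflake. Object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

merge_sql = """
    MERGE INTO analytics.core.customers AS tgt
    USING analytics.staging.customers_stg AS src
      ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET tgt.email = src.email, tgt.updated_at = src.loaded_at
    WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
                          VALUES (src.customer_id, src.email, src.loaded_at)
"""

try:
    conn.cursor().execute(merge_sql)   # transformation runs on the warehouse itself
finally:
    conn.close()
```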

Posted 4 days ago

Apply

3.0 - 7.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Built systems that power B2B SaaS products? Want to scale them for real-world impact? Our client is solving some of the toughest data problems in India, powering fintech intelligence, risk engines, and decision-making platforms where structured data is often missing. Their systems are used by leading institutions to make sense of complex, high-velocity datasets in real time. We're looking for a Senior Data Engineer who has helped scale B2B SaaS platforms, built pipelines from scratch, and wants to take complete ownership of data architecture and infrastructure decisions.

What You'll Do: Design, build, and maintain scalable ETL pipelines using Python, PySpark, and Airflow. Architect ingestion and transformation workflows using AWS services like S3, Lambda, Glue, and EMR. Handle large volumes of structured and unstructured data with a focus on performance and reliability. Lead data warehouse and schema design across Postgres, MongoDB, DynamoDB, and Elasticsearch. Collaborate cross-functionally to ensure data infrastructure aligns with product and analytics goals. Build systems from the ground up and contribute to key architectural decisions. Mentor junior team members and guide implementation best practices.

You're a Great Fit If You Have: 3 to 7 years of experience in data engineering, preferably within B2B SaaS/AI environments (mandatory). Strong programming skills in Python and experience with PySpark and Airflow. Strong expertise in designing, building and deploying data pipelines in product environments. Mandatory experience with NoSQL databases. Hands-on experience with AWS data services and distributed data processing tools like Spark or Dask. Understanding of data modeling, performance tuning, and database design. Experience working in fast-paced, product-driven teams and having seen the 0-to-1 journey. Awareness of async programming and how it applies in real-world risk/fraud use cases. Experience mentoring or guiding junior engineers is preferred.

Role Details:
Location: Mumbai (on-site, WFO)
Experience: 3 to 7 years
Budget: 20 to 30 LPA (max)
Notice Period: 30 days or less

If you're from a B2B SaaS background and looking to solve meaningful, large-scale data problems, we'd love to talk. Apply now or reach out directly to learn more.
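A minimal Airflow 2.x DAG skeleton of the kind this role would own, purely as an illustration: the task bodies and the S3 location are placeholders rather than the client's actual pipeline, and the `schedule` argument assumes a recent Airflow 2 release.

```python
# Minimal Airflow 2.x DAG sketch; task bodies and the S3 path are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events, e.g. from s3://<bucket>/raw/")

def transform():
    print("run PySpark/Glue transformation and publish curated tables")

with DAG(
    dag_id="events_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task   # extract must finish before transform starts
```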

Posted 4 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Company Overview Viraaj HR Solutions is a dynamic HR consulting firm dedicated to optimizing human resource management and development. Our mission is to bridge the gap between talent and opportunity, driving growth and success for both our clients and candidates. We foster a culture of collaboration, innovation, and integrity, consistently striving to deliver exceptional service in the evolving landscape of human resources. Role Responsibilities Design, develop, and implement ETL processes using Talend. Collaborate with data analysts and stakeholders to understand data requirements. Perform data cleansing and transformation tasks. Optimize and automate existing data integration workflows. Monitor ETL jobs and troubleshoot issues as they arise. Conduct performance tuning of Talend jobs for efficiency. Document ETL processes and maintain technical documentation. Work closely with cross-functional teams to support data needs. Ensure data integrity and accuracy throughout the ETL process. Stay updated with Talend best practices and upcoming features. Assist in the migration of data from legacy systems to new platforms. Participate in code reviews to ensure code quality and adherence to standards. Engage in user training and support as necessary. Provide post-implementation support for deployed solutions. Evaluate and implement new data tools and technologies. Qualifications 3+ years of experience as a Talend Developer. Strong understanding of ETL principles and practices. Proficiency in Talend Open Studio. Hands-on experience with SQL and database management. Familiarity with data warehousing concepts. Experience using Java for Talend scripting. Knowledge of APIs and web services. Effective problem-solving skills. Strong communication and collaboration abilities. Ability to work independently and as part of a team. Attention to detail and accuracy in data handling. Experience with job scheduling tools. Ability to manage multiple priorities and deadlines. Knowledge of data modeling concepts. Experience in documentation and process mapping. Skills: data cleansing,data warehousing,job scheduling tools,problem solving,team collaboration,sql,documentation,digital : talend open studio,talend,data transformation,data modeling,performance tuning,web services,api development,java,apis,data integration,etl processes Show more Show less

Posted 4 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Company Overview Viraaj HR Solutions is a dynamic HR consultancy dedicated to connecting talent with opportunity. Our mission is to enhance workforce efficiency and support organizations in achieving their goals through strategic recruitment and talent management. Our team values integrity, collaboration, and innovation, and we work diligently to align the right talent with the right role, ensuring a great fit both for clients and candidates. Role Responsibilities Develop and manage data pipelines on the Databricks platform. Optimize and maintain data processing workflows using Apache Spark. Implement ETL processes to integrate data from various sources. Collaborate with data engineers and analysts to design data models. Write optimized SQL queries for data retrieval and analysis. Utilize Python for scripting and automation tasks. Monitor and troubleshoot data processing jobs for performance issues. Work with cloud technologies (Azure/AWS) to enhance data solutions. Conduct data analytics to derive actionable insights. Implement version control mechanisms for code management. Participate in code reviews and contribute to documentation. Stay updated with the latest features and best practices of Databricks. Provide technical support and guidance to team members. Engage in collaborative projects to enhance data quality. Participate in strategy meetings to align data projects with business goals. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of experience in data engineering or development roles. Strong proficiency in the Databricks platform. Experience with Apache Spark and its components. Solid understanding of database management systems and SQL. Knowledge of Python for data manipulation and analytics. Familiarity with ETL tools and data integration techniques. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills and attention to detail. Strong communication skills, both verbal and written. Prior experience in data analysis and visualization is a plus. Understanding of data governance and security best practices. A proactive approach to learning new technologies. Experience in using version control software like Git. Skills: python scripting,databricks,version control (git),sql,cloud technologies,data governance and security,digital : databricks,cloud technologies (azure/aws),etl,apache spark,python,version control,data analytics,data integration Show more Show less

Posted 4 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Key Responsibilities: Design and manage distributed system architectures using Azure services such as Event Hub, Data Factory, ADLS Gen2, Cosmos DB, Synapse, Databricks, APIM, Function App, Logic App, and App Services. Implement infrastructure as code (IaC) using ARM templates and Terraform for consistent, automated environment provisioning. Deploy and manage containerized applications using Docker and orchestrate them with Azure Kubernetes Service (AKS). Monitor and troubleshoot infrastructure and applications using Azure Monitor, Log Analytics, and Application Insights. Design and implement disaster recovery strategies, backups, and failover mechanisms to ensure business continuity. Automate provisioning, scaling, and infrastructure management to maintain system reliability and performance. Manage Azure environments across development, test, pre-production, and production stages. Monitor and define job flows, set up proactive alerts, and ensure smooth ETL operations in Azure Data Factory and Databricks. Conduct root cause analysis and implement fixes for job failures. Work with Jenkins and Azure DevOps for automating CI/CD pipelines and deployment workflows. Write automation scripts using Python and Shell scripting for various operational tasks. Monitor VM performance metrics (CPU, memory, OS, network, storage) and recommend optimizations. Collaborate with development teams to improve application reliability and performance. Work in Agile environments with a proactive and results-driven mindset.

Required Skills: Expertise in Azure services for data engineering and application deployment. Strong knowledge of Terraform, ARM templates, and CI/CD tools. Hands-on experience with Databricks, Data Factory, and Event Hub. Familiarity with Python, Shell scripting, Jenkins, and Azure DevOps. Deep understanding of container orchestration using AKS. Experience in monitoring, alerting, and log analysis for cloud-native applications.
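As a small illustration of the Python automation and VM-metrics monitoring mentioned above, here is a sketch using the third-party psutil package; the thresholds are arbitrary examples, not a production alerting policy, and a real setup would feed these metrics into Azure Monitor rather than print them.

```python
# Simple illustrative monitoring script (assumes the psutil package is installed).
import psutil

THRESHOLDS = {"cpu_percent": 85.0, "memory_percent": 90.0, "disk_percent": 80.0}

metrics = {
    "cpu_percent": psutil.cpu_percent(interval=1),        # sampled over 1 second
    "memory_percent": psutil.virtual_memory().percent,
    "disk_percent": psutil.disk_usage("/").percent,
}

for name, value in metrics.items():
    state = "ALERT" if value >= THRESHOLDS[name] else "ok"
    print(f"{name}: {value:.1f}% [{state}]")
```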

Posted 4 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Key Responsibilities: Design and manage distributed system architectures using Azure services such as Event Hub, Data Factory, ADLS Gen2, Cosmos DB, Synapse, Databricks, APIM, Function App, Logic App, and App Services. Implement infrastructure as code (IaC) using ARM templates and Terraform for consistent, automated environment provisioning. Deploy and manage containerized applications using Docker and orchestrate them with Azure Kubernetes Service (AKS). Monitor and troubleshoot infrastructure and applications using Azure Monitor, Log Analytics, and Application Insights. Design and implement disaster recovery strategies, backups, and failover mechanisms to ensure business continuity. Automate provisioning, scaling, and infrastructure management to maintain system reliability and performance. Manage Azure environments across development, test, pre-production, and production stages. Monitor and define job flows, set up proactive alerts, and ensure smooth ETL operations in Azure Data Factory and Databricks. Conduct root cause analysis and implement fixes for job failures. Work with Jenkins and Azure DevOps for automating CI/CD pipelines and deployment workflows. Write automation scripts using Python and Shell scripting for various operational tasks. Monitor VM performance metrics (CPU, memory, OS, network, storage) and recommend optimizations. Collaborate with development teams to improve application reliability and performance. Work in Agile environments with a proactive and results-driven mindset.

Posted 4 days ago

Apply

7.0 - 9.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Source: LinkedIn

The purpose of this role is to understand, model and facilitate change in a significant area of the business and technology portfolio, either by line of business, geography or specific architecture domain, whilst building the overall architecture capability and knowledge base of the company.

Job Description:

Role Overview: We are seeking a highly skilled and motivated Cloud Data Engineering Manager to join our team. The role is critical to the development of a cutting-edge reporting platform designed to measure and optimize online marketing campaigns. The GCP Data Engineering Manager will design, implement, and maintain scalable, reliable, and efficient data solutions on Google Cloud Platform (GCP). The role focuses on enabling data-driven decision-making by developing ETL/ELT pipelines, managing large-scale datasets, and optimizing data workflows. The ideal candidate is a proactive problem-solver with strong technical expertise in GCP, a passion for data engineering, and a commitment to delivering high-quality solutions aligned with business needs.

Key Responsibilities:
Data Engineering & Development: Design, build, and maintain scalable ETL/ELT pipelines for ingesting, processing, and transforming structured and unstructured data. Implement enterprise-level data solutions using GCP services such as BigQuery, Dataform, Cloud Storage, Dataflow, Cloud Functions, Cloud Pub/Sub, and Cloud Composer. Develop and optimize data architectures that support real-time and batch data processing. Build, optimize, and maintain CI/CD pipelines using tools like Jenkins, GitLab, or Google Cloud Build. Automate testing, integration, and deployment processes to ensure fast and reliable software delivery.
Cloud Infrastructure Management: Manage and deploy GCP infrastructure components to enable seamless data workflows. Ensure data solutions are robust, scalable, and cost-effective, leveraging GCP best practices.
Infrastructure Automation and Management: Design, deploy, and maintain scalable and secure infrastructure on GCP. Implement Infrastructure as Code (IaC) using tools like Terraform. Manage Kubernetes clusters (GKE) for containerized workloads.
Collaboration and Stakeholder Engagement: Work closely with cross-functional teams, including data analysts, data scientists, DevOps, and business stakeholders, to deliver data projects aligned with business goals. Translate business requirements into scalable, technical solutions while collaborating with team members to ensure successful implementation.
Quality Assurance & Optimization: Implement best practices for data governance, security, and privacy, ensuring compliance with organizational policies and regulations. Conduct thorough quality assurance, including testing and validation, to ensure the accuracy and reliability of data pipelines. Monitor and optimize pipeline performance to meet SLAs and minimize operational costs.

Qualifications and Certifications:
Education: Bachelor's or master's degree in Computer Science, Information Technology, Engineering, or a related field.
Experience: Minimum of 7 to 9 years of experience in data engineering, with at least 4 years working on GCP cloud platforms. Proven experience designing and implementing data workflows using GCP services like BigQuery, Dataform, Cloud Dataflow, Cloud Pub/Sub, and Cloud Composer.
Certifications: Google Cloud Professional Data Engineer certification preferred.

Key Skills:
Mandatory Skills: Advanced proficiency in Python for data pipelines and automation. Strong SQL skills for querying, transforming, and analyzing large datasets. Strong hands-on experience with GCP services, including Cloud Storage, Dataflow, Cloud Pub/Sub, Cloud SQL, BigQuery, Dataform, Compute Engine and Kubernetes Engine (GKE). Hands-on experience with CI/CD tools such as Jenkins, GitHub or Bitbucket. Proficiency in Docker, Kubernetes, and Terraform or Ansible for containerization, orchestration, and infrastructure as code (IaC). Familiarity with workflow orchestration tools like Apache Airflow or Cloud Composer. Strong understanding of Agile/Scrum methodologies.
Nice-to-Have Skills: Experience with other cloud platforms like AWS or Azure. Knowledge of data visualization tools (e.g., Power BI, Looker, Tableau). Understanding of machine learning workflows and their integration with data pipelines.
Soft Skills: Strong problem-solving and critical-thinking abilities. Excellent communication skills to collaborate with technical and non-technical stakeholders. Proactive attitude towards innovation and learning. Ability to work independently and as part of a collaborative team.

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
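For illustration of the BigQuery-centred work described above, a hedged sketch using the google-cloud-bigquery client library: the project, dataset and table names are placeholders, and credentials are assumed to come from the environment (for example GOOGLE_APPLICATION_CREDENTIALS).

```python
# Hedged sketch: run an aggregation in BigQuery from Python.
# Project/dataset/table names are placeholders; credentials come from the environment.
from google.cloud import bigquery

client = bigquery.Client(project="<gcp-project-id>")

sql = """
    SELECT campaign_id, DATE(event_ts) AS event_date, COUNT(*) AS clicks
    FROM `<gcp-project-id>.marketing.raw_events`
    WHERE event_type = 'click'
    GROUP BY campaign_id, event_date
"""

for row in client.query(sql).result():   # blocks until the query job finishes
    print(row.campaign_id, row.event_date, row.clicks)
```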

Posted 4 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Company Overview Viraaj HR Solutions is a leading recruitment consultancy focused on connecting businesses with top talent across various industries. Our mission is to deliver exceptional HR solutions tailored to the unique needs of our clients, contributing to their success through strategic hiring practices. We value integrity, commitment, and excellence in our work culture, ensuring a supportive environment for both our clients and candidates. Role Responsibilities Design and implement robust data pipelines using Python and Pyspark. Develop and maintain data models that support organizational analytics and reporting. Work closely with data scientists and analysts to understand data requirements and translate them into technical specifications. Integrate and maintain Snowflake for data warehousing solutions. Ensure data quality and integrity through effective ETL processes. Conduct data profiling and performance tuning to optimize system performance. Collaborate with cross-functional teams to define data architecture standards and best practices. Participate in the creation of documentation for data flows and data management best practices. Monitor data pipelines and troubleshoot issues as they arise. Implement security measures to protect sensitive data information. Stay updated with the latest trends and technologies in data engineering. Assist in migrating existing data solutions to cloud-based infrastructures. Support continuous improvement initiatives around data management. Provide technical guidance and mentorship to junior data engineers. Participate in code reviews and adhere to best practices in software development. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. Minimum of 3 years of experience in data engineering or a related role. Proficient in Python programming and Pyspark framework. Experience with Snowflake or similar cloud data warehousing platforms. Strong understanding of ETL principles and data integration techniques. Solid understanding of database design and data modeling concepts. Excellent SQL skills for querying databases and data analysis. Familiarity with cloud platforms like AWS, Azure, or Google Cloud. Ability to work collaboratively in cross-functional teams. Strong analytical and problem-solving skills. Excellent communication and interpersonal skills. Experience with version control systems (e.g., Git). Knowledge of Agile methodologies and project management. A commitment to continuous learning and professional development. Ability to work on multiple projects simultaneously and meet deadlines. Skills: data architecture,etl,git,problem-solving skills,snowflake,python,data engineering,data warehousing,cloud computing,data integration,sql,data modeling,sql proficiency,pyspark,agile methodologies,cloud platforms (aws, azure, google cloud) Show more Show less

Posted 4 days ago

Apply

3.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Linkedin logo

Our corporate activities are growing rapidly, and we are currently seeking a full-time, office-based Data Engineer to join our Information Technology team. This position will work on a team to accomplish tasks and projects that are instrumental to the company's success. If you want an exciting career where you use your previous expertise and can develop and grow your career even further, then this is the opportunity for you.
Responsibilities
Utilize skills in development areas including data warehousing, business intelligence, and databases (Snowflake, ANSI SQL, SQL Server, T-SQL);
Support programming/software development using Extract, Transform, and Load (ETL) and Extract, Load, and Transform (ELT) tools (dbt, Azure Data Factory, SSIS);
Design, develop, enhance, and support business intelligence systems, primarily using Microsoft Power BI;
Collect, analyze, and document user requirements;
Participate in the software validation process through development, review, and/or execution of test plans/cases/scripts;
Create software applications by following the software development lifecycle process, which includes requirements gathering, design, development, testing, release, and maintenance;
Communicate with team members regarding projects, development, tools, and procedures; and
Provide end-user support, including setup, installation, and maintenance, for applications.
Qualifications
Bachelor's Degree in Computer Science, Data Science, or a related field;
3+ years of experience in Data Engineering;
Knowledge of developing dimensional data models and awareness of the advantages and limitations of Star Schema and Snowflake schema designs;
Solid ETL development and reporting knowledge based on an intricate understanding of business processes and measures;
Knowledge of the Snowflake cloud data warehouse, Fivetran data integration, and dbt transformations is preferred;
Knowledge of Python is preferred;
Knowledge of REST APIs;
Basic knowledge of SQL Server databases is required;
Knowledge of C# and Azure development is a bonus; and
Excellent analytical, written, and oral communication skills.
Medpace Overview
Medpace is a full-service clinical contract research organization (CRO). We provide Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. Our mission is to accelerate the global development of safe and effective medical therapeutics through a scientific and disciplined approach. We leverage local regulatory and therapeutic expertise across all major areas, including oncology, cardiology, metabolic disease, endocrinology, central nervous system, anti-viral, and anti-infective. Headquartered in Cincinnati, Ohio, Medpace employs more than 5,000 people across 40+ countries.
Why Medpace?
People. Purpose. Passion. Make a Difference Tomorrow. Join Us Today.
The work we've done over the past 30+ years has positively impacted the lives of countless patients and families who face hundreds of diseases across all key therapeutic areas. The work we do today will improve the lives of people living with illness and disease in the future.
Medpace Perks
Flexible work environment
Competitive compensation and benefits package
Competitive PTO packages
Structured career paths with opportunities for professional growth
Company-sponsored employee appreciation events
Employee health and wellness initiatives
Awards
Recognized by Forbes as one of America's Most Successful Midsize Companies in 2021, 2022, 2023, and 2024
Continually recognized with CRO Leadership Awards from Life Science Leader magazine based on expertise, quality, capabilities, reliability, and compatibility
What To Expect Next
A Medpace team member will review your qualifications and, if interested, you will be contacted with details for next steps.
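For context on the ELT-style Snowflake work described in the responsibilities above, here is a minimal sketch of a transformation step run inside the warehouse, maintaining a customer dimension with a MERGE. The account, credentials, and table names are assumptions, not details from the posting; a dbt project would express the same logic as a model.

# Sketch of an ELT step maintaining a customer dimension in Snowflake.
# Credentials, database, schema, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # assumed account identifier
    user="ELT_USER",
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="MARTS",
)

merge_sql = """
MERGE INTO dim_customer AS tgt
USING staging.stg_customers AS src
    ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
    tgt.customer_name = src.customer_name,
    tgt.country       = src.country,
    tgt.updated_at    = CURRENT_TIMESTAMP()
WHEN NOT MATCHED THEN INSERT (customer_id, customer_name, country, updated_at)
    VALUES (src.customer_id, src.customer_name, src.country, CURRENT_TIMESTAMP());
"""

cur = conn.cursor()
cur.execute(merge_sql)   # the transformation runs inside the warehouse (ELT)
cur.close()
conn.close()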

Posted 4 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Overview
This position is for a Lead Data Engineer in the Commercial Data as a Service group. In this position you will enjoy being responsible for helping define and maintain the data systems key to delivering successful outcomes for our customers. You will be hands-on and will work closely to guide a team of Data Engineers in the associated data maintenance, integrations, enhancements, loads, and transformation processes for the organization. This key individual will work closely with Data Architects to design and implement solutions and ensure successful implementations.
Role
Leads initiatives to build and maintain database technologies, environments, and applications, seeking opportunities for improvements and efficiencies
Architects internal data solutions as part of the full stack, including data modeling and integration with file-based as well as event-driven upstream systems
Writes SQL statements and procedures to optimize SQL execution and query development
Effectively utilizes various tools such as Spark (Scala, Python), NiFi, Spark Streaming, and Informatica for data ETL
Manages the deployment of data solutions that are optimally standardized, and database updates, to meet project deliverables
Leads database security posture, which includes proactively identifying security risks and implementing both risk mitigation plans and control functions
Oversees the resolution of chronic complex problems to prevent future data performance issues
Supports process improvement efforts to identify and test opportunities for automation and/or reduction in time to deployment
Responsible for complex design (in conjunction with Data Architects), development, and performance and system testing, and provides functional guidance and advice to experienced engineers
Mentors junior staff by providing training to develop technical skills and capabilities across the team
All about you
Experience developing a specialization in a particular functional area (e.g., modeling, data loads, transformations, replication, performance tuning, logical and physical database design, performance troubleshooting, data replication, backup and recovery, and data security) leveraging Apache Spark, NiFi, Databricks, Snowflake, Informatica, and streaming solutions
Experience leading a major work stream or multiple smaller work streams for a large domain initiative, often providing technical guidance and advice to project team members
Experience creating deliverables within the global database technology domains and sub-domains, supporting cross-functional leaders in the technical community to derive new solutions
Experience supporting automation and/or cloud delivery efforts; may perform financial and cost analysis
Experience in database architecture or other relevant IT experience
Experience in leading business system application and database architecture design, influencing technology direction across a broad range of IT areas
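As an illustration of the event-driven ingestion this role mentions (Spark Streaming alongside file-based loads), the sketch below reads events from Kafka with Spark Structured Streaming and lands them as Parquet. The broker, topic, schema, and paths are assumptions, and the spark-sql-kafka package must be on the classpath.

# Sketch of event-driven ingestion with Spark Structured Streaming from Kafka.
# Broker, topic, schema, and paths are placeholders for illustration.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", StringType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker
         .option("subscribe", "orders")                      # assumed topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
         .withColumn("event_time", F.to_timestamp("event_time"))
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/data/lake/orders")               # assumed landing path
          .option("checkpointLocation", "/data/chk/orders")
          .outputMode("append")
          .start()
)
query.awaitTermination()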

Posted 4 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Greetings from TATA Consultancy Services!!
TCS is hiring for Data Modeler - Architect
Experience Range: 10+ Years
Job Location: Hyderabad (Adibatla), Chennai
Job Summary: We are looking for a detail-oriented and analytical Data Modeler to design, implement, and maintain logical and physical data models that support business intelligence, data warehousing, and enterprise data integration needs. The ideal candidate will work closely with business analysts, data architects, and software engineers to ensure data is organized effectively and supports scalable, high-performance applications.
Required Skills:
• Strong understanding of relational, dimensional, and NoSQL data modeling techniques.
• Proficient in data modeling tools (e.g., Erwin, Enterprise Architect, PowerDesigner, SQL Developer Data Modeler).
• Experience with advanced SQL and major database platforms (e.g., Oracle, SQL Server, PostgreSQL, MySQL).
• Familiarity with cloud data platforms (e.g., AWS Redshift, Google BigQuery, Azure SQL, Snowflake).
• Excellent communication and documentation skills.
• Knowledge of data governance and data quality principles.
• Experience with data warehousing concepts and tools (e.g., ETL pipelines, OLAP cubes).
• Familiarity with industry standards such as CDM (Common Data Model), FHIR, or other domain-specific models.
Key Responsibilities:
• Design and develop conceptual, logical, and physical data models.
• Translate business requirements into data structures that support analytics, reporting, and operational needs.
• Work with stakeholders to understand and document data needs and flows.
• Optimize and maintain existing data models for performance and scalability.
• Ensure data models are consistent with architectural guidelines and standards.
• Develop and maintain metadata repositories and data dictionaries.
• Collaborate with data architects and engineers to implement models within databases and data platforms.
• Assist in data quality analysis and improvement initiatives.
• Document data models and data mapping specifications.
Regards,
Bodhisatwa Ray
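For readers less familiar with the dimensional modeling this posting centers on, here is a minimal star-schema sketch: one fact table with foreign keys into two dimensions. The table and column names are illustrative only; sqlite3 is used simply to keep the example self-contained and runnable.

# Minimal star-schema sketch: a sales fact keyed to date and product dimensions.
# Names are invented for illustration; sqlite3 keeps the example self-contained.
import sqlite3

ddl = """
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20240115
    full_date    TEXT NOT NULL,
    month        INTEGER,
    year         INTEGER
);

CREATE TABLE dim_product (
    product_key  INTEGER PRIMARY KEY,
    product_code TEXT NOT NULL,
    category     TEXT
);

CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    product_key  INTEGER NOT NULL REFERENCES dim_product(product_key),
    quantity     INTEGER,
    net_amount   REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
print("star schema tables:",
      [r[0] for r in conn.execute(
          "SELECT name FROM sqlite_master WHERE type='table'")])
conn.close()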

Posted 4 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Position: Data Architect
Roles And Responsibilities
10+ years of relevant work experience, including previous experience leading data-related projects in the field of Reporting and Analytics.
Design, build, and maintain scalable data lakes and data warehouses in the cloud (GCP).
Expertise in gathering business requirements, analysing business needs, and defining the BI/DW architecture to support and help deliver technical solutions to complex business and technical requirements.
Creating solution prototypes and participating in technology selection; performing POCs and technical presentations.
Architect, develop, and test scalable data warehouse and data pipeline architectures in cloud technologies (GCP).
Experience in SQL and NoSQL DBMSs such as MS SQL Server, MySQL, PostgreSQL, DynamoDB, Cassandra, and MongoDB.
Design and develop scalable ETL processes, including error handling.
Expert in query and programming languages: MS SQL Server, T-SQL, PostgreSQL, MySQL, Python, R.
Preparing data structures for advanced analytics and self-service reporting using MS SQL, SSIS, and SSRS.
Writing scripts for stored procedures, database snapshot backups, and data archiving.
Experience with any of these cloud-based technologies:
Power BI/Tableau, Azure Data Factory, Azure Synapse, Azure Data Lake
AWS Redshift, Glue, Athena, AWS QuickSight
Google Cloud Platform
Good To Have
Agile development environment pairing DevOps with CI/CD pipelines
AI/ML background
Skills: data warehousing, aws redshift, data architect, mongodb, gcp, analytics, data lake, agile, powerbi, etl, business intelligence, data, r, ci/cd, sql, dynamodb, azure data factory, nosql, cassandra, devops, python, tableau, t-sql
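The posting explicitly calls for ETL processes with error handling. The sketch below shows one common pattern: malformed records are diverted to a reject file instead of failing the whole load. File paths and the row layout are assumptions for illustration.

# Sketch of an ETL step with row-level error handling and a reject file.
# Paths and column names are placeholders.
import csv
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def transform(row: dict) -> dict:
    # Raises KeyError/ValueError on malformed input, caught by the loop below.
    return {
        "order_id": row["order_id"].strip(),
        "amount": round(float(row["amount"]), 2),
    }

def run(src_path: str, out_path: str, reject_path: str) -> None:
    loaded, rejected = 0, 0
    with open(src_path, newline="") as src, \
         open(out_path, "w", newline="") as out, \
         open(reject_path, "w", newline="") as rej:
        writer = csv.DictWriter(out, fieldnames=["order_id", "amount"])
        writer.writeheader()
        reject_writer = csv.writer(rej)
        for line_no, row in enumerate(csv.DictReader(src), start=2):
            try:
                writer.writerow(transform(row))
                loaded += 1
            except (KeyError, ValueError) as exc:
                reject_writer.writerow([line_no, repr(exc)])   # dead-letter the bad row
                rejected += 1
    log.info("loaded=%d rejected=%d", loaded, rejected)

if __name__ == "__main__":
    run("orders_raw.csv", "orders_clean.csv", "orders_rejects.csv")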

Posted 4 days ago

Apply

8.0 years

0 Lacs

India

Remote

Linkedin logo

The Red Hat Technology Sales team is looking for an AI Platform Sales Specialist (SSP) to join our team in India. This is a highly technical sales support role that will ensure successful Go-To-Market (GTM) execution for the Red Hat AI strategy across the India market. The role requires a deep understanding of how to deliver productive outcomes using the Red Hat AI portfolio, and of enabling direct sales account team peers and partners to do the same with customers. The ideal candidate will have a background leading development teams in delivering applications to production, consulting, and/or driving sales of both generative and predictive AI technologies. The role involves customer-facing conversations, from front-line developers to the CxO level, as well as evangelism. Much of this role requires consulting on the GTM with Products and Engineering and mentoring in the execution of winning sales plays in partnership with Cloud Service Providers, SIs, and 3rd-party ISVs.
What Will You Do
Consult on and enable successful execution of Data Science AI/ML and Generative AI solutions in the assigned geo, focusing on the Red Hat AI GTM.
Educate customers and stakeholders on the value of Red Hat's Data Science AI/ML and Generative AI offerings, highlighting their applications and benefits.
Lead and enable on hyper-competitive sales situations for the Data Science AI/ML and Generative AI portfolio, leveraging expertise in Predictive Analytics and Generative AI development practices.
Serve as a geo-level, bi-directional conduit of GTM for Red Hat AI between sales, products, engineering, marketing, enablement, and customer success, ensuring seamless communication and collaboration.
Work closely with Red Hat AI product teams, providing feedback and identifying gaps in the offering to drive continuous improvement.
Build lasting relationships with cloud providers and other 3rd parties for joint GTM, focusing on Data Science AI/ML and Generative AI solutions.
Interface with regional sales leadership for account planning and GTM, ensuring alignment and effective execution of Red Hat AI sales strategies.
What Will You Bring
8+ years of experience with Data Science and/or AI/ML application development practices and architectures.
Subject matter expertise in Data Science and Data Engineering concepts, Predictive Analytics, and/or Generative AI development practices.
Working knowledge of full-stack AI development and deployment tools, including feature stores, vector databases, embedding models, ETL tooling, and modern programming languages/frameworks.
Experience with the Kubernetes ecosystem, including competing technologies, and Cloud Service Providers with established account relationships.
Understanding of DevOps, agile, and similar concepts.
Working knowledge of full-stack application development and deployment tools, including those for build, deployment, project management, automation, source code management, testing, quality/security, monitoring (APM), and configuration, to deliver applications to production.
Executive presence with public speaking skills.
Expert practitioner of the Challenger Sales Model.
Entrepreneurial mindset.
Preferred Qualifications
Computer Science or other technical degree with an MBA, emphasizing a strong foundation in technical and business acumen.
Previous experience as a sales engineer, technical sales, or similar role, highlighting expertise in driving sales and technical conversations in the Data Science AI/ML and Generative AI space.
Experience working with cloud providers and other 3rd parties for joint GTM, leveraging relationships to drive indirect sales and co-selling opportunities.
Background in leading development teams in delivering applications to production, consulting, and/or driving sales of Red Hat AI solutions, demonstrating a deep understanding of customer challenges and the ability to propose credible solutions.
Strong relationships with Cloud Service Provider account teams, executives, and industry partners, with the ability to grow and maintain these relationships to drive co-selling and indirect sales opportunities.
Technical and/or sales role experience working directly for hyperscalers or in the partner ecosystem.
About Red Hat
Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.
Inclusion at Red Hat
Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.
Equal Opportunity Policy (EEO)
Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.
Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.
Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

Posted 4 days ago

Apply

10.0 years

0 Lacs

Kochi, Kerala, India

On-site

Linkedin logo

The Data Architect is responsible for defining and leading the Data Architecture, Data Quality, and Data Governance for ingesting, processing, and storing millions of rows of data per day. This hands-on role helps solve real big data problems. You will be working with our product, business, and engineering stakeholders, understanding our current ecosystems, and then building consensus to design solutions, write code and automation, define standards, establish best practices across the company, and build world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker passionate about building systems at scale.
Role
Design, implement, and lead Data Architecture, Data Quality, and Data Governance
Define data modeling standards and foundational best practices
Develop and evangelize data quality standards and practices
Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data
Drive the successful adoption of organizational data utilization and self-serviced data platforms
Create and maintain critical data standards and metadata that allow data to be understood and leveraged as a shared asset
Develop standards and write template code for sourcing, collecting, and transforming data for streaming or batch processing
Design data schemas, object models, and flow diagrams to structure, store, process, and integrate data
Provide architectural assessments, strategies, and roadmaps for data management
Apply hands-on subject matter expertise in the architecture and administration of Big Data platforms and Data Lake technologies (AWS S3/Hive), and experience with ML and Data Science platforms
Implement and manage industry-best-practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow, Hive Catalog, Redshift, Kafka, Kubernetes, Docker, and CI/CD
Translate big data and analytics requirements into data models that will operate at large scale and high performance, and guide the data analytics engineers on these data models
Define templates and processes for the design and analysis of data models, data flows, and integration
Lead and mentor Data Analytics team members in best practices, processes, and technologies in data platforms
Qualifications
B.S. or M.S. in Computer Science, or an equivalent degree
10+ years of hands-on experience in Data Warehouse, ETL, Data Modeling & Reporting
7+ years of hands-on experience in productionizing and deploying Big Data platforms and applications
Hands-on experience working with relational/SQL databases, distributed columnar data stores/NoSQL databases, time-series databases, Spark Streaming, Kafka, Hive, Delta, Parquet, Avro, and more
Extensive experience in understanding a variety of complex business use cases and modeling the data in the data warehouse
Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, and Tableau or similar tools
Proven experience in building a custom Enterprise Data Warehouse or implementing tools like Data Catalogs, Spark, Tableau, Kubernetes, and Docker
Knowledge of infrastructure requirements such as networking, storage, and hardware optimization, with hands-on experience in Amazon Web Services (AWS)
Strong verbal and written communication skills; must work effectively across internal and external organizations and virtual teams
Demonstrated industry leadership in the fields of Data Warehousing, Data Science, and Big Data related technologies
Strong understanding of distributed systems and container-based development using the Docker and Kubernetes ecosystem
Deep knowledge of data structures and algorithms
Experience working in large teams using CI/CD and agile methodologies
Unique ID -
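To make the data quality standards work above more concrete, here is a minimal sketch of a quality gate run before a curated table is published: null-rate, duplicate, and range checks on a Parquet dataset. The path, column names, and thresholds are assumptions invented for illustration.

# Sketch of a simple data quality gate on a curated Parquet table.
# Path, columns, and thresholds are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("s3a://example-lake/curated/orders/")  # assumed path
total = max(df.count(), 1)

checks = {
    "order_id_null_rate": df.filter(F.col("order_id").isNull()).count() / total,
    "duplicate_rate": 1 - df.dropDuplicates(["order_id"]).count() / total,
    "negative_amount_rate": df.filter(F.col("amount") < 0).count() / total,
}

# Fail the pipeline run if any check breaches its (assumed) 1% threshold
failures = [name for name, value in checks.items() if value > 0.01]
if failures:
    raise ValueError(f"Data quality checks failed: {failures} -> {checks}")
print("All data quality checks passed:", checks)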

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Required Skills & Qualifications
5+ years of experience in report development and migration
Strong hands-on experience with Oracle Reports (6i/10g/11g)
Proficient in JasperReports, Jaspersoft Studio, and JRXML templates
Strong knowledge of SQL and PL/SQL
Working knowledge of Java and the JasperReports API
Experience configuring JDBC data sources and working with complex datasets
Familiarity with JasperReports Server: deployment, user management, and scheduling
Experience with Git or other version control tools
Good communication skills and ability to work with business stakeholders
Preferred Qualifications
Experience with iReport Designer (legacy support)
Exposure to CI/CD for report deployment
Knowledge of ETL tools or data transformation processes
Oracle and/or JasperReports certification is a plus

Posted 4 days ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description
Job Title: Power BI Developer
Location: Chennai/Hyderabad/Bangalore
Candidate Specification: Any Graduate, minimum 6+ years relevant experience
Job Description
Strong proficiency in DAX, Power Query (M), and SQL.
Experience in data modelling and creating relationships within datasets.
Understanding of ETL processes and data warehousing concepts.
Skills Required
Role: Power BI Developer
Industry Type: IT/Computers - Software
Functional Area: IT-Software
Required Education: Graduation
Employment Type: Full Time, Permanent
Key Skills: POWER BI, POWER PLATFORM, POWER APPS, AWS, AZURE
Other Information
Job Code: GO/JC/174/2025
Recruiter Name: Sheena Rakesh

Posted 4 days ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

It was nice visiting your profile on the portal. One of our top MNC clients has a critical job opening for an Artificial Intelligence (AI) Engineer for the Pune location. Please apply with relevant profiles.
Required Skill: Artificial Intelligence (AI) Engineering
Years of Experience: 5 to 12 years
CTC: Can be discussed
Notice Period: Immediate joiners or 15-20 days, or can be discussed
Work Location: Pune
Interview: Online
Candidates should have AI experience.
Job Description
About the Role: In this role, you will be at the forefront of developing and deploying cutting-edge AI solutions that directly impact our business. You will leverage your expertise in data and machine learning engineering, natural language processing (NLP), computer vision, and agentic AI to build scalable and robust systems that drive innovation and efficiency. You will be responsible for the entire AI lifecycle, from data acquisition and preprocessing to model development, deployment, and monitoring.
Responsibilities
Data and ML Engineering:
Design and implement robust data pipelines to extract, transform, and load (ETL) data from diverse structured and unstructured sources (e.g., databases, APIs, text documents, images, videos).
Develop and maintain scalable data storage and processing solutions.
Perform comprehensive data cleaning, validation, and feature engineering to prepare data for machine learning models.
Build and deploy machine learning models for a variety of business applications, including but not limited to process optimization and enterprise efficiency.
Web Scraping and Document Processing: Implement web scraping solutions and utilize document processing libraries to extract and process data from various sources.
NLP and Computer Vision: Develop and implement NLP models for tasks such as text classification, sentiment analysis, entity recognition, and language generation. Implement computer vision models for image classification, object detection, and image segmentation.
Agentic AI Development
Design and develop highly scalable, production-ready code for agentic AI systems.
Implement and integrate agentic AI solutions into existing workflows to automate complex tasks and improve decision-making.
Develop and maintain agentic systems for data wrangling, supply chain optimization, and enterprise efficiency projects.
Work with LLMs and other related technologies to create agentic workflows.
Integrate NLP and computer vision capabilities into agentic workflows to enhance their ability to understand and interact with diverse data sources.
Model Development and Deployment
Design and develop machine learning models and algorithms to solve simplified business problems.
Evaluate and optimize model performance through rigorous testing and experimentation.
Deploy and monitor machine learning models in production environments.
Implement best practices for model versioning, reproducibility, and explainability.
Optimize and deploy NLP and computer vision models for real-time inference.
Communication and Collaboration
Clearly articulate complex technical concepts to both technical and non-technical audiences.
Demonstrate live coding proficiency and effectively explain your code and design decisions.
Collaborate with cross-functional teams, including product managers, data scientists, and software engineers.
Document code, models, and processes for knowledge sharing and maintainability.
Qualifications
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Natural Language Processing, Computer Vision, or a related field.
Proven experience in developing and deploying machine learning models, NLP models, computer vision models, and data pipelines.
Strong programming skills in Python and experience with relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn, pandas, NumPy, Hugging Face Transformers, OpenCV, Pillow).
Experience with cloud computing platforms (e.g., AWS, GCP, Azure).
Experience with database technologies (e.g., SQL, NoSQL).
Experience with agentic AI development and LLMs is highly desirable.
Excellent problem-solving and analytical skills.
Product engineering background.
Ability to demonstrate live coding proficiency.
Experience in productionizing ML models.
Preferred Qualifications
Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Experience with MLOps practices and tools.
Experience with building RAG systems.
Experience with deploying and optimizing models for edge devices.
Experience with video processing and analysis.
This job is provided by Shine.com
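As a small illustration of the NLP work this posting describes (text classification with scikit-learn), here is a minimal pipeline sketch. The sample texts and labels are invented purely for the example; a real project would train on labelled business data.

# Tiny sketch of an NLP text-classification pipeline: TF-IDF features feeding
# a logistic regression model. Sample data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

texts = [
    "Shipment delayed again, very unhappy with the service",
    "Great experience, the order arrived early",
    "Refund still not processed after two weeks",
    "Support resolved my issue quickly, thank you",
]
labels = ["negative", "positive", "negative", "positive"]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)

print(clf.predict(["the package never arrived and nobody replies"]))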

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies