
1052 ETL Processes Jobs - Page 19

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As the Manager - Business Intelligence at MarketStar, you will play a crucial role in driving growth through innovative sales solutions and customer engagement strategies. With over 30 years of experience, MarketStar partners with leading brands to accelerate sales performance and deliver exceptional customer experiences. In this role, you will manage BI/BA/Data teams and work closely with all stakeholders, including senior management. You will bring end-to-end knowledge of operations management and business analysis, collaborating with cross-functional teams to understand business needs, define data requirements, and ensure data accuracy and integrity.

Your key responsibilities will include collecting, analyzing, and interpreting complex data from multiple sources to identify trends, patterns, and opportunities for business improvement. You will also conduct ad-hoc analysis to support business initiatives; develop and maintain interactive dashboards, reports, and visualizations using BI tools such as Tableau, Power BI, or similar; and support business leaders in making data-driven decisions by presenting insights and recommendations.

To succeed in this role, you should have a minimum of 10-12 years of experience monitoring, managing, manipulating, and drawing insights from data, with at least 5 years of experience leading a team. A degree in Business Analytics, Business Administration, Business Communication, or Marketing is required, along with proven work experience in business intelligence, data analysis, or related roles. You must have a solid understanding of data concepts, data modeling, and database design principles; excellent analytical and problem-solving skills; strong business acumen; and exceptional communication and presentation skills. Experience with data visualization tools such as Tableau, Power BI, or similar is essential, along with familiarity with ETL processes and data integration techniques.

MarketStar offers constant learning and an entrepreneurial growth mindset, an employee-centric benefits plan, fast-track growth opportunities, the chance to work with leading brands, and customized training programs for personal and professional development. As an equal opportunities employer, MarketStar believes a diverse workforce drives success. If you are ready to join the MarketStar team and contribute to driving innovation and success, hit the Apply Now button and start your journey with us!

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As the world's largest producer of medicine and vaccinations for pets and livestock, Zoetis, Inc. invites you to join the Zoetis India Capability Center (ZICC) in Hyderabad, where innovation and excellence converge. At ZICC, a pivotal player in the animal healthcare industry, we are committed to driving transformative advancements and leveraging technology to tackle complex challenges. Our mission is to secure sustainable growth and uphold Zoetis' global competitiveness by harnessing the exceptional talent pool in India. Join a dynamic team at ZICC that collaborates with colleagues globally, embodying the ethos of One Zoetis. Together, we foster seamless integration and collaboration, providing an environment where your contributions can truly make a difference. Embark on our journey to lead innovation and shape the future of animal healthcare.

Zoetis is currently seeking a talented Data Visualization Engineer to join our pharmaceutical R&D team. The ideal candidate will possess a solid background in data science, advanced visualization techniques, and proficiency in tools for creating impactful and interactive visualizations across diverse data sets. This role necessitates close collaboration with scientists, analysts, and stakeholders to transform intricate datasets into compelling visual narratives that steer decision-making in drug discovery, development, and clinical research.

**Position Responsibilities**

**Design and Develop Visualizations** *(40%)*
- Craft interactive and static visualizations for exploratory, descriptive, comparative, and predictive analyses.
- Develop dashboards and reports summarizing key insights from high-throughput screening, clinical trial data, and other R&D datasets.
- Implement visual representations for pathway analysis, pharmacokinetics, omics data, and time-series trends.

**Collaborate with Cross-Functional Teams** *(20%)*
- Work closely with data scientists, bioinformaticians, pharmacologists, and clinical researchers to identify visualization needs.
- Translate complex scientific data into clear, actionable visual insights tailored to both technical and non-technical audiences.

**Maintain and Optimize Visualization Tools** *(20%)*
- Create reusable visualization components and frameworks to support large-scale data analysis.
- Evaluate and recommend tools and platforms for effective data visualization, including emerging technologies.

**Data Processing and Integration** *(10%)*
- Collaborate with data engineers to integrate, clean, and structure datasets for visualization purposes.
- Ensure alignment of visualization solutions with pharmaceutical R&D standards, compliance, and security requirements.

**Innovation and Expertise Development** *(10%)*
- Stay abreast of the latest visualization technology trends relevant to pharmaceutical research.
- Apply advanced techniques such as 3D molecular visualization, network graphs, and predictive modeling visuals.

**Organizational Relationships**
- **Animal Health Research & Development:** Engage across the spectrum of R&D functions, including pharmaceutical, biopharmaceutical, vaccine, device, and genetics R&D groups, to align technology solutions with diverse scientific needs and development pipelines.
- **Zoetis Tech & Digital (ZTD):** Partner closely with ZTD teams, particularly the VMRD-ZTD Engineering group, documentation specialists, and portfolio management groups, to ensure seamless integration of IT solutions and alignment with organizational objectives.

**Resources Managed**
- *Supervision:* No direct reports, but matrix leadership responsibilities within each project team, plus managerial responsibilities for any project resources onboarded externally.

**Education and Experience**
- Bachelor's or Master's degree in Computer Science, Data Science, Bioinformatics, or a related field.
- Experience in the pharmaceutical or biotech sectors is advantageous.

**Technical Skills Requirements**
- **Visualization Tools:** Expertise in Tableau, Power BI, Plotly, ggplot2, Matplotlib, Seaborn, D3.js, or equivalent.
- **Programming:** Proficiency in Python, R, or JavaScript (e.g., for D3.js).
- **Data Handling:** Experience with SQL, Pandas, NumPy, and ETL processes.
- **Omics and Network Tools:** Familiarity with Cytoscape, BioRender, or molecular visualization platforms (e.g., PyMOL, Chimera).
- **Dashboarding:** Building interactive dashboards with Dash, Shiny, or Streamlit.
- **3D Visualization:** Experience with tools for structural and spatial visualization (e.g., PyMOL, PyVista).

**Soft Skills**
- Strong storytelling ability to convey scientific insights visually.
- Effective communication and collaboration with interdisciplinary teams.
- Analytical thinking to align visualizations with research goals.

**Physical Position Requirements**
- Minimal travel requirements, ranging from 0-10%.
- Full-time position.
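To make the dashboarding work above concrete, here is a minimal sketch using Plotly Express, one of the listed tools. The study data, dosing groups, and column names are synthetic placeholders invented for illustration, not Zoetis data.

```python
# Interactive time-series visualization sketch on synthetic trial-style data.
import numpy as np
import pandas as pd
import plotly.express as px

rng = np.random.default_rng(seed=42)
days = np.tile(np.arange(28), 3)
group = np.repeat(["placebo", "low_dose", "high_dose"], 28)
effect = {"placebo": 0.0, "low_dose": 0.8, "high_dose": 1.6}
# Simulate a dose-dependent decline in a biomarker, plus noise.
biomarker = [100 - effect[g] * d + rng.normal(0, 3) for g, d in zip(group, days)]

df = pd.DataFrame({"day": days, "group": group, "biomarker": biomarker})

fig = px.line(
    df, x="day", y="biomarker", color="group",
    title="Biomarker response by dosing group (synthetic data)",
    labels={"day": "Study day", "biomarker": "Biomarker level (a.u.)"},
)
fig.write_html("biomarker_trend.html")  # shareable interactive artifact
```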

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a highly skilled and experienced Senior Tableau Developer joining our growing data visualization team. Your role will involve creating compelling and insightful dashboards and visualizations that transform complex data into actionable insights for business stakeholders.

Your responsibilities will include designing, developing, and maintaining interactive Tableau dashboards and visualizations; connecting to various data sources; performing data modeling and transformation; creating complex calculations, parameters, and calculated fields; developing and maintaining Tableau Server workbooks and data sources; collaborating with business stakeholders to gather requirements; selecting and implementing appropriate statistical methods and visualizations; providing training and support to end users; staying up to date with the latest Tableau features and best practices; contributing to the development of data visualization standards and guidelines; performing performance tuning and optimization of Tableau dashboards; troubleshooting and resolving Tableau-related issues; and mentoring junior Tableau developers.

To qualify for this role, you should have a Bachelor's degree in Computer Science, Information Systems, Statistics, or a related field, along with 5+ years of experience developing Tableau dashboards and visualizations. You should have advanced proficiency in Tableau Desktop and Server, a strong understanding of statistical concepts and methods, experience with data warehousing concepts and relational databases, and excellent analytical and problem-solving skills. Additionally, you should be able to communicate complex technical and statistical concepts to non-technical audiences, have strong collaboration and communication skills, and have experience with other data visualization tools such as Power BI and Qlik Sense. Tableau certification is a plus.
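As a hedged illustration of the Tableau Server maintenance duties above, the sketch below uses the tableauserverclient Python library to audit and publish workbooks. The server URL, site, project name, credentials, and file name are placeholders, not a real deployment.

```python
# Audit and publish Tableau Server workbooks with tableauserverclient (TSC).
import tableauserverclient as TSC

tableau_auth = TSC.TableauAuth("analyst", "password", site_id="analytics")
server = TSC.Server("https://tableau.example.com", use_server_version=True)

with server.auth.sign_in(tableau_auth):
    # List every workbook on the site, e.g. to spot stale extracts.
    for workbook in TSC.Pager(server.workbooks):
        print(workbook.name, workbook.updated_at)

    # Publish (or overwrite) a dashboard workbook into a project.
    all_projects, _ = server.projects.get()
    project = next(p for p in all_projects if p.name == "Sales Analytics")
    item = TSC.WorkbookItem(project_id=project.id)
    server.workbooks.publish(item, "sales_dashboard.twbx",
                             mode=TSC.Server.PublishMode.Overwrite)
```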

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

ECI is the leading global provider of managed services, cybersecurity, and business transformation for mid-market financial services organizations across the globe. From its unmatched range of services, ECI provides stability, security, and improved business performance, freeing clients from technology concerns and enabling them to focus on running their businesses. More than 1,000 customers worldwide, with over $3 trillion of assets under management, put their trust in ECI. At ECI, success is believed to be driven by passion and purpose, with a commitment to empowering employees worldwide.

The Opportunity: An exciting opportunity has arisen at ECI for a Sr. Backend Engineer. We are looking for a skilled backend engineer with expertise in ETL processes and code conversion. The ideal candidate will excel in managing data ingestion, maintaining code repositories, and rewriting Python scripts in C#. This is an onsite role.

What you will do:

ETL Support:
- Develop and maintain ETL processes for ingesting data from third-party FTP/SFTP accounts (see the sketch after this posting).
- Create reports based on business use cases.
- Ensure data integrity and accuracy during the ETL process.

Code Rewriting:
- Rewrite existing Python code (using the Flask library) in C# within the .NET framework.
- Ensure the new codebase is efficient, maintainable, and scalable.

Application Support:
- Provide support for front-end applications.
- Upgrade AngularJS components as needed.
- Troubleshoot and resolve application issues promptly.

Code Maintenance:
- Access and maintain code in GitHub.
- Manage manual deployment processes.
- Conduct code reviews and ensure adherence to best practices.

Who you are:

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Experience: Proven experience in ETL processes, backend development, and code conversion.

Technical Skills:
- Proficiency in Python and C#.
- Experience with Flask and the .NET framework.
- Strong knowledge of front-end technologies, including AngularJS.
- Familiarity with GitHub for code management.
- Understanding of FTP/SFTP protocols for data ingestion.

Analytical Skills: Strong problem-solving abilities and experience with data analysis.

Communication Skills: Excellent verbal and written communication skills to collaborate effectively with cross-functional teams.

Project Management: Ability to manage multiple tasks and meet deadlines in a fast-paced environment.

Bonus (nice to have):
- Experience with cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of DevOps practices and tools for continuous integration and deployment.
- Familiarity with modern JavaScript frameworks (e.g., React, Vue.js).

ECI's culture revolves around connection - with clients, technology, and each other. Besides working with an amazing global team, ECI offers a competitive compensation package and more. If you are ready for an exciting opportunity and believe you would be a great fit, we would like to hear from you! Love Your Job, Share Your Technology Passion, Create Your Future Here!
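A minimal sketch of the SFTP ingestion step described above, using the paramiko library. The host, credentials, and remote/local paths are hypothetical placeholders, not ECI systems.

```python
# Pull daily vendor drop files from an SFTP account into a landing zone.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys in production
client.connect("sftp.vendor.example.com", port=22,
               username="etl_user", password="secret")

sftp = client.open_sftp()
try:
    for name in sftp.listdir("/outbound/daily"):
        if name.endswith(".csv"):
            sftp.get(f"/outbound/daily/{name}", f"/data/landing/{name}")
            print(f"downloaded {name}")
finally:
    sftp.close()
    client.close()
```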

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Bhubaneswar

On-site

The Informatica Master Data Management (MDM) Expert plays a critical role in the organization by ensuring the integrity, consistency, and accuracy of master data across all business units. This position is essential for driving data governance initiatives and for supporting various data integration and management processes. As an MDM Expert, you will leverage your knowledge of Informatica tools to develop and implement MDM strategies that align with organizational goals. You will collaborate with cross-functional teams, providing expertise in data modeling, quality management, and ETL processes. This role requires a deep understanding of master data concepts as well as the ability to address complex data challenges, ensuring reliable data inputs for analytical and operational needs. In addition, you will drive improvements in data processes, lead troubleshooting efforts for MDM-related incidents, and train other team members in best practices. Your contributions will not only enhance data quality but will also support strategic decision-making and business outcomes across the organization.

Key Responsibilities:
- Design and implement Informatica MDM solutions according to business requirements.
- Lead the development of data governance frameworks and best practices.
- Integrate MDM with existing data management and analytics solutions.
- Collaborate with IT and business stakeholders to gather requirements.
- Perform data profiling and analysis to ensure governance standards are met (see the sketch after the skills list).
- Develop and maintain data quality metrics and KPIs.
- Document data management processes, data flows, and MDM-related architecture.
- Provide troubleshooting support for MDM incidents and data discrepancies.
- Facilitate data model design and validation with stakeholders.
- Conduct training sessions for users on MDM tools and procedures.
- Stay current with industry trends and best practices in MDM.
- Coordinate with ETL teams to ensure smooth data integration.
- Manage ongoing MDM projects, ensuring timely delivery and quality.
- Support audit and compliance efforts related to data governance.
- Enhance and optimize existing MDM processes for efficiency.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data management, with a focus on MDM.
- Proven expertise in Informatica MDM and the Informatica toolset.
- Strong understanding of data governance principles and practices.
- Proficiency in SQL and relational database management.
- Experience with data modeling concepts and best practices.
- Knowledge of ETL processes and tools, particularly Informatica PowerCenter.
- Familiarity with XML and data transformation techniques.
- Prior experience with cloud-based data solutions is a plus.
- Excellent analytical and problem-solving skills.
- Strong communication and interpersonal abilities.
- Ability to train and mentor junior team members.
- Hands-on experience with data quality tools and methodologies.
- Strong organizational skills with the ability to manage multiple projects.
- Experience in agile project management methodologies.
- Relevant certifications in Informatica or data governance are desirable.
Skills: agile project management methodologies, data management, data governance, data modeling, cloud-based data solutions, organizational skills, SQL, interpersonal skills, data transformation techniques, MDM, data integration, data quality, Informatica MDM, analytical skills, problem-solving skills, communication skills, data profiling, ETL processes, master data, relational database management, Informatica, data quality metrics
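The data-profiling duty above can be illustrated with a few quick quality checks in pandas; the file name and columns here are hypothetical stand-ins for a real master-data extract.

```python
# Quick data-quality profile of a master-data extract.
import pandas as pd

df = pd.read_csv("customer_master.csv")  # hypothetical extract

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_pct": (df.isna().mean() * 100).round(1),
    "distinct": df.nunique(),
})
print(profile)

# Candidate duplicate master records: same normalized name + postal code.
dupes = df.assign(name_norm=df["customer_name"].str.strip().str.upper()) \
          .duplicated(subset=["name_norm", "postal_code"], keep=False)
print(f"{dupes.sum()} rows flagged as potential duplicates")
```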

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Jaipur, Rajasthan

On-site

As a Senior Salesforce BI & Analytics Architect at Asymbl, you will be responsible for leading the design and implementation of robust analytics solutions utilizing Salesforce Data Cloud and other advanced analytics tools such as Tableau CRM and Tableau. Your role will involve integrating complex data sources, designing customer-centric data models, and building advanced dashboards to provide actionable insights to business users. You will collaborate with both technical and business stakeholders to translate analytics requirements into scalable solutions while ensuring data quality, governance, and security within the Salesforce ecosystem.

Joining Asymbl means being part of a culture driven by relentless curiosity and belief, grounded in trust and integrity. You will have the opportunity to work on challenging projects that shape the future of data-driven transformation, where your expertise will have a real business impact. We offer competitive compensation, professional growth opportunities, and a vibrant company culture that values continuous learning and innovation.

Key Responsibilities:
- Lead the design and architecture of Salesforce analytics solutions, focusing on Salesforce Data Cloud, Tableau CRM, and Tableau.
- Integrate and harmonize data from diverse sources to ensure data quality, consistency, and scalability (a small integration sketch follows this posting).
- Design and implement customer-centric data models using Salesforce Data Cloud for real-time analytics and insights.
- Develop advanced dashboards, reports, and visualizations that deliver actionable insights to business users.
- Collaborate with stakeholders to understand reporting and analytics requirements and translate them into scalable solutions.
- Implement data governance, security, and compliance best practices within the Salesforce ecosystem.
- Optimize the performance of analytics solutions to enable efficient data processing and timely delivery of insights.
- Provide technical leadership and mentorship to junior architects, developers, and analysts.
- Stay updated on emerging trends and innovations in data analytics to ensure solutions leverage the latest technologies and practices.

Qualifications:
- Bachelor's degree in Computer Science, Data Analytics, or a related field; advanced degrees preferred.
- 8+ years of experience in BI/analytics architecture, with at least 3 years specializing in Salesforce Data Cloud and analytics tools.
- Expertise in Salesforce Data Cloud, Tableau CRM, Tableau, and data modeling within the Salesforce ecosystem.
- Strong knowledge of data integration techniques, ETL processes, and APIs within Salesforce.
- Experience working with large-scale, complex datasets and building real-time analytics solutions.
- Understanding of data governance, security, and compliance standards within Salesforce environments.
- Hands-on experience with Salesforce Analytics Query Language (SAQL), Tableau Server/Online, and advanced dashboard design.
- Salesforce certifications such as Tableau CRM & Einstein Discovery Consultant or Data Architect preferred.
- Excellent communication and stakeholder management skills to present complex data concepts clearly and concisely.
- Familiarity with additional enterprise analytics tools or platforms (e.g., Power BI, Snowflake) is a plus.
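As a hedged sketch of the API-based integration groundwork this architecture sits on, the snippet below pulls CRM records through Salesforce's REST API with the simple_salesforce library. Credentials and the SOQL fields are placeholders; Data Cloud or Tableau CRM would consume a harmonized version of such data.

```python
# Pull Salesforce Account data via SOQL and summarize it with pandas.
import pandas as pd
from simple_salesforce import Salesforce

sf = Salesforce(username="user@example.com", password="pw",
                security_token="token")

result = sf.query_all(
    "SELECT Id, Name, AnnualRevenue, Industry "
    "FROM Account WHERE AnnualRevenue != null"
)
accounts = pd.DataFrame(result["records"]).drop(columns="attributes")
print(accounts.groupby("Industry")["AnnualRevenue"]
              .sum().sort_values(ascending=False))
```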

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a highly skilled Technical Data Analyst to join our growing team. This role demands a strong technical background in Oracle PL/SQL and Python, along with proficiency in data analysis tools and techniques. The ideal candidate should possess strategic thinking abilities and a track record of leading and mentoring data analyst teams. You will play a crucial role in delivering data-driven insights and contributing to important business decisions, and you will be responsible for exploring and assessing emerging AI tools and techniques for potential application in data analysis projects.

As a Technical Data Analyst, your key responsibilities will include designing, developing, and managing complex Oracle PL/SQL queries and procedures for data extraction, transformation, and loading (ETL). You will use Python scripting for data analysis, automation, and reporting. Conducting thorough data analysis to identify trends, patterns, and anomalies will be essential to providing actionable insights that enhance business performance. You will collaborate with cross-functional teams to understand business requirements and translate them into technical specifications, and you will help establish and maintain data quality standards to ensure data integrity across systems. Furthermore, you will leverage data analysis and visualization tools such as Tableau, Power BI, and Qlik Sense to create interactive dashboards and reports for business stakeholders. Staying updated with the latest data analysis tools, techniques, and industry best practices, including advancements in AI/ML, is crucial, as is researching and evaluating emerging AI/ML tools and techniques for potential application in data analysis projects.

Preferred Qualifications:
- Hands-on experience as a Technical Data Analyst (not a business analyst), with expertise in Oracle PL/SQL and Python programming to interpret analytical tasks and analyze large datasets.
- Proficiency in Python scripting for data analysis and automation.
- Expertise in data visualization tools such as Tableau, Power BI, or Qlik Sense.
- Awareness and understanding of AI/ML tools and techniques in data analytics, including machine learning algorithms, natural language processing, and predictive modeling.
- Practical experience applying AI/ML techniques in data analysis projects is a plus.
- Strong analytical, problem-solving, communication, and interpersonal skills.
- Experience in the financial services industry is preferred.

This role also suits a Data Scientist with expertise in Python, cloud technologies, and advanced data analytics.
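A minimal sketch of the Oracle-plus-Python workflow this role centers on: run a SQL aggregation against Oracle and screen the result in pandas. The connection details, table, and columns are hypothetical.

```python
# Query Oracle with python-oracledb and analyze the result in pandas.
import oracledb
import pandas as pd

conn = oracledb.connect(user="analyst", password="pw",
                        dsn="dbhost.example.com/ORCLPDB1")

df = pd.read_sql(
    """
    SELECT trade_date, desk, SUM(notional) AS total_notional
    FROM trades
    GROUP BY trade_date, desk
    """,
    conn,
)  # Oracle folds unquoted identifiers to uppercase column names

# Simple anomaly screen: flag days where a desk books 3x its median notional.
median = df.groupby("DESK")["TOTAL_NOTIONAL"].transform("median")
print(df[df["TOTAL_NOTIONAL"] > 3 * median])

conn.close()
```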

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

At H.E. Services' vibrant tech center in Hyderabad, you will have the opportunity to contribute to technology innovation for Holman Automotive, a leading American fleet management and automotive services company. Our goal is to continue investing in people, processes, and facilities so we can expand in a way that supports our customers and develops new tech solutions. Holman has come a long way during its first 100 years in business. The automotive markets Holman serves include fleet management and leasing; vehicle fabrication and upfitting; component manufacturing and productivity solutions; powertrain distribution and logistics services; commercial and personal insurance and risk management; and retail automotive sales as one of the largest privately owned dealership groups in the United States. Join us and be part of a team that's transforming the way Holman operates, creating a more efficient, data-driven, and customer-centric future.

Roles & Responsibilities:
- Design, develop, and maintain data pipelines using Databricks, Spark, and other Azure cloud technologies (a sketch follows this posting).
- Optimize data pipelines for performance, scalability, and reliability, ensuring high speed and availability of the data warehouse.
- Develop and maintain ETL processes using Databricks and Azure Data Factory for real-time or trigger-based data replication.
- Ensure data quality and integrity throughout the data lifecycle, implementing new data validation methods and analysis tools.
- Collaborate with data scientists, analysts, and stakeholders to understand and meet their data needs.
- Troubleshoot and resolve data-related issues, providing root cause analysis and recommendations.
- Manage a centralized data warehouse in Azure SQL to create a single source of truth for organizational data, ensuring compliance with data governance and security policies.
- Document data pipeline specifications, requirements, and enhancements, communicating effectively with the team and management.
- Leverage AI/ML capabilities to create innovative data science products.
- Champion and maintain testing suites, code reviews, and CI/CD processes.

Must Have:
- Strong knowledge of Databricks architecture and tools.
- Proficiency in SQL, Python, and PySpark for querying databases and data processing.
- Experience with Azure Data Lake Storage (ADLS), Blob Storage, and Azure SQL.
- Deep understanding of distributed computing and Spark for data processing.
- Experience with data integration and ETL tools, including Azure Data Factory.

Advanced-level knowledge and practice of:
- Data warehouse and data lake concepts and architectures.
- Optimizing performance of databases and servers.
- Managing infrastructure for storage and compute resources.
- Writing unit tests and scripts.
- Git, GitHub, and CI/CD practices.

Good to Have:
- Experience with big data technologies such as Kafka, Hadoop, and Hive.
- Familiarity with Azure Databricks Medallion Architecture with DLT and Iceberg.
- Experience with semantic layers and reporting tools like Power BI.

Relevant Work Experience:
- 5+ years of experience as a Data Engineer, ETL Developer, or similar role, with a focus on Databricks and Spark.
- Experience working on internal, business-facing teams.
- Familiarity with agile development environments.

Education and Training:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
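A compact sketch of the kind of Databricks pipeline step described above: read raw files from ADLS, apply basic validation, and append to a Delta table. The storage paths, schema, and rules are hypothetical placeholders, not Holman's pipelines.

```python
# PySpark ETL step: land raw JSON, clean it, and append to a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("fleet_telemetry_etl").getOrCreate()

raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/telemetry/")

clean = (
    raw.filter(F.col("vehicle_id").isNotNull())          # data-quality gate
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .withColumn("odometer_km", F.col("odometer_mi") * 1.60934)
       .dropDuplicates(["vehicle_id", "event_ts"])
)

(clean.write.format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("abfss://curated@examplelake.dfs.core.windows.net/telemetry/"))
```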

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

ANP is a leading consulting firm currently seeking professionals in its OneStream practice to join a dynamic team. This role is ideal for an experienced professional aiming to enhance and optimize financial planning, forecasting, and business processes using OneStream. You will be instrumental in OneStream model solutioning and implementations, business planning process optimization, and stakeholder collaboration to provide effective planning solutions. This position offers valuable hands-on experience and professional growth in the enterprise performance management (EPM) and planning ecosystem.

Location: PAN India

Key Responsibilities:
- Implementing OneStream solutions covering requirements and design, development, testing, training, and support.
- Assisting in pre-sales meetings with potential clients, including supporting client demos and proof-of-concept projects.
- Collaborating effectively with internal and client-side resources and communicating efficiently across various audiences.
- Demonstrating proficiency in Anaplan, multi-dimensional modeling, Excel, data integration tools, and ETL processes.
- Approaching challenges creatively and leveraging technology to address business issues.
- Adhering to clients' delivery methodology and project standards to ensure timely completion of project deliverables.
- Thriving in a fast-paced, dynamic environment and effectively navigating ambiguity.
- Embracing the clients' culture of "All Business is personal" and taking full ownership of tasks with an outcome-driven strategy.

Qualifications:
- Educational background: Bachelor's degree in Finance, Accounting, Business, Computer Science, or a related field; or Chartered Accountant / MBA Finance.
- 3+ years of OneStream experience and a total of 5+ years of EPM implementations.
- Certified OneStream Professional.
- Proficiency in OneStream, multi-dimensional modeling, Excel, data integration tools, and ETL processes.
- Solid understanding of financial and accounting processes, including experience with financial close, consolidations, financial reporting, and FP&A.
- Experience in data integration between different systems and sources, with REST API knowledge as an advantage (a small REST integration sketch follows this posting).

Preferred Skills:
- Strong client-facing skills; organized and detail-oriented.
- Excellent communication and interpersonal abilities.
- Proven capability to thrive in a demanding, fast-paced environment and manage high workloads.
- Familiarity with data visualization tools such as Oracle, Tableau, or Power BI.
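A hedged sketch of the REST-based data integration mentioned above: pull balances from a hypothetical source-system API and shape them into a flat file an EPM platform such as OneStream could import. The endpoint, fields, and token are invented for illustration.

```python
# Fetch GL balances over REST and write an EPM-friendly CSV extract.
import csv
import requests

resp = requests.get(
    "https://erp.example.com/api/v1/gl_balances",   # hypothetical endpoint
    params={"period": "2024M06", "entity": "IN01"},
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
resp.raise_for_status()

with open("gl_balances_2024M06.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["entity", "account", "period", "amount"])
    writer.writeheader()
    for row in resp.json()["balances"]:
        writer.writerow({k: row[k] for k in writer.fieldnames})
```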

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

As the AI/ML Solutions Lead, you will play a crucial role in leading the design, development, and implementation of enterprise-scale AI/ML solutions tailored for real-world applications. Your responsibilities include collaborating with cross-functional teams to decipher business requirements and translate them into robust AI/ML strategies. You will develop and optimize generative AI solutions using cutting-edge tools and frameworks such as LangChain, LangSmith, LlamaIndex, MCP, Semantic Kernel, Autogen, and Agents. Your expertise in Python and frameworks such as PyTorch and TensorFlow will be pivotal in constructing sophisticated machine learning models and algorithms (a small PyTorch sketch follows this posting). Additionally, you will analyze data, identify trends, and deliver actionable insights to stakeholders. Mentoring junior data scientists to foster a culture of innovation and continuous learning will also be part of your role, as will engaging with clients to provide project updates, understand their analytical needs, and deliver tailored AI solutions. Your strong analytical skills will be essential in solving complex problems and enhancing existing models and algorithms.

To excel in this role, you need a minimum of 5 years of experience developing and implementing AI/ML-powered services and algorithms, along with a deep understanding of deep learning models for structured and unstructured data. A proven track record of building enterprise-scale AI/ML solutions that demonstrates initiative and strategic thinking is highly valued. Proficiency in generative AI and large language models (LLMs) is a must, together with hands-on experience in Python and a deep understanding of ML frameworks like PyTorch and TensorFlow. Familiarity with cloud ML stacks such as Azure and AWS is advantageous. Strong analytical and problem-solving skills will enable you to extract insights from complex datasets, and excellent communication skills are essential for leading cross-functional teams. You should also have knowledge of ETL processes, data pipelines, model evaluation and validation techniques, time-series analysis and forecasting, and model deployment and monitoring tools such as Docker, Kubernetes, MLflow, and Airflow. Experience building reproducible pipelines for training and serving models is beneficial, as is the ability both to lead teams and to contribute as an individual.

Joining our team offers you a dynamic and thriving culture with a flat-hierarchical, friendly, engineering-oriented, and growth-focused environment. You will have ample opportunities for learning and growth, free health insurance, and access to office facilities including a game zone and an in-office kitchen with an affordable lunch service and free snacks. We offer sponsorship for certifications/events and library services, flexible work timing, leaves for life events, and work-from-home and hybrid options. (ref: hirist.tech)
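A minimal PyTorch training-step sketch of the model-building work this role describes. The architecture and data here are toy placeholders for illustration, not the employer's models.

```python
# Tiny classifier and training loop in PyTorch.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

X = torch.randn(256, 16)           # toy features
y = torch.randint(0, 2, (256,))    # toy binary labels

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.4f}")
```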

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

You should have 7-10 years of overall work experience for this office-based Database Tech Lead role in Hyderabad. Your responsibilities will include:

- Demonstrating over 5 years of professional experience as an MSSQL Developer or Database Developer.
- Showcasing expertise in writing intricate SQL queries, stored procedures, and functions.
- Having a strong command of query optimization, performance tuning, and database indexing.
- Possessing knowledge or experience in Duck Creek Data Insights.
- Demonstrating familiarity with ETL processes (SSIS) and data integration techniques.
- Having experience with Power BI or SSRS.
- Being well-versed in database design principles and normalization techniques.
- Having hands-on experience with one or more relational database management systems (e.g., SQL Server, Oracle, PostgreSQL, MySQL), which will be considered a plus.
- Exhibiting excellent problem-solving skills and attention to detail.
- Demonstrating strong communication and teamwork skills.
- Any leadership experience and the ability to mentor junior developers will be an added advantage.
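A small sketch of the SQL Server development work above: create a covering index and call a stored procedure through pyodbc. The database objects here are hypothetical examples, not Duck Creek artifacts.

```python
# Index creation and parameterized stored-procedure call via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sqlhost;DATABASE=Policy;"
    "UID=dev;PWD=secret;TrustServerCertificate=yes"
)
cur = conn.cursor()

# Covering index for a hot lookup path (illustrative performance-tuning step).
cur.execute("""
    CREATE NONCLUSTERED INDEX IX_Claims_PolicyId_Status
    ON dbo.Claims (PolicyId, Status) INCLUDE (ClaimAmount, OpenedDate)
""")

cur.execute("EXEC dbo.usp_GetOpenClaims @PolicyId = ?", 12345)
for row in cur.fetchall():
    print(row.ClaimId, row.ClaimAmount)

conn.commit()
conn.close()
```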

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Python Engineer, you will play a vital role in developing robust and scalable data pipelines that interact with various internal and external systems. Your responsibilities will include participating in architectural decisions, ensuring platform health through monitoring, setting up infrastructure, and optimizing CI/CD processes. The platform you will work on processes extensive datasets, running advanced models to detect potential market abuse, making your contributions essential to the success of the project.

Collaboration with data scientists and business stakeholders will be a significant part of your role, where you will translate their requirements into efficient technical solutions. Additionally, you will advocate for development and deployment best practices within the team, driving continuous improvement initiatives.

Your main responsibilities will be to develop and maintain Python micro-services comprising multiple data pipelines and algorithms aimed at identifying market abuse (a toy example follows this posting). You will also enhance ETL processes to seamlessly integrate new data sources. Collaborating with quantitative analysts and data scientists to understand and implement requirements for new algorithms, data onboarding, quality checks, and timeliness will be a crucial aspect of your work. Building strong relationships with clients and stakeholders, understanding their needs, and effectively prioritizing work will be key to your success. Working within a multidisciplinary team environment, you will interact closely with fellow developers, quants, data scientists, and production support teams.

Qualifications:
- 5-8 years of relevant experience.
- Proven expertise in designing and implementing Python-based backend services.
- Proficiency in building data pipelines utilizing big data technologies, preferably Spark and Python.
- Experience with front-end frameworks such as Angular, React, or similar, and a general understanding of full-stack development principles, is advantageous.
- Strong database skills, including working with SQL and NoSQL technologies such as SQL Server and MongoDB.
- Experience collaborating with data scientists and developing pipelines that support statistical algorithms.
- Demonstrated experience working in a DevOps environment, including familiarity with CI/CD, monitoring, and log aggregation tools; experience with Docker/Kubernetes is required.
- Ability to automate and streamline the build, test, and deployment of data pipelines.
- Extensive experience with software engineering best practices, including unit testing, automation, design patterns, and peer review.
- Proven track record of providing technical vision and guidance to a data team.
- Ability to thrive in a dynamic environment, managing multiple tasks simultaneously while maintaining high work standards.

Education:
- Bachelor's degree/University degree or equivalent experience.

Please note that this job description offers a summary of the primary duties performed; other job-related tasks may be assigned as needed.
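A toy sketch reflecting the unit-tested pipeline style this posting calls for: a pure transformation that flags suspiciously large orders, plus a pytest-style test. The threshold and fields are invented for illustration, not a real surveillance rule.

```python
# A testable pipeline step that flags outsized orders as candidate alerts.
from dataclasses import dataclass

@dataclass
class Order:
    trader: str
    symbol: str
    qty: int

def flag_outsized_orders(orders: list[Order], limit: int = 10_000) -> list[Order]:
    """Return orders exceeding the per-order quantity limit."""
    return [o for o in orders if o.qty > limit]

def test_flag_outsized_orders():
    orders = [Order("t1", "ABC", 500), Order("t2", "ABC", 50_000)]
    assert [o.trader for o in flag_outsized_orders(orders)] == ["t2"]

if __name__ == "__main__":
    test_flag_outsized_orders()
    print("ok")
```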

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

You are a highly skilled DBA Developer responsible for IT development, analysis, information management (DBA), and QA activities that require expertise in specialized technologies. Your role is crucial in ensuring the performance, availability, and security of databases, as well as supporting application development and system troubleshooting.

Your responsibilities include:
- Modifying and maintaining existing databases and DBMS.
- Designing and implementing logical and physical database models.
- Creating and maintaining database documentation.
- Analyzing business requirements for database solutions.
- Developing backup, recovery, and security procedures.
- Writing code for database access using SQL, PL/SQL, T-SQL, etc.
- Collaborating with technical and business teams.
- Estimating project implementation time and costs.
- Monitoring database performance and troubleshooting issues.
- Supporting continuous improvement initiatives.

You must possess strong communication skills, problem-solving abilities, and the capability to work independently while managing multiple priorities. You should also have in-depth knowledge of the SDLC, SQL development, performance tuning, database platforms such as Oracle and SQL Server, database monitoring tools, ETL processes, and data warehousing, along with experience troubleshooting complex database and application issues. Familiarity with version control systems, CI/CD pipelines, NoSQL databases, scripting languages, cloud database platforms, and database security practices is advantageous.

In summary, as a DBA Developer you will play a vital role in database management, application development, and system optimization, leveraging your expertise in database technologies and problem-solving skills to meet project requirements effectively and contribute to the continuous enhancement of database solutions.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

The Retail Specialized Data Scientist will play a pivotal role in using advanced analytics, machine learning, and statistical modeling techniques to help the retail business make data-driven decisions. You must have a solid understanding of retail industry dynamics, including key performance indicators (KPIs) such as sales trends, customer segmentation, inventory turnover, and promotions. The ability to communicate complex data insights to non-technical stakeholders, including senior management, marketing, and operational teams, is crucial, and you should be meticulous in ensuring data quality, accuracy, and consistency when handling large, complex datasets.

Gathering and cleaning data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns, is a key responsibility. Proficiency in Python for data manipulation, statistical analysis, and machine learning (libraries like Pandas, NumPy, and Scikit-learn) is a must, as is expertise in supervised and unsupervised learning algorithms (a segmentation sketch follows this posting). You will use advanced analytics to optimize pricing strategies based on market demand, competitor pricing, and customer price sensitivity.

It is good to have familiarity with big data processing platforms like Apache Spark and Hadoop, or cloud platforms such as AWS or Google Cloud, for large-scale data processing. Experience with ETL processes and tools like Apache Airflow to automate data workflows is advantageous, as is familiarity with designing scalable and efficient data pipelines and architecture. Experience with tools like Tableau, Power BI, Matplotlib, and Seaborn to create meaningful visualizations that present data insights clearly is a plus.

The ideal candidate has strong analytical and statistical skills; expertise in machine learning and AI; experience with retail-specific datasets and KPIs; proficiency in data visualization and reporting tools; the ability to work with large datasets and complex data structures; strong communication skills for interacting with both technical and non-technical stakeholders; and a solid understanding of the retail business and consumer behavior.

- Programming Languages: Python, R, SQL, Scala
- Data Analysis Tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras
- Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn
- Big Data Technologies: Hadoop, Spark, AWS, Google Cloud
- Databases: SQL, NoSQL (MongoDB, Cassandra)

In this role, you will collaborate with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. You will leverage your deep understanding of the retail industry to design AI solutions that address critical retail business needs. Applying machine learning algorithms to enhance predictive models and using AI-driven techniques for personalization, demand forecasting, and fraud detection are key responsibilities. Staying updated on the latest trends in data science and retail technology is essential, as is collaborating with executives, product managers, and marketing teams to translate insights into business actions.

About Our Company | Accenture
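An illustrative customer-segmentation sketch using the scikit-learn stack the posting names. The RFM-style features, synthetic data, and cluster count are assumptions made for the example.

```python
# K-means segmentation of synthetic recency/frequency/monetary features.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
customers = pd.DataFrame({
    "recency_days": rng.integers(1, 365, 500),
    "frequency": rng.poisson(8, 500),
    "monetary": rng.gamma(2.0, 150.0, 500),
})

X = StandardScaler().fit_transform(customers)
customers["segment"] = KMeans(n_clusters=4, n_init=10,
                              random_state=0).fit_predict(X)

print(customers.groupby("segment").mean().round(1))  # profile each segment
```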

Posted 1 month ago

Apply

12.0 - 16.0 years

0 Lacs

Haryana

On-site

You will have the opportunity to design and architect Generative AI solutions utilizing AWS services such as Bedrock, S3, PG Vector, Kendra, and SageMaker (a Bedrock invocation sketch follows this posting). Collaborating with developers to implement solutions and providing technical guidance throughout the development lifecycle will be a key responsibility. You will lead the resolution of complex technical issues and challenges in AI/ML projects, ensuring adherence to best practices and company standards. Navigating governance processes and obtaining necessary approvals for initiatives will be part of your role, as will making critical architectural and design decisions aligned with organizational policies and industry best practices. You will liaise with onshore technical teams, present solutions, and offer expert analysis on proposed approaches. Conducting technical sessions and knowledge-sharing workshops on AI/ML technologies and AWS services is also expected, as is evaluating and integrating emerging technologies and frameworks, such as LangChain, into solution designs.

You will develop and maintain technical documentation, including architecture diagrams and design specifications, and mentor junior team members to foster a culture of innovation and continuous learning. Collaboration with data scientists and analysts to ensure optimal use of data in AI/ML solutions will be necessary, as will coordinating with clients, data users, and key stakeholders to achieve long-term objectives for data architecture. Staying updated on the latest trends and advancements in AI/ML, cloud, and data technologies is essential to excel in this role.

Key experience for this role includes extensive experience (12-16 years) in software development and architecture, with a focus on AI/ML solutions. A deep understanding of AWS services, especially those related to AI/ML (Bedrock, SageMaker, Kendra, etc.), is required, as is a proven track record of designing and implementing data, analytics, reporting, and/or AI/ML solutions. Strong knowledge of data structures, algorithms, and software design patterns is expected, along with proficiency in at least one programming language commonly used in AI/ML (e.g., Python, Java, Scala) and familiarity with DevOps practices and CI/CD pipelines. An understanding of AI ethics, bias mitigation, and responsible AI principles is important, and basic knowledge of data pipelines and ETL processes, with the ability to design and implement efficient data flows for AI/ML models, will be crucial.

The primary location for this position is Gurgaon, and the schedule is from 12:00 PM to 8:30 PM. The job category is Advanced Analytics, and the posting end date is 30/08/2025.
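As a hedged sketch of the Bedrock building block these solutions are architected around, the snippet below calls a foundation model through the Bedrock runtime Converse API with boto3. The model ID is only an example, and the region and IAM permissions are assumed to be configured.

```python
# Invoke a Bedrock foundation model via the Converse API.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize our returns policy for a customer email."}],
    }],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```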

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be joining as a Data Engineer in Hyderabad (work from office) with expertise in data engineering, ETL, and Snowflake development. Your primary responsibilities will include SQL scripting, performance tuning, Matillion ETL, and working with cloud platforms such as AWS, Azure, or GCP. Strong proficiency in Python or other scripting languages, experience with API integrations, and knowledge of data governance are essential for this role. Snowflake certifications (SnowPro Core/Advanced) are preferred.

As a Data Engineer, you should have a minimum of 5 years of experience in data engineering, ETL, and Snowflake development. Your expertise should encompass SQL scripting, performance tuning, and a solid understanding of data warehousing concepts. Hands-on experience with Matillion ETL for creating and managing ETL jobs is a key requirement, as is a strong understanding of cloud platforms (AWS, Azure, or GCP) and cloud-based data architectures (a Snowflake loading sketch follows this posting).

Proficiency in SQL, Python, or other scripting languages for automation and data transformation is crucial. Experience with API integrations and data ingestion frameworks will be advantageous, and knowledge of data governance, security policies, and access control within Snowflake environments is expected. Excellent communication skills are essential, as you will engage with both business and technical stakeholders, and a self-motivated professional capable of working independently and delivering projects on time is highly valued in this position.
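A brief sketch of Snowflake development work with the official Python connector: stage a local extract, load it, and merge into a target table. Account, warehouse, and object names are placeholders invented for the example.

```python
# Stage, load, and upsert data in Snowflake via snowflake-connector-python.
import snowflake.connector

conn = snowflake.connector.connect(
    user="ETL_USER", password="pw", account="myorg-myaccount",
    warehouse="LOAD_WH", database="ANALYTICS", schema="STAGING",
)
cur = conn.cursor()

# Load a local extract through the table stage, then upsert into the target.
cur.execute("PUT file:///data/orders.csv @%ORDERS_RAW OVERWRITE = TRUE")
cur.execute("COPY INTO ORDERS_RAW FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
cur.execute("""
    MERGE INTO ORDERS t USING ORDERS_RAW s ON t.ORDER_ID = s.ORDER_ID
    WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS
    WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS) VALUES (s.ORDER_ID, s.STATUS)
""")
print(cur.execute("SELECT COUNT(*) FROM ORDERS").fetchone())

conn.close()
```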

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary: The incumbent will work on real-world projects, helping design, develop, and deploy AI models and data-driven solutions. You will play a crucial role in helping businesses make informed decisions by collaborating with stakeholders, designing data models, creating algorithms, and sharing meaningful insights to drive use-case success.

Key Responsibilities:
- AI Solution Development: Design, train, fine-tune, and evaluate generative models (e.g., GPT, LLaMA, Stable Diffusion, DALL-E). Collaborate with cross-functional teams, including product, research, and engineering, to integrate GenAI capabilities into products. Develop a platform approach and optimize models for performance, latency, and cost in production environments.
- Model Integration: Integrate foundational models and retrieval-augmented generation (RAG) techniques (a toy RAG sketch follows this posting).
- Backend Development: Implement backend services and API endpoints to support AI solutions.
- Frontend Integration: Develop or adapt frontend interfaces for user interaction.
- Mentor junior engineers and contribute to best practices in model development and deployment.
- Ensure ethical and responsible AI practices in model design and usage.

Person Profile:

Qualification: Bachelor's or Master's degree in Computer Science, Machine Learning, Mathematics/Statistics, or a related field.

Experience / Must Have:
- 3-5+ years of experience in machine learning, with at least 2 years focused on generative AI, preferably with GenAI use cases in the chemical, pharma, or manufacturing industries.
- A highly agile individual who wants to make an impact in a global setup.
- Proficiency in Python for data handling and AI-related scripting.
- Familiarity with data pipeline development, ETL processes, and data preprocessing techniques.
- Knowledge of testing frameworks and tools to ensure model robustness, as well as experience with automated testing for QA processes.
- Familiarity with retrieval-augmented generation techniques and their applications.
- Knowledge of prompt engineering techniques to optimize generative AI models for specific tasks and enhance output relevance.
- Experience in predictive modeling, including traditional supervised and unsupervised learning.
- Basic familiarity with foundational generative AI models and natural language processing (NLP).
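A toy retrieval-augmented generation (RAG) sketch showing the mechanic named above: embed documents, retrieve the nearest ones for a query, and build a grounded prompt. The hash-based "embedding" is a deterministic stand-in for a real embedding model, and the documents are invented.

```python
# Minimal RAG retrieval step with stub embeddings and cosine similarity.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Stand-in for a sentence-embedding model (deterministic within a run).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

docs = [
    "Reactor batch 12 showed elevated temperature during polymerization.",
    "Standard operating procedure for solvent recovery, revision 4.",
    "Quality deviation report: moisture content above spec in lot 88.",
]
index = np.stack([embed(d) for d in docs])

query = "Which batches had temperature issues?"
scores = index @ embed(query)              # cosine similarity (unit vectors)
top = [docs[i] for i in np.argsort(scores)[::-1][:2]]

prompt = "Answer using only this context:\n" + "\n".join(top) + f"\n\nQ: {query}"
print(prompt)  # this grounded prompt would be sent to the generative model
```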

Posted 1 month ago

Apply

1.0 - 3.0 years

4 - 10 Lacs

Mumbai, Maharashtra, India

On-site

Responsibilities:
- Design and develop AI/ML algorithms for ad decisioning, bid auctioning, ad creative optimization, forecasting, and data classification in digital advertising.
- Identify valuable data sources and automate data engineering and ETL processes from ad-serving events.
- Develop strategies for data preparation, covering structured and unstructured data.
- Perform feature engineering and analyze large amounts of data from ad-serving event logs to identify trends, patterns, and anomalies in bidding and ad decisioning.
- Ensure data privacy and security while conducting data analysis and modeling.
- Conduct experiments to validate hypotheses and test new algorithms.
- Build Minimum Viable Products (MVPs) using a proof-of-concept approach, focusing on scalability, speed, and accuracy.
- Build and maintain predictive models, dashboards, and automated reports to monitor performance and support decision-making (a toy CTR model follows this posting).
- Collaborate with cross-functional teams, such as product, engineering, and sales, to ensure seamless integration of algorithms with other systems and processes.
- Stay up to date with the latest industry developments, advancements, and technologies in prediction and classification.
- Understand and implement diverse strategic ML/AI algorithms following industry best practices, creating high-impact, high-value training models.
- Communicate results and recommendations to stakeholders, including non-technical audiences, using data visualization techniques.

Required Skills:
- 3+ years of proven experience as a Data Scientist.
- Experience applying data science techniques in digital advertising is a must.
- Extensive knowledge and practical experience in machine learning, neural networks, and statistics, with a focus on data modeling, engineering, and mining.
- Understanding of machine learning and operations research, along with experience using machine learning libraries (e.g., TensorFlow, PyTorch, scikit-learn, XGBoost) and real-time data processing.
- Awareness of the latest advancements announced via industry research communities and research papers.
- Proficiency in programming languages such as R, Julia, Python, SQL, or Apache Spark.
- Excellent analytical and problem-solving aptitude, critical thinking skills, strong mathematical abilities (statistics, algebra), and effective communication and presentation skills.
- Experience using business intelligence tools and distributed cloud data frameworks.
- Curiosity and a strong desire for continuous learning.
- Bachelor's or Master's degree (or equivalent) in Computer Science, Artificial Intelligence, Machine Learning, Statistics, Operations Research, or a related field.
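A compact sketch of the ad-decisioning modeling described above: a click-through-rate classifier on synthetic auction features using scikit-learn, one of the libraries the posting names. The features, data, and coefficients are invented for illustration.

```python
# Toy CTR classifier on synthetic bid/placement features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.uniform(0, 10, n),     # bid price
    rng.integers(0, 24, n),    # hour of day
    rng.integers(0, 2, n),     # premium placement flag
])
# Clicks more likely for higher bids and premium placements (toy ground truth).
p = 1 / (1 + np.exp(-(0.3 * X[:, 0] + 1.2 * X[:, 2] - 3)))
y = rng.binomial(1, p)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))
```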

Posted 1 month ago

Apply

5.0 - 8.0 years

8 - 12 Lacs

Nasik, Maharashtra, India

On-site

Responsibilities:
- Design, develop, and maintain robust and scalable data pipelines to support data integration, transformation, and analytics.
- Build and maintain data architecture for both structured and unstructured data.
- Develop and optimize ETL processes using tools such as Apache Spark, Kafka, or Airflow (see the DAG sketch after this posting).
- Work with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions.
- Ensure high data quality and integrity across the pipeline.
- Implement data security and governance standards.
- Monitor and troubleshoot performance issues in data pipelines.

Required Skills:
- Proficiency in SQL and experience with relational and NoSQL databases.
- Hands-on experience with data pipeline tools.
- Good understanding of data warehousing concepts and tools.
- Strong communication and collaboration skills.
- Analytical mindset and attention to detail.
- Familiarity with SSRS (SQL Server Reporting Services) for creating reports.
- Expertise in SQL query optimization, stored procedures, and database tuning.
- Hands-on experience with data migration and replication techniques, including Change Data Capture (CDC) and SQL Server Integration Services (SSIS).
- Strong understanding of dimensional modeling and data warehousing concepts, including star and snowflake schemas.
- Familiarity with Agile methodologies and DevOps practices for continuous integration and deployment (CI/CD) pipelines.
- Familiarity with data cubes.

Soft Skills:
- Good problem-solving and decision-making skills.
- Dynamic self-starter: independent (but confident enough to ask clarifying questions), adaptable, and able to handle pressure.
- Good communication skills and a team player.
- Attention to detail.
- Curious, with a continuous-learning mindset.
- Time management skills.
- Accountability and ownership.
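A minimal Airflow (2.x) DAG sketch of the pipeline orchestration named above: extract, transform, and load run as ordered daily tasks. The task bodies and names are placeholders for the example.

```python
# Daily extract -> transform -> load DAG in Apache Airflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): print("pull from source systems")
def transform(): print("clean and conform to the star schema")
def load(): print("load dimensions, then facts")

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```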

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: ETL Test Automation - Senior Test Engineer

We are looking for a highly skilled and experienced ETL test automation specialist to join as a Senior Test Engineer.

Technical Expertise:
- 3 to 5 years of ETL/DW test automation experience.
- Strong knowledge of ETL processes, data warehouse concepts, and database testing.
- Experience in big data testing, focusing on both automated and manual testing for data validation.
- Proficiency in writing complex SQL queries (preferably BigQuery) and understanding of database concepts.
- Understanding of GCP tools: BigQuery, Dataflow, Dataplex, Cloud Storage.
- Ability to transform simple and complex business logic into SQL queries.
- Hands-on experience in Python for test automation.
- Familiarity with test automation frameworks.
- Excellent communication and client-facing skills.
- Experience with version control systems such as GitLab and test management tools such as JIRA and Confluence.
- Demonstrated experience working in an Agile/SCRUM environment.
- GCP certifications or training in cloud data engineering.
- Familiarity with data governance, metadata management, and data forms.
- Exposure to real-time/streaming data systems, including monitoring, validation, and scaling strategies.

Key Responsibilities:
- Design, execute, and maintain QA strategies for ETL/data warehouse workflows on Google Cloud Platform (GCP).
- Validate large-scale data migrations to ensure accuracy and completeness between source and target systems (a validation sketch follows this posting).
- Develop and maintain automation scripts using Python or another relevant automation tool.
- Identify, investigate, and resolve data anomalies and quality issues.
- Write and optimize complex SQL queries (preferably for BigQuery) to validate transformations, mappings, and business rules.
- Work closely with data engineers, architects, and analysts to understand data requirements and support data quality initiatives.
- Collaborate in an Agile/SCRUM development environment.
- Perform manual and automated data validations for high-volume pipelines.
- Track and manage defects using JIRA and maintain transparency via Confluence.
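A short sketch of the BigQuery-based migration validation described above: compare source and target row counts and a key-column checksum in one query. The project, datasets, and table names are placeholders.

```python
# Source-vs-target validation after a data migration, via google-cloud-bigquery.
from google.cloud import bigquery

client = bigquery.Client(project="dw-qa-project")

CHECK = """
    SELECT
      (SELECT COUNT(*) FROM `src_ds.orders`) AS src_rows,
      (SELECT COUNT(*) FROM `tgt_ds.orders`) AS tgt_rows,
      (SELECT SUM(FARM_FINGERPRINT(CAST(order_id AS STRING)))
         FROM `src_ds.orders`) AS src_sum,
      (SELECT SUM(FARM_FINGERPRINT(CAST(order_id AS STRING)))
         FROM `tgt_ds.orders`) AS tgt_sum
"""
row = next(iter(client.query(CHECK).result()))

assert row.src_rows == row.tgt_rows, f"row count mismatch: {row.src_rows} vs {row.tgt_rows}"
assert row.src_sum == row.tgt_sum, "order_id checksum mismatch between source and target"
print("migration validation passed")
```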

Posted 1 month ago

Apply

12.0 - 14.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: NA
Minimum experience required: 12 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that solutions are effectively implemented across multiple teams, while maintaining a focus on quality and efficiency in application delivery.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Expected to provide solutions to problems that apply across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with strategic goals.

Professional & Technical Skills:
- Must-have skills: Proficiency in Snowflake Data Warehouse.
- Strong understanding of data modeling and ETL processes.
- Experience with cloud-based data solutions and architecture.
- Familiarity with SQL and data querying techniques.
- Ability to troubleshoot and optimize data workflows.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must have skills: Snowflake Data Warehouse
Good to have skills: NA
Minimum 5 Year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application requirements are met, overseeing the development process, and providing guidance to team members. You will also engage in problem-solving activities, ensuring that the applications are aligned with business objectives and user needs, while maintaining a focus on quality and efficiency throughout the project lifecycle.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of milestones.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Snowflake Data Warehouse.
- Strong understanding of data modeling and ETL processes.
- Experience with cloud-based data solutions and architecture.
- Familiarity with SQL and data querying techniques.
- Ability to troubleshoot and optimize data workflows.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.
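The "troubleshoot and optimize data workflows" bullet often starts with finding slow or expensive queries. As a hedged illustration (not part of the posting), Snowflake exposes query metadata in its real SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view, which can be inspected from Python like so; the connection parameters are hypothetical.

```python
# A minimal sketch: list the slowest queries of the past week from
# Snowflake's ACCOUNT_USAGE.QUERY_HISTORY view (a real system view;
# the connection parameters below are hypothetical placeholders).
import snowflake.connector

SLOW_QUERIES_SQL = """
SELECT query_id,
       total_elapsed_time / 1000 AS elapsed_seconds,  -- stored in ms
       warehouse_name,
       query_text
FROM snowflake.account_usage.query_history
WHERE start_time >= DATEADD(day, -7, CURRENT_TIMESTAMP())
ORDER BY total_elapsed_time DESC
LIMIT 10
"""

conn = snowflake.connector.connect(
    account="example_account", user="ops_user", password="...",  # hypothetical
)
with conn.cursor() as cur:
    for query_id, elapsed, warehouse, text in cur.execute(SLOW_QUERIES_SQL):
        print(f"{query_id} {elapsed:>8.1f}s {warehouse}: {text[:80]}")
conn.close()
```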

Posted 1 month ago

Apply

6.0 - 15.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

We are seeking experienced BODS Consultants with a strong background in Data Migration to join our global consulting team. This is a remote opportunity to work on high-impact projects across industries such as Financial Services, Pharmaceuticals & Life Sciences, Manufacturing, and Utilities.

Key Responsibilities
- Deliver high-quality technology and business solutions across diverse industry domains.
- Independently develop and implement data migration deliverables.
- Translate functional specifications into technical solutions using SAP BODS.
- Conduct data profiling, cleansing, transformation, and validation activities.
- Perform data analysis, documentation, and reporting as per project requirements.
- Collaborate with onsite/offshore teams on large-scale ERP or transformation projects.
- Customize and configure data solutions based on client needs.
- Mentor and support junior consultants on project execution and best practices.
- Ensure timely and accurate delivery of all assigned work products.
- Maintain adherence to organizational policies, compliance, and security standards.
- Engage in continuous professional development through training and client interactions.

Required Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 6 to 15 years of experience in SAP BODS with a focus on data migration projects.
- Hands-on experience in full-cycle ERP implementations (SAP preferred).
- Experience in data profiling, quality analysis, and ETL processes.
- Proficiency in identifying and resolving data-related issues in complex environments.
- Strong analytical, problem-solving, and communication skills.
- Ability to work independently and as part of a cross-functional global team.

Application Process
If you meet the above requirements and are available to join within 30 days, we encourage you to apply today. This is a unique opportunity to grow your career with a company that values innovation, integrity, and excellence in data solutions.
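SAP BODS itself is configured through its Designer GUI rather than written as code, but the data-profiling step named in the responsibilities can be sketched in a few hedged lines of Python with pandas. The CSV path and column names below are hypothetical, not from any real project.

```python
# A minimal data-profiling sketch for a migration source extract.
# The CSV path and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("legacy_customers_extract.csv")  # hypothetical extract

profile = pd.DataFrame({
    "dtype": df.dtypes.astype(str),
    "null_count": df.isna().sum(),
    "null_pct": (df.isna().mean() * 100).round(2),
    "distinct_values": df.nunique(),
})
print(profile)

# Simple rule-based validation: flag rows violating a required-field rule.
missing_key = df[df["customer_id"].isna()]
print(f"{len(missing_key)} rows missing customer_id")
```

A profile like this is typically produced per source table before mapping rules are written, so cleansing effort can be scoped up front.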

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

noida, uttar pradesh

On-site

As an Azure Data Factory Developer, you will be responsible for developing, implementing, and optimizing data pipelines in cloud environments. With 5-10 years of experience, you will have a strong grasp of Azure Data Factory (ADF) and proficiency in SQL Server development. Your expertise in troubleshooting and optimizing ETL processes will be crucial in ensuring smooth data flow.

The role requires working from the office in Noida 3 days a week, with a shift timing of 2:00 PM to 10:30 PM IST. The interview process is structured as a virtual round followed by a face-to-face interview.

Key skills essential for this role:
- Hands-on experience with Azure Data Factory.
- Proficiency in SQL Server development and optimization.
- Ability to build and manage data pipelines effectively.
- Familiarity with Azure services and cloud data architecture (a strong advantage).

If you are a proactive problem solver with a passion for data management and optimization, we invite you to share your resume at chakravarthy@moxieit.com and be part of our dynamic team.
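To give a concrete sense of working with ADF programmatically, here is a minimal, hedged sketch that triggers and polls a pipeline run using the azure-identity and azure-mgmt-datafactory Python packages. The subscription, resource group, factory, and pipeline names are hypothetical placeholders.

```python
# A minimal sketch of triggering and monitoring an ADF pipeline run.
# All Azure resource names below are hypothetical placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # hypothetical
RESOURCE_GROUP = "rg-data-platform"                        # hypothetical
FACTORY_NAME = "adf-example"                               # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline with a runtime parameter.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, "CopySalesPipeline",     # hypothetical name
    parameters={"loadDate": "2024-01-01"},
)

# Poll the run until it reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(15)
print(f"Pipeline finished with status: {status}")
```

Day to day, most pipeline authoring happens in the ADF Studio UI; this kind of scripting typically appears in CI/CD and scheduled-orchestration scenarios.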

Posted 1 month ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As the Data Warehousing and Reporting Engineering Lead, you will play a crucial role in driving our data and reporting strategy to support key business areas such as Finance, Compliance, Anti-Financial Crime, Risk, and Legal. Your primary responsibilities will include defining and implementing the data warehousing and reporting strategy, leading the design and development of data warehousing solutions on platforms such as Snowflake, Teradata, Hadoop, or equivalent technologies, and collaborating with cross-functional teams to curate data marts tailored to their specific needs.

You will develop and optimize data models to ensure efficient data storage and retrieval, enabling high-quality business intelligence and reporting. Ensuring data accuracy, consistency, and security across all data repositories and reporting systems will be a key aspect of your role, as will leading and mentoring team members to foster a culture of collaboration and innovation.

To be successful in this role, you should have a background in Computer Science, Data Science, Engineering, or a related field, along with extensive experience in data warehousing platforms such as Snowflake, Teradata, Hadoop, or similar technologies. Proven expertise in data modeling, data communication, and curating data marts to support business functions is essential. You should also possess a solid understanding of relational and non-relational database systems, experience with data integration tools and ETL processes, and strong problem-solving skills to design scalable and efficient solutions. Strong leadership skills, with experience managing and mentoring high-performing technical teams, and effective interpersonal skills for collaborating with both technical and non-technical partners round out the profile.

Preferred skills include experience supporting Finance, Compliance, Anti-Financial Crime, Risk, and Legal data strategies; experience with large-scale enterprise data ecosystems; familiarity with cloud-based data warehousing environments and tools; and knowledge of data governance, compliance, and regulatory requirements.

Join us at LSEG, a leading global financial markets infrastructure and data provider, whose purpose is driving financial stability, empowering economies, and enabling sustainable growth. You will be part of a diverse and dynamic organization that values your individuality, encourages new ideas, and is committed to re-engineering the financial ecosystem to support sustainable economic growth. LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days, and wellbeing initiatives. If you are ready to make a difference and drive innovation in data warehousing and reporting, apply today.
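Curating a data mart, as described above, typically means shaping data into a star schema that reporting tools can query efficiently. Purely as an illustrative sketch (SQLite is used only so the example runs anywhere; none of the tables come from the posting), the pattern looks like this:

```python
# A self-contained star-schema sketch: one fact table, one dimension,
# and a typical reporting aggregate. A real mart would live in
# Snowflake, Teradata, or a similar platform.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_department (
    department_id INTEGER PRIMARY KEY,
    department_name TEXT NOT NULL
);
CREATE TABLE fact_expenses (
    expense_id INTEGER PRIMARY KEY,
    department_id INTEGER REFERENCES dim_department(department_id),
    amount REAL NOT NULL,
    booked_on TEXT NOT NULL
);
INSERT INTO dim_department VALUES (1, 'Compliance'), (2, 'Legal');
INSERT INTO fact_expenses VALUES
    (1, 1, 1200.0, '2024-01-15'),
    (2, 1,  300.0, '2024-02-03'),
    (3, 2,  950.0, '2024-01-20');
""")

# A typical data-mart reporting query: spend by department.
for name, total in conn.execute("""
    SELECT d.department_name, SUM(f.amount)
    FROM fact_expenses f JOIN dim_department d USING (department_id)
    GROUP BY d.department_name
"""):
    print(f"{name}: {total:.2f}")
```

Keeping facts narrow and pushing descriptive attributes into dimensions is what lets BI tools aggregate large volumes quickly while staying easy to join.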

Posted 1 month ago

Apply