
4894 Data Processing Jobs - Page 25

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

18 - 25 Lacs

Bengaluru

Work from Office

SnapLogic Lead: 7+ years
- 5+ years of strong SnapLogic development experience.
- Ability to engage with multiple technical teams with a supportive attitude to achieve a shared goal.
- Ability to communicate effectively with technical and non-technical individuals.
- Should understand the overall system landscape, including upstream and downstream systems.
- Should be able to understand ETL technical specifications and develop code efficiently.
- Ability to leverage SnapLogic features/functions to achieve the best results.
- Should have good Redshift or Oracle (or comparable database) experience with BI/DW deployments.
- Exposure to other ETL tools, SAP and Oracle ERP connectivity with SnapLogic, and connectivity with APIs.
- A minimum of 1 year of development experience in Python scripting and Unix scripting is an added advantage.
- Familiarity with AWS technologies.
- Experience with Big Data technologies: data processing and data transformation flows.
- Ability to multi-task, organize, prioritize, research, and resolve issues in a timely and effective manner.
- Strong analytical skills; enjoys solving complex technical problems.

Name: Babitha N
Email ID: babitha18vaakruthi@gmail.com

Posted 1 week ago

Apply

1.0 - 3.0 years

0 - 1 Lacs

Noida

Work from Office

Job Overview
We are seeking a detail-oriented and efficient Data Entry & Reporting Executive to manage, maintain, and analyze company data with accuracy and integrity. The ideal candidate should be proficient in Excel, capable of handling large datasets, and able to provide timely administrative and reporting support to different departments.

Key Responsibilities
- Enter and update data accurately into the system.
- Maintain and organize databases and spreadsheets.
- Generate and analyze reports using Excel.
- Ensure data integrity and make corrections as necessary.
- Handle confidential and sensitive data appropriately.
- Perform regular audits to verify data accuracy.
- Provide administrative support such as creating presentations and documents.
- Communicate with other departments for data retrieval or clarification.
- Identify and resolve discrepancies or errors in data promptly.
- Manage large datasets efficiently and ensure deadlines are met.
- Perform repetitive tasks with high accuracy and attention to detail.

Required Skills, Qualifications & Experience
- Bachelor's degree in Business, Information Systems, or a related field (preferred).
- Proven experience in data entry, data management, or reporting.
- Advanced proficiency in Microsoft Excel.
- Strong computer skills (MS Office Suite).
- High typing speed with excellent accuracy and attention to detail.
- Strong time management and multitasking abilities.
- Ability to handle sensitive and confidential data responsibly.
- Excellent communication and problem-solving skills.

Posted 1 week ago

Apply

0.0 - 2.0 years

2 - 4 Lacs

Bhiwandi

Work from Office

Job Description: Data Entry Operator

Position Title: Data Entry Operator
Department: Administration / Operations

Role Overview
We are seeking a detail-oriented and efficient Data Entry Operator to manage accurate input, updating, and maintenance of company records in digital and physical formats. This role is crucial for ensuring smooth workflow, error-free data management, and timely reporting across departments.

Key Responsibilities
- Enter, update, and verify data in company systems, databases, and spreadsheets.
- Maintain and organize digital records, files, and documents.
- Ensure accuracy and completeness of all data entries.
- Cross-check and validate information with relevant departments.
- Generate basic reports, summaries, or lists as required by management.
- Scan, upload, and file physical documents into digital systems.
- Maintain confidentiality and security of company and client information.
- Assist in administrative tasks such as preparing letters, invoices, and forms when required.
- Meet daily/weekly targets for data processing and reporting.
- Escalate discrepancies or issues in data to the supervisor promptly.

Requirements
- Minimum qualification: 12th Pass / Graduate in any discipline.
- Proven experience in data entry, clerical work, or administration preferred.
- Proficiency in MS Office (Excel, Word) and basic computer applications.
- Good typing speed (minimum 30-40 words per minute) with high accuracy.
- Strong attention to detail and ability to spot errors quickly.
- Basic communication skills in English/Hindi.

Desired Traits
- Reliable and disciplined with a strong work ethic.
- Ability to handle repetitive tasks with accuracy.
- Organized and methodical approach to work.
- Willingness to assist in other admin tasks as needed.

Work Conditions
- Work timings: [Insert shift details, e.g., 9:00 AM - 6:00 PM]
- Office-based role with computer/laptop usage throughout the day.
- Training on company-specific software will be provided.

Compensation & Benefits
- Salary: As per industry standards.
- Provident Fund (PF), ESIC, and statutory benefits.
- Tea/refreshments and subsidized meals (if applicable).
- Career growth opportunities in administration and operations.

Posted 1 week ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

Bengaluru

Work from Office

You are a strategic thinker passionate about driving solutions in data science and analytics. You have found the right team.

As an Applied AI/ML Analyst in our Data Science COE within the Firmwide Planning & Analysis (FP&A) organization, you will spend each day defining, refining, and delivering enhanced forecasting and analytical capabilities. You will focus on AI/ML-based solutions to boost productivity across the FP&A organization. In this role, you will analyze business problems, experiment with statistical and machine learning models, and develop solutions using state-of-the-art machine learning and deep learning algorithms. You will be part of an innovative team, collaborating closely with cross-functional teams to create data-driven solutions that enhance our products, services, and business operations. We are seeking someone with a strong background in data analysis, machine learning, and statistical modeling, who is passionate about uncovering insights and solving complex problems.

Job Responsibilities
- Design, develop, and successfully implement AI/ML solutions, based on state-of-the-art ML models (e.g., ChatGPT), to support use cases such as document Q&A, information retrieval/summarization, and search.
- Collaborate with stakeholders to understand business requirements and translate them into data science projects.
- Assist leadership in defining problem statements and the execution roadmap.
- Collaborate with partner teams such as Finance, Technology, Product Management, Legal, Compliance, Strategy, and Business Management to deploy solutions into production.
- Communicate findings and insights through clear and concise visualizations and reports.
- Stay up to date with the latest advancements in data science, machine learning, and related technologies.
- Collaborate closely with the data science/AI/ML community within the firm; actively identify and introduce off-the-shelf solutions to the FP&A organization.
- Mentor and provide guidance to junior data scientists and analysts.
- Design and conduct trainings on AI/ML topics to raise awareness and adoption of AI/ML solutions across the Finance organization.
- Develop and maintain data pipelines and workflows for efficient data processing.
- Clean, preprocess, and validate data to ensure data quality and integrity.
- Conduct exploratory data analysis to identify trends, patterns, and insights in large datasets.
- Perform statistical analysis and hypothesis testing to support data-driven decision-making.

Required Qualifications, Capabilities, and Skills
- Engineering degree in Computer Science, Data Science, Statistics, Mathematics, or Machine Learning.
- Strong background in mathematics and machine learning.
- 1 year of experience in data science, machine learning, or a related role.
- Proficiency in programming languages such as Python or R.
- Experience with LLMs and prompt engineering techniques.
- Strong knowledge of statistical analysis and modeling techniques.
- Familiarity with data analysis and visualization.
- Experience with SQL and relational databases.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Gurugram

Work from Office

About the Role
We are looking for a skilled and detail-oriented Python Developer to join our team. In this role, you will focus on writing efficient scripts, analyzing complex data, and delivering actionable insights through clear and accurate visualizations. The ideal candidate will be a fast learner with a strong foundation in software development principles and a passion for solving challenging problems.

Key Responsibilities
- Design and develop efficient Python scripts for data analysis and automation.
- Build accurate and insightful data visualizations to support decision-making.
- Implement and maintain alerting tools to monitor data workflows and systems.
- Collaborate with cross-functional teams to gather requirements and deliver solutions.
- Apply strong knowledge of data structures, algorithms, and OOP concepts to write performant, maintainable code.
- Continuously optimize and improve data processing pipelines for scalability and performance.

Key Requirements
- Experience: 2-5 years as a Software Engineer or Python Developer.
- Strong proficiency in Python, with the ability to quickly learn new frameworks and libraries.
- Hands-on experience with SQL for querying and data manipulation.
- Experience with data visualization/plotting tools (e.g., Matplotlib, Seaborn, Plotly).
- Familiarity with alerting and monitoring tools.
- Solid understanding of data structures, algorithms, and OOP concepts.
- Excellent problem-solving skills, analytical thinking, and attention to detail.

Benefits
Our open and casual work culture gives you the space to innovate and deliver. Our cubicle-free offices, disdain for bureaucracy, and insistence on hiring the very best create a melting pot for great ideas and technology innovations. Everyone on the team is approachable; there is nothing better than working with friends! Our perks have you covered:
- Competitive compensation
- 6 weeks of paid vacation
- Monthly after-work parties
- Catered breakfast and lunch
- Fully stocked kitchen
- International team outings

Posted 1 week ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Bengaluru

Work from Office

GSS Sourcing & Procurement Sr. Team Member, Express Pay

JPMorgan Chase seeks to do business with suppliers who provide the best price, quality, and capability to meet our business needs. With over 600 team members in 11 countries, our Global Supplier Services (GSS) organization works proactively with line-of-business colleagues to identify capable suppliers, lead the competitive sourcing process, and negotiate and contract with the chosen suppliers. Leveraging firm-wide buying power and controlling risk are consistent overarching goals. GSS then ensures that the contracted goods and services can be obtained by our employees in an efficient, cost-effective manner globally.

Job Summary
As a Sr. Team Member on the GSS Sourcing & Procurement team, you will work in a fast-paced, high-volume environment managing and reviewing Express Pay purchase order and payment requests. You will focus on gathering, entering, and reviewing purchase request information as a maker and/or processing payments to the supplier, and on participating in process improvements. This includes identifying process improvements, developing enhancement request requirements, writing test scripts, and executing tests. This role offers the opportunity to enhance your skills in procurement and supplier management while delivering first-class service to our customers. You may also be asked to take on stretch assignments and projects related to the Express Pay process.

Job Responsibilities
- Lead QA of Express Pay requests, along with reviewing requests in Ariba.
- Work with suppliers and LOBs to collect the information needed for Express Pay requests.
- Lead Express Pay payment review and process payments to the supplier.
- Manage SAP and OFAC review.
- Create requisitions in Ariba per the intake form.
- Deliver first-class service to our customers and improve the end-user experience.
- Manage the 24-48 hour SLA timeline to process requests.
- Ensure work is accurate and passes the monthly internal audit reviews.

Required Qualifications, Capabilities, and Skills
- 2+ years of experience in data processing, procurement, accounts payable, customer service, or an operations environment.
- Personal computer skills with proficiency in Word, Excel, SharePoint, and web-based systems.
- Knowledge of SAP and Oracle supplier details.
- Willingness to work flexible shifts (may include US hours).

Preferred Qualifications, Capabilities, and Skills
- 1+ years of SAP, Oracle, Ariba (ePurchase), or Concur experience.
- Knowledge of Procure-to-Pay / Source-to-Pay operations.

Posted 1 week ago

Apply

3.0 - 9.0 years

5 - 11 Lacs

Gurugram

Work from Office

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

You Lead the Way. We've Got Your Back.
At American Express, you'll be recognized for your contributions, leadership, and impact; every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and our powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, and in an environment where everyone is seen, heard, and feels like they belong. Join Team Amex's Global Loyalty and Benefits organization and let's lead the way together.

About the Job
The position of Engineer will be part of a dynamic and growing team within the GRCT organization at American Express.

Key Responsibilities
- Evaluates data requirements and stories, documents them for seamless integration into existing architectures, and maintains data models to support business needs.
- Builds and enhances data pipelines and database designs to meet performance, scalability, and security requirements.
- Collaborates in design reviews and testing, and provides production environment support with guidance from peers and leaders.
- Operates data assets according to consistent standards, guidelines, and policies.
- Completes work reviews and fosters a collaborative learning environment.
- Communicates and collaborates with business and product teams to facilitate changes and implementation.
- Completes Big Data requirements by implementing basic partitioning and indexing solutions.
- Collaborates and co-creates effectively with product and business teams to align technology initiatives with business objectives.

Required Skills
- GCP expertise: hands-on experience with GCP services such as BigQuery, Dataflow, Cloud Storage, Pub/Sub, Cloud Composer, and Cloud Functions.
- Programming languages: strong proficiency in Python and SQL for data processing and manipulation.
- ETL/ELT: experience with ETL/ELT processes and data warehousing concepts.
- Data modeling: understanding of data modeling principles and techniques.
- Cloud technologies: knowledge of cloud computing concepts and standard processes.
- Problem-solving: excellent analytical and problem-solving skills.

Posted 1 week ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Job Description
Unlock the power of data with our expert Databricks Developer, transforming complex datasets into actionable insights with seamless efficiency. Elevate your business intelligence and drive innovation through cutting-edge data engineering solutions.

As a Databricks Developer at JPMorgan Chase within our Corporate Technology division, you will play a pivotal role in our cloud transformation journey. You will design, develop, and implement cloud-based solutions to replace our existing SQL/on-prem infrastructure, enhancing our data processing capabilities and empowering end users to swiftly access data and derive insights.

Job Responsibilities
- Collaborate with cross-functional teams to understand business requirements and design cloud-based solutions.
- Lead the migration of existing SQL databases, the ETL layer, and applications to Databricks and other cloud platforms.
- Drive the development and deployment of processes using the DBX framework.
- Develop, test, and deploy scalable and efficient cloud applications.
- Optimize data processing workflows while ensuring data integrity and security.
- Provide technical guidance and support to team members and stakeholders.
- Stay abreast of the latest cloud technologies and best practices.

Required Qualifications, Capabilities, and Skills
- Formal training or certification in software engineering concepts and 3+ years of applied experience.
- Proficiency, with hands-on experience, in programming languages such as Python and Big Data technologies like Spark and Kafka.
- Strong expertise in Databricks and cloud platforms such as AWS.
- Proven experience in SQL database management and development.
- Experience with data integration, ETL processes, and data warehousing.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
- Ability to work independently and efficiently.

Preferred Qualifications, Capabilities, and Skills
- Certification in cloud technologies (e.g., AWS Certified Solutions Architect).
- Knowledge of ETL technologies such as Ab Initio, Informatica, or Prophecy.
- Knowledge of DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes).
- Databricks Certified Data Engineer or equivalent.
- Strong financial and business analytical skills.

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office

We are looking for a skilled Data Engineer with strong expertise in SQL development, SSIS, and Microsoft Fabric. The ideal candidate should have hands-on experience in T-SQL development and in working across the Bronze and Gold layers of Microsoft Fabric. Experience with Power BI is a plus.

Key Responsibilities
1. SQL & SSIS Development
- Design, develop, and optimize SQL queries, stored procedures, and functions.
- Develop and maintain SSIS packages for ETL processes.
- Ensure data quality, performance tuning, and error handling in ETL pipelines.
2. MS Fabric & Data Engineering
- Work on the Bronze and Gold layers of Microsoft Fabric for data transformation and enrichment.
- Develop and manage T-SQL scripts for data processing and transformation.
- Implement best practices for data modeling and data pipeline optimization.
3. Power BI Integration (Preferred)
- Collaborate with BI teams to support Power BI dashboards and reports.
- Optimize datasets and queries for efficient reporting.
4. Collaboration & Process Improvement
- Work closely with data analysts, business teams, and other engineers to deliver data solutions.
- Ensure best practices in database design, performance tuning, and security.
- Automate workflows and optimize data pipelines for scalability.

Required Skills
Technical Expertise
- Strong hands-on experience in SQL development and T-SQL scripting.
- Experience in SSIS (SQL Server Integration Services) for ETL processes.
- Knowledge of Microsoft Fabric, specifically the Bronze and Gold layers.
- Familiarity with data engineering concepts, data modeling, and data warehousing.
- Exposure to Power BI for data visualization (preferred).
Other Skills
- Ability to troubleshoot and optimize SQL queries for performance.
- Experience in handling large datasets and optimizing ETL processes.
- Strong problem-solving and analytical skills.
- Excellent communication and teamwork abilities.
- Experience with Azure Data Services (Azure Synapse, Data Factory, etc.) is a plus.
- Certifications in Microsoft SQL Server, Azure Data Engineering, or Power BI are beneficial.

Posted 1 week ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary
In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Responsibilities
- Design and build data pipelines and data lakes to automate ingestion of structured and unstructured data, providing fast, optimized, and robust end-to-end solutions.
- Knowledge of data lake and data warehouse concepts.
- Experience working with AWS big data technologies.
- Improve the quality and reliability of data pipelines through monitoring, validation, and failure detection.
- Deploy and configure components to production environments.

Technology: Redshift, S3, AWS Glue, Lambda, SQL, PySpark
Mandatory skill sets: AWS Data Engineer
Preferred skill sets: AWS Data Engineer
Years of experience required: 4-8 years
Education qualification: B.Tech/MBA/MCA
Degrees/Field of Study required: Bachelor Degree, Master Degree

Required Skills: Data Engineering; Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}

Available for Work Visa Sponsorship

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary
In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Responsibilities
- Hands-on experience in Azure Databricks, ADF, or Synapse Analytics.
- Proficiency in Python for data processing and scripting.
- Strong command of SQL: writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and data warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.

Mandatory skill sets: ADF, SQL & Python
Preferred skill sets:
- Experience with Delta Lake, Power BI, or Azure DevOps.
- Knowledge of Spark, Scala, or other distributed processing frameworks.
- Exposure to BI tools like Power BI, Tableau, or Looker.
- Familiarity with data security and compliance in the cloud.
- Experience leading a development team.

Years of experience required: 4-7 years
Education qualification: B.Tech/MBA
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration

Required Skills: Azure Synapse Analytics, Databricks Platform; Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}

Posted 1 week ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Specialism: Data, Analytics & AI
Management Level: Senior Associate

Summary
In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Responsibilities
- Hands-on experience in Azure Databricks, ADF, or Synapse Analytics.
- Proficiency in Python for data processing and scripting.
- Strong command of SQL: writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and data warehouse concepts (e.g., dimensional modeling, star/snowflake schemas).
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.

Mandatory skill sets: ADF, SQL & Python
Preferred skill sets:
- Experience with Delta Lake, Power BI, or Azure DevOps.
- Knowledge of Spark, Scala, or other distributed processing frameworks.
- Exposure to BI tools like Power BI, Tableau, or Looker.
- Familiarity with data security and compliance in the cloud.
- Experience leading a development team.

Years of experience required: 4-7 years
Education qualification: B.Tech/MBA/MCA
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration

Required Skills: Azure Synapse Analytics, Databricks Platform; Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Essential Job Functions
- Participate in data engineering tasks, including data processing and integration activities.
- Assist in the development and maintenance of data pipelines.
- Collaborate with team members to collect, process, and store data.
- Contribute to data quality assurance efforts and adherence to data standards.
- Use data engineering tools and techniques to analyze and generate insights from data.
- Collaborate with data engineers and other analysts on data-related projects.
- Seek out opportunities to enhance data engineering skills and domain knowledge.
- Stay informed about data engineering trends and best practices.

Basic Qualifications
- Bachelor's degree in a relevant field, or an equivalent combination of education and experience.
- Typically 5+ years of relevant industry experience, with a minimum of 2 years in a similar role.
- Proven experience in data engineering.
- Proficiency in data engineering tools and technologies.
- A continuous learner who stays abreast of industry knowledge and technology.

Other Qualifications
- Advanced degree in a relevant field is a plus.
- Relevant certifications, such as Oracle Certified Professional or MySQL Database Administrator, are a plus.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

bengaluru

Work from Office

Netradyne harnesses the power of Computer Vision and Edge Computing to revolutionize the modern-day transportation ecosystem. We are a leader in fleet safety solutions. With growth exceeding 4x year over year, our solution is quickly being recognized as a significant disruptive technology. Our team is growing, and we need forward-thinking, uncompromising, competitive team members to continue to facilitate our growth. ABOUT NETRADYNE Founded in 2015, Netradyne is a technology company that leverages expertise in Artificial Intelligence, Deep Learning, and Edge Computing to bring transformational solutions to the transportation industry. Netradyne's technology is already deployed in thousands of vehicles, and our customers drive everything from passenger cars to semi-trailers on interstates, suburban roads, and rural highways, even off-road. Netradyne is looking for talented engineers to join our Analytics team, comprised of graduates from IITs, IISc, Stanford, UIUC, UCSD, etc. We build cutting-edge AI solutions that enable drivers and fleets to recognize unsafe driving scenarios in real time, preventing accidents and reducing fatalities and injuries. ROLE AND RESPONSIBILITIES You will be embedded within a team of machine learning engineers and data scientists responsible for building and productizing generative AI and deep learning solutions. You will: Design, develop, and deploy production-ready, scalable solutions that utilize GenAI, traditional ML models, data science, and ETL pipelines. Collaborate with cross-functional teams to integrate AI-driven solutions into business operations. Build and enhance frameworks for automation, data processing, and model deployment. Utilize Gen-AI tools and workflows to improve the efficiency and effectiveness of AI solutions. Conduct research and stay updated with the latest advancements in generative AI and related technologies. Deliver key product features within cloud analytics. Requirements: B.Tech, M.Tech, or PhD in computer science, electrical engineering, statistics, or math. At least 5 years of working experience in data science, computer vision, or a related domain. Proven experience with building and deploying generative AI solutions. Strong programming skills in Python and solid fundamentals in computer science, particularly in algorithms, data structures, and OOP. Experience with Gen-AI tools and workflows. Proficiency in both vision-related AI and data analysis using generative AI. Experience with cloud platforms and deploying models at scale. Experience with transformer architectures and large language models (LLMs). Familiarity with frameworks such as TensorFlow, PyTorch, and Hugging Face. Proven leadership and team management skills. Desired skills: Working experience with AWS or Azure AI tools is a plus. Technologies such as Kafka streams, queues, and REST API systems. Programming languages: Python, SQL, C++ (good to have). Tools: PyTorch, FastAPI, MLflow, Hugging Face pipelines, LangGraph, OpenAI. Knowledge of best practices in software development, including version control, testing, and continuous integration.
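To illustrate the kind of real-time event scoring this role describes, here is a minimal, purely hypothetical sketch in plain Python. The event types, thresholds, and heuristic are invented for the example; a production system would use trained models rather than hand-written rules:

```python
# Toy severity thresholds per event type (invented numbers).
THRESHOLDS = {"hard_brake": 0.7, "tailgating": 0.5, "distraction": 0.6}

def severity(event):
    """Return a 0-1 severity score for one driving event (toy heuristic)."""
    base = THRESHOLDS.get(event["type"], 0.3)
    # Speed amplifies risk: scale by speed relative to 100 km/h, capped at 1.0.
    return min(1.0, base * (1 + event["speed_kmh"] / 100))

def alerts(events, cutoff=0.8):
    """Keep only events severe enough to surface to the driver in real time."""
    return [e["type"] for e in events if severity(e) >= cutoff]

events = [
    {"type": "hard_brake", "speed_kmh": 60},   # 0.7 * 1.6 = 1.12, capped at 1.0
    {"type": "distraction", "speed_kmh": 20},  # 0.6 * 1.2 = 0.72, below cutoff
]
print(alerts(events))  # → ['hard_brake']
```

The filter-then-alert shape is the relevant part: the scoring function would be swapped for model inference, and the loop for a streaming consumer.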

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

hyderabad, chennai, coimbatore

Work from Office

Data Engineer Preferred Knowledge/Skills: As a Sr. Software Engineer focused on data engineering, you will build and maintain data pipelines and infrastructure to enable efficient data processing and analytics across cloud environments. Requirements: Expertise in building scalable ETL/ELT pipelines using Apache Spark / PySpark. Strong knowledge of SQL and experience with relational and distributed data systems. Hands-on experience with Python, Scala, or Java for data processing tasks. Familiarity with data modeling, data warehousing concepts, and schema design. Experience with cloud data platforms such as AWS, Azure, or Google Cloud. Responsibilities: Develop and maintain reliable data pipelines and workflows to support analytical and operational data needs. Work closely with analysts, data scientists, and software engineers to integrate and serve data efficiently. Monitor and troubleshoot data systems for performance, quality, and scalability. Ensure data accuracy and consistency through robust validation and testing frameworks.

Posted 1 week ago

Apply

3.0 - 7.0 years

12 - 13 Lacs

pune

Work from Office

HSBC Electronic Data Processing India Pvt Ltd is looking for a Full Stack Developer/Consultant Specialist to join our dynamic team and embark on a rewarding career journey. Developing front-end website architecture. Designing user interactions on web pages. Developing back-end website applications. Ensuring responsiveness of applications. Working alongside graphic designers for web design features. Seeing a project through from conception to finished product. Designing and developing APIs. Meeting both technical and consumer needs.

Posted 1 week ago

Apply

1.0 - 4.0 years

2 - 5 Lacs

thane, hyderabad, howrah

Work from Office

SATYA SAI TRANSPORT-TEAM FOR YOU is looking for a Data Entry operator to join our dynamic team and embark on a rewarding career journey. Responsible for entering and maintaining accurate, up-to-date information in computer systems or databases. Should be proficient in typing. Responsibilities: 1. Entering data into computer systems or databases, ensuring accuracy and completeness. 2. Verifying and correcting data errors, inconsistencies, and discrepancies. 3. Performing regular data quality checks to ensure that data is accurate and up to date. 4. Sorting and organizing data for easy access and retrieval.

Posted 1 week ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

mumbai

Work from Office

About Affinity Affinity is pioneering new frontiers in AdTech: developing solutions that push past today's limits and open up new opportunities. We are a global AdTech company helping publishers discover better ways to monetize and enabling advertisers to reach the right audiences through new touchpoints. Operating across 10+ markets in Asia, the US, and Europe with a team of over 450 experts, we are building privacy-first ad infrastructure that opens up opportunities beyond the walled gardens. Role: Director, Data Science. Work Location: Mumbai (Malad). About Role: We are seeking a Head of Data Science to lead AI/ML initiatives across all our business units, driving measurable impact in digital advertising through sophisticated algorithms and team leadership. This high-impact role combines hands-on technical expertise with strategic vision, directly influencing millions in advertising revenue. You'll collaborate with C-level executives while building industry-leading AdTech solutions and establishing measurement frameworks that set new standards for performance. We're looking for a technical visionary who can balance algorithm development with strategic leadership across our global advertising ecosystem. Roles & Responsibilities: Think Future, Build Present - Create scalable solutions addressing current challenges while building frameworks for growth. Design AI/ML algorithms for performance and programmatic advertising platforms with emphasis on floor price optimization and yield management. Build bid prediction models and supply path optimization algorithms to maximize publisher revenue. Develop algorithms and models that support various targeting for real-time ad delivery. Implement audience segmentation and lookalike modelling for brand campaigns. Think Data - Derive data insights from processes, products, and integrations to achieve efficiency and performance goals. Establish KPI-driven measurement frameworks focused on incrementality gains and attribution accuracy.
Build predictive models for campaign forecasting and budget optimization. Develop fraud detection algorithms and brand safety classification systems. Analyze data and identify trends, patterns, and anomalies in model behavior. Ensure data privacy compliance (GDPR, CCPA), implement secure data handling practices, and participate in AI policy-making. Think Technology - Build enterprise-grade ML/AI architectural solutions that drive real value and measurable business impact. Develop MLOps and data pipelines from ad serving events, implementing real-time feature engineering and model serving infrastructure catering to billions of ads. Build predictive models and dashboards/reports for performance monitoring. Conduct rigorous A/B testing and statistical analysis to validate algorithmic improvements and business impact with explainable-AI algorithms. Think Collaboration - Partner with cross-functional teams (stakeholders, product, developers, and business) to deliver models, dashboards, and solutions that drive revenue KPIs. Think Leadership - Drive strategic ML/AI vision across business units, build and scale high-performing teams, and own P&L responsibility for data science investments. Collaborate with fellow leaders to establish company-wide AI governance and present ROI metrics to executive leadership. Required Skills: 8+ years' experience as a Data Scientist with 3+ years in advertising technology and KPI optimisation. MS/PhD in Computer Science, Statistics, Mathematics, or a related quantitative field. Technical Expertise: Programming: Advanced Python, SQL, with experience in Hadoop and Apache Spark for large-scale data processing. ML/AI stack: TensorFlow, PyTorch, XGBoost, LLMs, scikit-learn for time-series forecasting, recommendation systems, NLP optimisation, and causal inference. Exposure to ML, NN, GenAI algorithms.
Infrastructure: Cloud platforms (GCP/Azure/AWS), MLOps, real-time model inference, feature stores, and ML pipeline orchestration. Visualization: Power BI, Looker, Jupyter Notebooks, and custom dashboard development. Production Systems: Building scalable ML systems with real-time performance monitoring and A/B testing frameworks. Domain Knowledge: Deep knowledge of digital marketing and advertising technologies and concepts such as RTB protocols, header bidding, programmatic advertising ecosystems, and Google ADX. Understanding of Ad Server APIs, DSP/SSP integrations, DMP usage, auction dynamics, attribution modelling, conversion tracking, and audience segmentation. Proven track record of optimising AdTech KPIs with demonstrated results. Leadership - Strong communication skills for technical and executive audiences with the ability to translate KPI improvements into business impact.
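The floor price optimization mentioned in this role can be illustrated with a toy example: a brute-force search for the reserve price that maximizes second-price auction revenue over historical bid samples. All numbers and function names are invented for the sketch; real systems estimate bid distributions rather than replaying raw logs:

```python
def revenue_at_floor(bids_per_auction, floor):
    """Second-price revenue with a reserve: the auction clears only if the
    top bid meets the floor, and the winner pays max(second bid, floor)."""
    total = 0.0
    for bids in bids_per_auction:
        ranked = sorted(bids, reverse=True)
        if ranked and ranked[0] >= floor:
            second = ranked[1] if len(ranked) > 1 else 0.0
            total += max(second, floor)
    return total

def best_floor(bids_per_auction, candidates):
    """Brute-force the revenue-maximising floor over candidate prices."""
    return max(candidates, key=lambda f: revenue_at_floor(bids_per_auction, f))

history = [[2.0, 1.0], [3.0, 0.5], [1.2]]  # hypothetical CPM bids per auction
print(best_floor(history, [0.5, 1.0, 1.5, 2.0]))  # → 2.0
```

The tension the example exposes is real: a higher floor extracts more from strong auctions but forfeits weak ones, which is why floors are tuned per placement against observed bid densities.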

Posted 1 week ago

Apply


8.0 - 13.0 years

25 - 35 Lacs

mumbai

Work from Office

About Affinity Affinity is pioneering new frontiers in AdTech: developing solutions that push past today's limits and open up new opportunities. We are a global AdTech company helping publishers discover better ways to monetize and enabling advertisers to reach the right audiences through new touchpoints. Operating across 10+ markets in Asia, the US, and Europe with a team of over 450 experts, we are building privacy-first ad infrastructure that opens up opportunities beyond the walled gardens. Role: Director, Data Science. Work Location: Mumbai (Malad). About Role: We are seeking a Head of Data Science to lead AI/ML initiatives across all our business units, driving measurable impact in digital advertising through sophisticated algorithms and team leadership. This high-impact role combines hands-on technical expertise with strategic vision, directly influencing millions in advertising revenue. You'll collaborate with C-level executives while building industry-leading AdTech solutions and establishing measurement frameworks that set new standards for performance. We're looking for a technical visionary who can balance algorithm development with strategic leadership across our global advertising ecosystem. Roles & Responsibilities: Think Future, Build Present - Create scalable solutions addressing current challenges while building frameworks for growth. Design AI/ML algorithms for performance and programmatic advertising platforms with emphasis on floor price optimization and yield management. Build bid prediction models and supply path optimization algorithms to maximize publisher revenue. Develop algorithms and models that support various targeting for real-time ad delivery. Implement audience segmentation and lookalike modelling for brand campaigns. Think Data - Derive data insights from processes, products, and integrations to achieve efficiency and performance goals. Establish KPI-driven measurement frameworks focused on incrementality gains and attribution accuracy.
Build predictive models for campaign forecasting and budget optimization. Develop fraud detection algorithms and brand safety classification systems. Analyze data and identify trends, patterns, and anomalies in model behavior. Ensure data privacy compliance (GDPR, CCPA), implement secure data handling practices, and participate in AI policy-making. Think Technology - Build enterprise-grade ML/AI architectural solutions that drive real value and measurable business impact. Develop MLOps and data pipelines from ad serving events, implementing real-time feature engineering and model serving infrastructure catering to billions of ads. Build predictive models and dashboards/reports for performance monitoring. Conduct rigorous A/B testing and statistical analysis to validate algorithmic improvements and business impact with explainable-AI algorithms. Think Collaboration - Partner with cross-functional teams (stakeholders, product, developers, and business) to deliver models, dashboards, and solutions that drive revenue KPIs. Think Leadership - Drive strategic ML/AI vision across business units, build and scale high-performing teams, and own P&L responsibility for data science investments. Collaborate with fellow leaders to establish company-wide AI governance and present ROI metrics to executive leadership. Required Skills: 8+ years' experience as a Data Scientist with 3+ years in advertising technology and KPI optimisation. MS/PhD in Computer Science, Statistics, Mathematics, or a related quantitative field. Technical Expertise: Programming: Advanced Python, SQL, with experience in Hadoop and Apache Spark for large-scale data processing. ML/AI stack: TensorFlow, PyTorch, XGBoost, LLMs, scikit-learn for time-series forecasting, recommendation systems, NLP optimisation, and causal inference. Exposure to ML, NN, GenAI algorithms.
Infrastructure: Cloud platforms (GCP/Azure/AWS), MLOps, real-time model inference, feature stores, and ML pipeline orchestration. Visualization: Power BI, Looker, Jupyter Notebooks, and custom dashboard development. Production Systems: Building scalable ML systems with real-time performance monitoring and A/B testing frameworks. Domain Knowledge: Deep knowledge of digital marketing and advertising technologies and concepts such as RTB protocols, header bidding, programmatic advertising ecosystems, and Google ADX. Understanding of Ad Server APIs, DSP/SSP integrations, DMP usage, auction dynamics, attribution modelling, conversion tracking, and audience segmentation. Proven track record of optimising AdTech KPIs with demonstrated results. Leadership - Strong communication skills for technical and executive audiences with the ability to translate KPI improvements into business impact.

Posted 1 week ago

Apply

9.0 - 14.0 years

35 - 45 Lacs

bengaluru

Work from Office

Join our Team About this opportunity: Join Ericsson as a Data Scientist. This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring the smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in ML Ops and expertise in both machine learning model development and deployment will be highly advantageous. What you will do: Python Development: Write clean, efficient and maintainable Python code to support data engineering tasks including collection, transformation and integration with ML models. Data Pipeline Development: Design, build and maintain robust data pipelines to gather, process and transform data from multiple sources into formats suitable for ML and analytics, leveraging ELK, Python and other leading technologies. Spark Knowledge: Apply core Spark concepts for distributed data processing where required, and optimize workflows for performance and scalability. ELK Integration: Implement ElasticSearch, Logstash and Kibana for data ingestion, indexing, search and real-time visualization. Knowledge of OpenSearch and related tooling is beneficial. Dashboards and Visualization: Create and manage Grafana and Kibana dashboards to deliver real-time insights into application and data performance. Model Deployment and Monitoring: Deploy machine learning models and implement monitoring solutions to track model performance, drift, and health. Data Quality and Governance: Implement data quality checks and data governance practices to ensure data accuracy, consistency, and compliance with data privacy regulations. 
MLOps (Added Advantage): Contribute to the implementation of MLOps practices, including model deployment, monitoring, and automation of machine learning workflows. Documentation: Maintain clear and comprehensive documentation for data engineering processes, ELK configurations, machine learning models, visualizations, and deployments. The skills you bring: Core Skills: Strong Python programming skills, experience building data pipelines, and knowledge of the ELK stack (ElasticSearch, Logstash, Kibana). Distributed Processing: Familiarity with Spark fundamentals and when to leverage distributed processing for large datasets. Cloud & Containerization: Practical experience deploying applications and services on Kubernetes. Familiarity with Docker and container best practices. Monitoring & Visualization: Hands-on experience creating dashboards and alerts with Grafana and Kibana. ML & MLOps: Experience collaborating on ML model development, and deploying and monitoring ML models in production; knowledge of model monitoring, drift detection, and CI/CD for ML is a plus. Experience: 9 to 14 years. Why join Ericsson? What happens once you apply? Primary country and city: India (IN) || Bangalore. Req ID: 772044
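As a small illustration of the ELK ingestion work this role describes, the sketch below renders log documents in Elasticsearch's bulk (NDJSON) format using only the standard library. The index and field names are invented; a real pipeline would POST this payload to the `_bulk` endpoint, typically via an Elasticsearch client library:

```python
import json

def to_bulk(index, docs):
    """Render documents in Elasticsearch's newline-delimited bulk format:
    an action line followed by the document, one pair per doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"   # bulk payloads must end with a newline

docs = [{"service": "api", "level": "ERROR", "msg": "timeout"},
        {"service": "api", "level": "INFO", "msg": "ok"}]
payload = to_bulk("app-logs", docs)
print(payload.count("\n"))  # → 4 (two action lines + two documents)
```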

Posted 1 week ago

Apply

10.0 - 15.0 years

40 - 45 Lacs

hyderabad

Work from Office

We are seeking a highly skilled Staff Engineer with strong expertise in Cribl pipelines and data streaming to lead the design, development, and optimization of our observability and data engineering workflows. The ideal candidate will bring hands-on experience with Cribl Stream/Edge, advanced ETL practices, real-time streaming frameworks (Spark, Flink, Kafka), and deep knowledge of observability platforms (Splunk, Prometheus, Grafana, TSDBs). This role is both technical and strategic, requiring deep problem-solving skills, architectural vision, and the ability to mentor and guide engineering teams. What you get to do in this role: Cribl Pipelines: Architect and optimize large-scale data pipelines using Cribl Stream and Cribl Edge for ingestion, transformation, and routing. Streaming & Real-Time Processing: Design and implement real-time data pipelines using Apache Spark, Apache Flink, and Kafka streaming to handle high-throughput, low-latency observability data. ETL/Data Engineering: Apply advanced ETL practices to cleanse, enrich, filter, and normalize diverse data sources before downstream ingestion. Observability Data Management: Manage high-volume telemetry data (logs, metrics, traces, events) and design strategies for noise reduction, performance optimization, and cost control. Integration: Build robust integrations with Splunk, Elasticsearch, Kafka, S3, Prometheus, VictoriaMetrics, InfluxDB, and other TSDBs. Scalability & Performance Tuning: Ensure Cribl and streaming pipelines perform reliably at scale, handling high-cardinality and high-throughput datasets. Best Practices & Governance: Define and enforce observability ingestion best practices, schema governance, and data quality standards. Leadership & Mentorship: Guide engineers in pipeline design, streaming technologies, and observability best practices. Innovation: Explore emerging technologies in observability, streaming, and AI-driven analytics to continuously improve the architecture.
To be successful in this role you have: Experience in leveraging or critically thinking about how to integrate AI into work processes, decision-making, or problem-solving. This may include using AI-powered tools, automating workflows, analyzing AI-driven insights, or exploring AI's potential impact on the function or industry. 10+ years of software/data engineering experience, with at least 5+ years hands-on in Cribl (Stream/Edge). Strong background in ETL pipelines, real-time streaming, and distributed data processing. Hands-on expertise with Apache Spark (Structured Streaming), Apache Flink, and Kafka streaming. Deep understanding of observability data (logs, metrics, traces) and platforms such as Splunk, Elastic, Prometheus, Grafana, and OpenTelemetry. Experience with Time Series Databases (TSDBs) such as VictoriaMetrics, InfluxDB, TimescaleDB, or ClickHouse. Proficiency in scripting/programming (Python, Go, or Java) for pipeline extensions and automation. Strong knowledge of Kafka, S3, and cloud-native services (AWS/GCP/Azure) for data transport and storage. Experience with scalability, performance tuning, and cost optimization in observability pipelines. Strong collaboration and leadership skills to influence cross-functional teams. Exposure to AI/ML-based anomaly detection or predictive observability use cases. Previous Staff/Principal Engineer experience in large-scale data systems. FD21
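In the spirit of the noise reduction, masking, and routing work described above, here is a minimal stand-in pipeline in plain Python. The field names are invented, and Cribl pipelines themselves are configured declaratively in Cribl Stream rather than hand-coded like this; the sketch only shows the drop/mask/route shape:

```python
def process(event):
    """Per-event transform: drop noisy DEBUG logs, mask a sensitive field."""
    if event.get("level") == "DEBUG":
        return None                      # noise reduction: drop entirely
    event = dict(event)                  # avoid mutating the caller's event
    if "user_email" in event:
        event["user_email"] = "***"      # masking before downstream delivery
    return event

def route(events):
    """Send metric-like events to one sink, everything else to the log sink."""
    sinks = {"metrics": [], "logs": []}
    for e in filter(None, map(process, events)):
        sinks["metrics" if e["source"] == "prometheus" else "logs"].append(e)
    return sinks

events = [
    {"source": "app", "level": "DEBUG", "msg": "tick"},
    {"source": "app", "level": "ERROR", "msg": "boom", "user_email": "a@b.c"},
    {"source": "prometheus", "level": "INFO", "msg": "cpu=0.9"},
]
out = route(events)
print(len(out["logs"]), len(out["metrics"]))  # → 1 1
```

Dropping and downsampling at the routing tier, before data reaches Splunk or a TSDB, is exactly where the cost-control leverage in such pipelines comes from.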

Posted 1 week ago

Apply

12.0 - 17.0 years

45 - 50 Lacs

mumbai

Work from Office

Company: Mercer. Description: We are seeking a talented individual to join our Data Management team at Wealth Investments Mercer. This role will be based in Mumbai. This is a hybrid role requiring at least three days a week in the office. Director - Data Management. We are looking for a dynamic and entrepreneurial leader to establish and lead a brand-new Data Team with a significant mandate to support our business in all critical processes. This role is ideal for a visionary with a tech-first mindset who can blend people, process, and technology to deliver high-impact, pragmatic solutions that directly drive business value. The position will partner closely with the Global Head of Data, the Global Data Team, and business stakeholders, enabling day-to-day operational excellence while innovating to meet evolving needs. The ideal candidate will think and act like a business owner, focusing on ROI, mitigating risks, and ensuring value delivery. They will have deep expertise in the investment domain and a proven track record in managing data analysts, implementing robust data management practices, and delivering creative solutions that scale. We will count on you to: Strategic Leadership: Establish and lead a high-performing Data Team from the ground up, instilling a culture of accountability and excellence. Develop and execute a data strategy that aligns with business priorities and delivers measurable impact. Serve as a trusted partner working with the Global Data team and senior business leaders, anticipating needs and proactively offering solutions. Solution Delivery: Design and deliver pragmatic, scalable solutions leveraging a balanced mix of people, processes, and technology. Collaborate with technology teams to build both tactical and strategic solutions, ranging from quick-win tools to enterprise-scale platforms.
Oversee data pipelines, sourcing, and management for investment advisory and OCIO business areas, ensuring timely, accurate, and high-quality data delivery. Introduce and oversee a lightweight development capability to quickly design and deploy targeted end-user solutions, using low-code, no-code, and scripting tools. Data Management Excellence: Design, implement, and mature core data management practices across governance, quality, lineage, metadata, and stewardship. Establish policies, standards, and controls to ensure data is accurate, consistent, and fit for purpose across the investment lifecycle. Implement data quality monitoring and remediation processes, ensuring issues are proactively identified and addressed. Business Partnership & Communication: Act as the voice of the data team, effectively communicating the team's value and impact to business leaders. Present data-driven recommendations to influence strategic decisions. What you need to have: 12+ years of experience in Information Systems Management or a related discipline, with a focus on the investment domain, i.e. Investment Banking/Asset Management (middle office/back office). Familiarity with designing data models and defining data architecture schemas for Investment Banking or Asset Management purposes. Proficiency in data processing and working with data warehouse/big data environments relevant to Investment Banking or Asset Management. Excellent skills in Relational Database Management Systems (RDBMS) to ingest, maintain, and distribute data for Equity, Fixed Income, Alternatives, and other asset classes. Proficiency in process flow mapping and process redesign, specifically for Investment Banking or Asset Management data. Proficiency in project management tools such as Jira (Kanban) and Confluence, or similar tools, is a plus. Experience in business process improvement, including conducting data investigations to determine the root cause of data issues.
Familiarity with ETL tools for efficiently and accurately importing new data from various external/internal sources, as well as data sourcing and data mapping, specifically for investment data. Good analytical and technical abilities, as the role involves spotting anomalies and creating processes to control them. Good communication skills to ensure issues are clearly identified and explained to the various interested stakeholders. Ability to work collaboratively with multi-disciplinary and global teams to ensure goals are met. What makes you stand out: Experience and/or familiarity with no-code/low-code solutions and the latest AI-based coding tools. Knowledge and experience gained in implementing financial data solutions such as S&P EDM, Alpha Data Platform, and/or FactSet. Industry qualifications: CFA, CIPM, or DAMA. Demonstrated willingness to support and mentor colleagues. Demonstrated experience in taking a hands-on approach, working closely with the team and colleagues to design, develop, and implement data solutions. Excellent communication skills working closely with colleagues at all levels. Experience of business process improvement in asset management, for example, performing data investigations to determine the root cause of data quality issues. Why join our team: We help you be your best through professional development opportunities, interesting work, and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients, and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.
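The data quality monitoring and remediation responsibilities above can be sketched with a small, self-contained example; the record fields (a security-master extract keyed by ISIN) and the two rules are hypothetical:

```python
# Hypothetical security-master extract with deliberate defects.
records = [
    {"isin": "US0378331005", "asset_class": "Equity", "price": 189.5},
    {"isin": "US0378331005", "asset_class": "Equity", "price": 189.5},  # dup
    {"isin": "XS1234567890", "asset_class": "Fixed Income", "price": None},
]

def check_quality(rows):
    """Flag duplicate identifiers and missing prices, returning (row, reason)
    pairs so downstream remediation can locate each defect."""
    issues = []
    seen = set()
    for i, r in enumerate(rows):
        if r["isin"] in seen:
            issues.append((i, "duplicate isin"))
        seen.add(r["isin"])
        if r["price"] is None:
            issues.append((i, "missing price"))
    return issues

print(check_quality(records))  # → [(1, 'duplicate isin'), (2, 'missing price')]
```

Real implementations run such rules as scheduled controls with thresholds and alerting, but the rule-per-defect, row-level-evidence pattern is the same.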

Posted 1 week ago

Apply