5.0 - 10.0 years
6 - 9 Lacs
Gurugram
Work from Office
Who We Are
Konrad is a next-generation digital consultancy. We are dedicated to solving complex business problems for our global clients with creative and forward-thinking solutions. Our employees enjoy a culture built on innovation and a commitment to creating best-in-class digital products in use by hundreds of millions of consumers around the world. We hire exceptionally smart, analytical, and hard-working people who are lifelong learners.

About The Role
As a Senior Data Engineer, you'll be tasked with designing, building, and maintaining scalable data platforms and pipelines. Your deep knowledge of data platforms such as Azure Fabric, Databricks, and Snowflake will be essential as you collaborate closely with data analysts, scientists, and other engineers to ensure reliable, secure, and efficient data solutions.

What You'll Do
- Design, build, and manage robust data pipelines and data architectures.
- Implement solutions leveraging platforms such as Azure Fabric, Databricks, and Snowflake.
- Optimize data workflows, ensuring reliability, scalability, and performance.
- Collaborate with internal stakeholders to understand data needs and deliver tailored solutions.
- Ensure data security and compliance with industry standards and best practices.
- Perform data modelling and data extraction, transformation, and loading (ETL/ELT).
- Identify and recommend innovative solutions to enhance data quality and analytics capabilities.

Qualifications
- Bachelor's degree or higher in Computer Science, Data Engineering, Information Technology, or a related field.
- 5+ years of professional experience as a Data Engineer or in a similar role.
- Proficiency in data platforms such as Azure Fabric, Databricks, and Snowflake.
- Hands-on experience with data pipeline tools, cloud services, and storage solutions.
- Strong programming skills in SQL, Python, or related languages.
- Experience with big data technologies and concepts (Spark, Hadoop, Kafka).
- Excellent analytical, troubleshooting, and problem-solving skills.
- Ability to communicate technical concepts clearly to non-technical stakeholders.

Nice to Have
- Certifications related to Azure Data Engineering, Databricks, or Snowflake.
- Familiarity with DevOps practices and CI/CD pipelines.
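The pipeline work this posting describes usually pairs a batch transform with basic hygiene filtering. A minimal PySpark sketch of that pattern, assuming hypothetical storage paths and column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Hypothetical source/target paths -- substitute your own storage locations.
RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/orders/"
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/orders/"

# Extract: read raw JSON dropped by an upstream system.
raw = spark.read.json(RAW_PATH)

# Transform: normalize types, derive a partition column, drop obvious bad rows.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
)

# Load: write partitioned Parquet for downstream consumers.
clean.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)
```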
Posted 2 weeks ago
3.0 - 10.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Job_Description":" About tsworks: tsworks is a leading technology innovator, providing transformative products and services designed for the digital-first world. Our mission is to provide domain expertise, innovative solutions and thought leadership to drive exceptional user and customer experiences. Demonstrating this commitment , we have a proven track record of championing digital transformation for industries such as Banking, Travel and Hospitality, and Retail (including e-commerce and omnichannel), as well as Distribution and Supply Chain, delivering impactful solutions that drive efficiency and growth. We take pride in fostering a workplace where your skills, ideas, and attitude shape meaningful customer engagements. About This Role: tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services Team. You will get hands-on experience with projects employing industry-leading technologies. This would initially be focused on the operational readiness and maintenance of existing applications and would transition into a build and maintenance role in the long run. Requirements Position: DataEngineer II Experience: 3 to 10+Years Location: Bangalore, India Mandatory RequiredQualification Strongproficiency in Azure services such as Azure Data Factory, Azure Databricks,Azure Synapse Analytics, Azure Storage, etc. Expertise inDevOps and CI/CD implementation Good knowledgein SQL ExcellentCommunication Skills In This Role, YouWill Design,implement, and manage scalable and efficient data architecture on the Azurecloud platform. Develop andmaintain data pipelines for efficient data extraction, transformation, andloading (ETL) processes. Perform complexdata transformations and processing using Azure Data Factory, Azure Databricks,Snowflakes data processing capabilities, or other relevant tools. Develop andmaintain data models within Snowflake and related tools to support reporting,analytics, and business intelligence needs. Collaboratewith cross-functional teams to understand data requirements and designappropriate data integration solutions. Integrate datafrom various sources, both internal and external, ensuring data quality andconsistency. Ensure datamodels are designed for scalability, reusability, and flexibility. Implement dataquality checks, validations, and monitoring processes to ensure data accuracyand integrity across Azure and Snowflake environments. Adhere to datagovernance standards and best practices to maintain data security andcompliance. Handlingperformance optimization in ADF and Snowflake platforms Collaboratewith data scientists, analysts, and business stakeholders to understand dataneeds and deliver actionable insights Provideguidance and mentorship to junior team members to enhance their technicalskills. Maintaincomprehensive documentation for data pipelines, processes, and architecturewithin both Azure and Snowflake environments including best practices,standards, and procedures. Skills Knowledge Bachelors orMasters degree in Computer Science, Engineering, or a related field. 3 + Years ofexperience in Information Technology, designing, developing and executingsolutions. 3+ Years ofhands-on experience in designing and executing data solutions on Azure cloudplatforms as a Data Engineer. Strongproficiency in Azure services such as Azure Data Factory, Azure Databricks,Azure Synapse Analytics, Azure Storage, etc.
Posted 2 weeks ago
5.0 - 10.0 years
15 - 19 Lacs
Bengaluru
Work from Office
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines so that they are resilient to change, modular, flexible, scalable, reusable, and cost-effective.

Key Responsibilities:
- Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency
- Mentor junior data engineers within the organization
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric)
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2, Azure Blob Storage)
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability
- Ensure data quality and integrity through data validation techniques and frameworks
- Develop and maintain documentation for data processes, configurations, and best practices
- Monitor and troubleshoot data pipeline issues to ensure timely resolution
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge
- Manage the CI/CD process for deploying and maintaining data solutions

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience) and able to demonstrate high proficiency in programming fundamentals
- At least 5 years of proven experience as a Data Engineer or in a similar role dealing with data and ETL processes; 5-10 years of experience overall
- Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage, and Azure Data Lake Gen2
- Experience using SQL DML to query modern RDBMSs efficiently (e.g., SQL Server, PostgreSQL)
- Strong understanding of software engineering principles and how they apply to data engineering (e.g., CI/CD, version control, testing)
- Experience with big data technologies (e.g., Spark)
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration skills

Preferred Qualifications:
- Learning agility
- Technical leadership
- Consulting and managing business needs
- Strong experience in Python preferred, but experience in other languages such as Scala, Java, C#, etc. is accepted
- Experience building Spark applications utilizing PySpark
- Experience with file formats such as Parquet, Delta, Avro
- Experience efficiently querying API endpoints as a data source
- Understanding of the Azure environment and related services such as subscriptions, resource groups, etc.
- Understanding of Git workflows in software development
- Using Azure DevOps pipelines and repositories to deploy and maintain solutions
- Understanding of Ansible and how to use it in Azure DevOps pipelines

Chevron participates in E-Verify in certain locations as required by law.
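Optimizing pipelines over Data Lake Gen2, as this posting asks, often comes down to letting Spark prune partitions rather than scanning full tables. A hedged Databricks-style sketch with a hypothetical Delta table and columns:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

# Hypothetical Delta table on Azure Data Lake Storage Gen2, partitioned by event_date.
SOURCE = "abfss://curated@examplelake.dfs.core.windows.net/telemetry/"

# Filtering on the partition column up front lets Spark skip irrelevant
# partitions instead of reading the whole table.
daily = (
    spark.read.format("delta").load(SOURCE)
         .filter(F.col("event_date") == "2024-06-01")
         .groupBy("device_id")
         .agg(F.count("*").alias("events"),
              F.avg("latency_ms").alias("avg_latency_ms"))
)

daily.write.mode("overwrite").saveAsTable("reporting.device_daily_stats")
```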
Posted 2 weeks ago
5.0 - 10.0 years
16 - 19 Lacs
Bengaluru
Work from Office
The Instrumented Protective Systems (IPS) Engineer is part of the Reliability and Integrity team within the Chevron ENGINE Center and will offer comprehensive support of Instrumented Protective Systems in Chevron's Upstream, Downstream, and Midstream facilities across the enterprise. Responsibilities will include tasks involving safeguard performance and execution of functional assessments associated with the Safety Life Cycle, in addition to supporting facility execution of reliability and integrity tasks associated with instrumented protective systems.

Key Responsibilities:
- IPS Reliability, Integrity & Maintenance Support: Provide support for the Operate and Maintain phases of the IPS lifecycle. Optimize and improve asset strategies and reliability performance through data-driven decision-making. Elevate Business Unit data quality to improve capabilities for analysis and improve facility uptime. Support functional safety concepts: Functional Safety Assessments, SIL calculations, Independent Protection Layer (IPL) and Safety Instrumented Function (SIF) allocations, and Safety Requirement Specifications.
- Technical Services: Deliver technical assistance for typical Instrumented Protective Systems issues. Examples include analyzing IPS test results; verifying SIL levels for Safety Instrumented Systems; supporting the collection, analysis, and reporting of activation metrics; maintaining Safety Requirements Specification (SRS) documentation; supporting change management for IPS; and providing IPS technical support for small capital projects.
- Documentation and Tools: Utilize instrumentation databases and design tools and support various IPS reviews. Examples include Surface Facilities Digital Twin, Maximo, Meridium, JDE, Documentum, Stature (ARA), MANGAN SLM, SMART (14c), Exsilentia, and Isograph Reliability Workbench.

Required Qualifications:
- Bachelor's degree in chemical or electrical engineering (B.Sc./B.Tech.) from a deemed/recognized (AICTE) university
- Strong knowledge of Instrumented Protective Systems and Management of Functional Safety reviews
- Knowledge and understanding of industry practices and standards applicable to Instrumented Protective Systems (e.g., ISA, IEC, API)
- Strong written and verbal communication skills to interact with Chevron's global employee workforce

Preferred Qualifications:
- 5+ years of direct field experience with an owner-operator in a hydrocarbon production/processing environment in an Instrumented Protective Systems role
- Experience with Instrumented Protective Systems and Management of Functional Safety concepts: Functional Safety Assessments, SIL calculations, Independent Protection Layer (IPL) and Safety Instrumented Function (SIF) allocations, Safety Requirement Specifications
- Development and maintenance of IPS tools: Mangan, Exsilentia
- Working knowledge of data science/analytics
- Reliability Centered Maintenance (RCM), Computerized Maintenance Management Systems (CMMS), Safety Requirement Specifications (SRS), Pre-Startup Safety Review (PSSR), Management of Change (MOC), and Incident Investigation reporting
- Configuration and system experience with Yokogawa ProSafe, Honeywell Safety Manager, Triconex, DeltaV, SIS

Chevron participates in E-Verify in certain locations as required by law.
Posted 2 weeks ago
5.0 - 10.0 years
9 Lacs
Bengaluru
Work from Office
About the position: We are seeking experienced engineers with expertise in wells engineering and operations. As a Wells Engineer, you will provide support across Chevron's global portfolio, leveraging modeling, analysis, and engineering insights. This position may also provide expertise for data analytics, along with the demonstrated ability to translate data analysis into successful business outcomes.

Key responsibilities:
- Provide engineering modelling support for drilling, completions, and well performance areas to Chevron's Wells team, leveraging modeling, analysis, and engineering insights
- Monitor drilling and completions data, offering insights and recommendations
- Provide drilling and completions engineering modeling support including, but not limited to, torque and drag, hydraulics, surge and swab, and maximum overpull, using industry-standard software packages
- Conduct lookbacks; generate reports and slide decks and share them with stakeholders to drive performance
- Collaborate with Wells teams to ensure safe and cost-effective operations
- Perform benchmarking analysis using various internal and external data sources; identify trends and technical comparisons between business units and competitors
- Collaborate closely with stakeholders in digital teams, business units, and center functions to ensure data quality; utilize analytical tools to report metrics and performance
- Explore how AI can be utilized in the competitive performance work scope

Required qualifications:
- Bachelor's or Master's degree from a recognized university in petroleum, mechanical, chemical, civil, or electrical engineering with a minimum CGPA of 7.5
- Minimum 5 years of work experience in the oil and gas industry specializing in drilling, with exposure to completions
- Experience in well design, drilling operations, directional drilling, performance monitoring and optimization of well performance, performance analysis, and benchmarking
- Highly experienced and skilled in running and interpreting drilling/completion models to improve well performance
- In-depth knowledge of basic wells engineering concepts including drilling techniques, well control, fluids property management, drilling mechanics, and performance monitoring
- Strong analytical skills to evaluate critical well design operating parameters through data and trend analysis
- Technical skills to query tabular data models and develop analytical performance reports, and motivation to grow new technical capabilities
- Strong communication skills and demonstrated ability to work and collaborate effectively with a diverse international workforce in a team environment
- Field experience at a rig site as a field engineer, drill-site representative (DSR), or DD/MWD engineer
- Experience with engineering applications and software such as ERA, WellView, proNova, Corva, and Power BI
- Flexibility to work in shifts, with opportunities to leverage flexible work hours
- Position has potential for international travel

Chevron ENGINE supports global operations, supporting business requirements across the world. Accordingly, the work hours for employees will be aligned to support business requirements. The standard work week will be Monday to Friday, with working hours of 8:00am to 5:00pm or 1:30pm to 10:30pm. Chevron participates in E-Verify in certain locations as required by law.
Posted 2 weeks ago
10.0 - 15.0 years
22 - 30 Lacs
Bengaluru
Work from Office
Lead Data Architects lead the design and implementation of data collection, storage, transformation, orchestration (movement), and consumption to achieve optimum value from data. They are the technical leaders within data delivery teams. They play a key role in modeling data for optimal reuse, interoperability, security, and accessibility, as well as in the design of efficient ingestion and transformation pipelines. They ensure data accessibility through a performant, cost-effective consumption layer that supports use by citizen developers, data scientists, AI, and application integration. And they instill trust through the employment of data quality frameworks and tools. The data architect at Chevron predominantly works within the Azure Data Analytics Platform, but is not limited to it. The Senior Data Architect is responsible for optimizing the cost of delivering data. They are also responsible for ensuring compliance with enterprise standards and are expected to contribute to the evolution of those standards as technologies and best practices change.

Key Responsibilities:
- Design and oversee the entire data architecture strategy
- Mentor junior data architects to ensure skill development in alignment with the team strategy
- Design and implement complex, scalable, high-performance data architectures that meet business requirements
- Model data for optimal reuse, interoperability, security, and accessibility
- Develop and maintain data flow diagrams and data dictionaries
- Collaborate with stakeholders to understand data needs and translate them into technical solutions
- Ensure data accessibility through a performant, cost-effective consumption layer that supports use by citizen developers, data scientists, AI, and application integration
- Ensure data quality, integrity, and security across all data systems

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience)
- 10-15 years of overall experience, with at least 5 years of proven experience as a Data Architect or in a similar role
- Strong knowledge of data modeling, data warehousing, and data integration techniques
- Proficiency in database management systems (e.g., SQL Server, Oracle, PostgreSQL)
- Experience with big data technologies (e.g., Hadoop, Spark) and data lake solutions (e.g., Azure Data Lake, AWS Lake Formation)
- Experience in data modeling, ERDs, star and/or snowflake schemas, and physical model design for analytics and application integration
- Experience in designing data pipelines for optimal performance, resiliency, and cost efficiency
- Experience translating business objectives and goals into technical architecture for data solutions
- Familiarity with cloud platforms (e.g., Microsoft Azure, AWS, Google Cloud Platform)
- Strong understanding of data governance and security best practices
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration skills
- Track record of defining/implementing data architecture frameworks and governance around master data, metadata, and modeling

Preferred Qualifications:
- Experience in Erwin, Azure Synapse, Azure Databricks, Azure DevOps, SQL, Power BI, Spark, Python, R
- Ability to drive business results by building optimal-cost data landscapes
- Familiarity with Azure AI/ML services; Azure analytics (Event Hub, Azure Stream Analytics); scripting with Ansible
- Experience with machine learning and advanced analytics
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes)
- Understanding of CI/CD pipelines and automated testing frameworks
- Certifications such as AWS Certified Solutions Architect, IBM Certified Data Architect, or similar are a plus

Chevron participates in E-Verify in certain locations as required by law.
Posted 2 weeks ago
3.0 - 5.0 years
12 - 15 Lacs
Bengaluru
Work from Office
Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.

Key responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2, Azure Blob Storage).
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.

Required Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience) and able to demonstrate high proficiency in programming fundamentals.
- 3-5 years of experience, with at least 2 years of proven experience as a Data Engineer or in a similar role dealing with data and ETL processes.
- Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage, and Azure Data Lake Gen2.
- Experience using SQL DML to query modern RDBMSs efficiently (e.g., SQL Server, PostgreSQL).
- Strong understanding of software engineering principles and how they apply to data engineering (e.g., CI/CD, version control, testing).
- Experience with big data technologies (e.g., Spark).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Preferred Qualifications:
- Learning agility
- Technical leadership
- Consulting and managing business needs
- Strong experience in Python preferred, but experience in other languages such as Scala, Java, C#, etc. is accepted.
- Experience building Spark applications utilizing PySpark.
- Experience with file formats such as Parquet, Delta, Avro.
- Experience efficiently querying API endpoints as a data source.
- Understanding of the Azure environment and related services such as subscriptions, resource groups, etc.
- Understanding of Git workflows in software development.
- Using Azure DevOps pipelines and repositories to deploy and maintain solutions.
- Understanding of Ansible and how to use it in Azure DevOps pipelines.

Chevron participates in E-Verify in certain locations as required by law.
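The "data validation techniques and frameworks" line above often starts as a few hand-rolled checks before a team adopts a full framework. A minimal PySpark sketch, with hypothetical table and column names:

```python
from pyspark.sql import DataFrame, functions as F

def validate(df: DataFrame, key: str, required: list) -> dict:
    """Run simple data quality checks and return a metrics dict.

    A hand-rolled sketch; production teams often use a framework instead.
    """
    total = df.count()
    metrics = {
        "row_count": total,
        "duplicate_keys": total - df.select(key).distinct().count(),
    }
    for col in required:
        metrics[f"null_{col}"] = df.filter(F.col(col).isNull()).count()
    return metrics

# Example: fail the pipeline run if checks do not pass (hypothetical table).
# df = spark.read.table("curated.orders")
# m = validate(df, key="order_id", required=["order_id", "amount", "order_date"])
# assert m["duplicate_keys"] == 0 and m["null_order_id"] == 0, m
```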
Posted 2 weeks ago
6.0 - 10.0 years
27 - 42 Lacs
Chennai
Work from Office
Job Summary
We are seeking a highly skilled Sr. Developer with 3 to 10 years of experience specializing in Reltio MDM. The ideal candidate will work in a hybrid model with day shifts. This role does not require travel. The candidate will contribute to the company's mission by developing and maintaining high-quality MDM solutions that drive business success and societal impact.

Responsibilities
- Develop and maintain Reltio MDM solutions to ensure data quality and integrity.
- Collaborate with cross-functional teams to gather and analyze business requirements.
- Design and implement data models and workflows in Reltio MDM.
- Provide technical expertise and support for Reltio MDM configurations and customizations.
- Conduct performance tuning and optimization of Reltio MDM applications.
- Ensure compliance with data governance and security policies.
- Troubleshoot and resolve issues related to Reltio MDM.
- Create and maintain technical documentation for Reltio MDM solutions.
- Participate in code reviews and provide constructive feedback to team members.
- Stay updated with the latest trends and best practices in MDM and data management.
- Contribute to the continuous improvement of development processes and methodologies.
- Mentor junior developers and provide guidance on best practices.
- Collaborate with stakeholders to ensure successful project delivery.

Qualifications
- Strong expertise in Reltio MDM and data management.
- Solid understanding of data modeling and data integration techniques.
- Proficiency in performance tuning and optimization.
- Experience in troubleshooting and resolving technical issues.
- Excellent communication and collaboration skills.
- Strong attention to detail and a commitment to quality.
- Ability to work independently and as part of a team.
- Proactive approach to learning and staying current with industry trends.
- Bachelor's degree in Computer Science or a related field.
- Experience with Agile development methodologies.
- Ability to mentor and guide junior team members.
- Strong problem-solving skills.
- Commitment to delivering high-quality solutions that meet business needs.

Certifications Required: N
Posted 2 weeks ago
5.0 - 10.0 years
25 - 40 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Hybrid
- Write specifications for Master Data Management builds
- Create requirements, including rules of survivorship, for migrating data to Markit EDM
- Support the implementation of data governance
- Support testing
- Develop data quality reports for the data warehouse

Required Candidate Profile
- 3+ years of experience documenting data management requirements
- Experience writing technical specifications for MDM builds
- Familiarity with enterprise data warehouses
- Knowledge of data governance
Posted 2 weeks ago
6.0 - 9.0 years
15 - 20 Lacs
Mumbai
Work from Office
Overview
We are seeking an experienced and detail-oriented Senior Associate to join our Real Estate Data Team. This role will focus on ensuring the accuracy, completeness, and reliability of real estate data within our systems, supporting decision-making, compliance, and reporting functions. The ideal candidate has a strong background in real estate data management, quality control, and analytics, with a keen eye for detail and a passion for data integrity.

Responsibilities
You will work as part of a growing team of real estate performance analysts who provide real estate direct property indexes, benchmarks, performance analysis reports, and custom/bespoke analysis to global real estate asset managers and asset owners.

Key Responsibilities:
- Data Quality Assurance: Implement and oversee data quality controls for real estate data, including validation, cleansing, and verification processes. Perform regular audits of data to ensure accuracy and compliance with internal and external standards. Develop and maintain data quality metrics and KPIs to track and improve data quality over time.
- Data Management & Improvement: Collaborate with cross-functional teams to understand data needs and requirements. Identify and address data quality issues and root causes by designing and implementing solutions that improve data reliability. Coordinate with data providers and vendors to ensure timely and accurate delivery of real estate data.
- Reporting & Analytics: Generate periodic reports on data quality performance, trends, and improvement areas for senior management. Support data-driven decisions by providing accurate data and insights to stakeholders across the organization. Assist in the development of dashboards and visualization tools for real-time monitoring of data quality metrics.
- Process Optimization & Automation: Identify opportunities to streamline and automate data quality processes, reducing manual intervention and enhancing efficiency. Participate in system upgrades, data migrations, and other initiatives, ensuring data integrity and smooth transitions.
- Compliance & Governance: Ensure adherence to data governance policies and industry regulations for real estate data. Assist in the development and implementation of data governance frameworks, standards, and best practices. Train team members and other stakeholders on data quality policies and protocols.

Qualifications
- 6-9 years of experience in the financial services industry
- Proficiency in data quality tools and software (e.g., SQL, Python, R) and familiarity with data visualization tools (e.g., Tableau, Power BI)
- Strong analytical, problem-solving, and attention-to-detail skills
- Ability to communicate complex data concepts to non-technical stakeholders effectively
- Collaborative team player with a proactive approach to improving data quality processes

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.

We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose – to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com
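The data quality KPIs this posting mentions are often just a handful of aggregates computed per vendor feed. A minimal pandas sketch, with a hypothetical file and column names, of the completeness/uniqueness/validity metrics a team might track:

```python
import pandas as pd

# Hypothetical property-level feed from a data vendor.
df = pd.read_csv("vendor_feed.csv")  # placeholder path

required = ["property_id", "valuation", "valuation_date", "sector"]

kpis = {
    "row_count": len(df),
    # Completeness: share of rows with all required fields populated.
    "completeness_pct": 100 * df[required].notna().all(axis=1).mean(),
    # Uniqueness: duplicate business keys indicate a feed problem.
    "duplicate_ids": int(df["property_id"].duplicated().sum()),
    # Validity: valuations must be positive.
    "invalid_valuations": int((df["valuation"] <= 0).sum()),
}
print(pd.Series(kpis))
```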
Posted 2 weeks ago
1.0 - 5.0 years
14 - 16 Lacs
Mumbai
Work from Office
Job Title: Sr. Software Engineer
Job Code: 10144
Country: IN
City: Mumbai
Skill Category: IT/Technology

Description:
Job Overview
AI/ML IT Analyst who will work with the Snowflake data platform to support AI/ML initiatives, data analytics, and IT operations while helping to develop and implement machine learning solutions.

Key Responsibilities
- Build and maintain data pipelines using Snowflake
- Assist in developing ML models using Snowflake's ML capabilities
- Write and optimize SQL queries for data extraction and transformation
- Support data warehousing operations in Snowflake
- Help integrate AI/ML solutions with Snowflake infrastructure
- Perform data quality checks and maintenance
- Create automated reporting solutions
- Assist in data migration projects

Technical Requirements

Core Skills
- Bachelor's degree in Computer Science, Data Science, IT, or a related field
- Strong SQL knowledge
- Python programming skills
- Understanding of data warehousing concepts
- Basic machine learning knowledge

Snowflake-Specific Skills
- Experience with Snowflake's core features: data loading and unloading, Time Travel and data cloning, warehouse management, access control and security
- Familiarity with Snowpark for Python
- Understanding of Snowflake's ML capabilities
- Experience with Snowflake integrations

Tools & Technologies
- Snowflake Data Cloud
- Python libraries (pandas, numpy, scikit-learn)
- BI tools (SAP BO, Power BI)
- ETL/ELT tools
- Version control (Git)
- Machine learning frameworks
- Data modeling tools

Preferred Qualifications
- Snowflake SnowPro Core certification
- Experience with Snowflake Streams and Tasks, external tables, stored procedures, and UDFs (User-Defined Functions)
- Knowledge of cloud platforms (AWS, Azure, GCP)
- Understanding of data governance principles

Soft Skills
- Strong analytical mindset
- Problem-solving abilities
- Team collaboration
- Clear communication
- Documentation skills
- Learning agility

We are committed to providing equal opportunities throughout employment, including in the recruitment, training, and development of employees. We prohibit discrimination in the workplace whether on grounds of gender, marital or domestic partnership status, pregnancy, carer's responsibilities, sexual orientation, gender identity, gender expression, race, color, national or ethnic origins, religious belief, disability, or age.

*Applying for this role does not amount to a job offer or create an obligation on Nomura to provide a job offer. The expression "Nomura" refers to Nomura Services India Private Limited together with its affiliates.
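Snowpark for Python, called out above, lets transformations execute inside Snowflake rather than on the client. A hedged sketch, assuming placeholder connection parameters and a hypothetical SALES table:

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder connection parameters -- use a secrets manager in practice.
session = Session.builder.configs({
    "account": "example_account",
    "user": "analyst",
    "password": "***",
    "warehouse": "ANALYTICS_WH",
    "database": "SALES_DB",
    "schema": "PUBLIC",
}).create()

# Snowpark DataFrames are lazy; this pipeline runs as SQL inside Snowflake.
monthly = (
    session.table("SALES")
           .filter(col("AMOUNT") > 0)
           .group_by("REGION", "MONTH")
           .agg(sum_("AMOUNT").alias("TOTAL_AMOUNT"))
)
monthly.write.mode("overwrite").save_as_table("SALES_MONTHLY")
session.close()
```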
Posted 2 weeks ago
2.0 - 7.0 years
9 - 10 Lacs
Bengaluru
Work from Office
Amazon Middle Mile is seeking a Risk Specialist to assist with identity verification and fraud mitigation for daily freight movements flowing into and out of our North American fulfillment centers and the associated fulfillment network. This is an exciting opportunity to join a new team in a huge growth area for Amazon. Amazon is looking for a Risk Specialist with a background in transportation, risk management, and data-driven problem resolution. In this role the Risk Specialist will continuously work with stakeholders to build trust-based relationships in order to investigate suspicious activity and address escalations, while creating long-term, systemic solutions for a world-class customer experience. The Risk Specialist will be responsible for a wide range of duties related to identity verification:
- Understand related-accounts identification
- Provide data analysis and conduct investigations
- Pull data from numerous databases (using Excel, SQL, and/or other tools) and perform ad hoc reporting and analysis as needed
- Take appropriate action to identify and help minimize the risk posed by fraud or abuse patterns and trends
- Identify and eliminate root causes of defects in order to drive efficiency in Amazon's transportation operations
- Understand the business impact of trends and make decisions that make sense based on available data
- Systematically escalate problems or variances in information and data to the relevant owners and teams
- Work within various time constraints to meet critical business needs, while measuring and identifying activities performed
- Create narratives outlining your weekly findings and the variances to goals, and present these findings in a review forum
- Participate in ad-hoc projects and assignments as necessary
- Partner with cross-functional teams across Amazon to collaborate on fraud risks and investigations
- Apply risk management best practices to mitigate issues, identify operational inefficiencies, and improve processes

A day in the life
- Define SOPs, document new methods of abuse/fraud, and partner with stakeholder teams to drive gaps to closure
- Constantly monitor metrics to identify deviations, and spot emerging frauds/MOs adopted by bad actors
- Drive program/process goals independently with minimal to no intervention from leadership
- Partner with multiple fraud/abuse teams to learn and implement industry-wide best practices and other identity verification mechanisms

Qualifications
- 2+ years of work experience in the logistics/transportation industry
- Experience working on identity verification/fraud detection processes
- Bachelor's degree from an accredited university or equivalent
- Knowledge of MS Excel-based tools, familiarity with Excel spreadsheets, and ability to navigate and interpret data through SQL
- Data management and data quality control experience, including pulling and analyzing large sets of data
- Knowledge of using data to drive root cause elimination and process improvement
- Experience spotting trends in data and fixing gaps
- Experience building Amazon QuickSight dashboards
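Much of the related-accounts investigation work described above reduces to self-joins over identity attributes. A hedged sketch of that kind of query run from Python, with entirely hypothetical table and column names (SQLite stands in for whichever warehouse client the team actually uses):

```python
import sqlite3  # stand-in for the real warehouse client

# Hypothetical schema: carrier_accounts(account_id, phone, bank_token, created_at)
RELATED_ACCOUNTS_SQL = """
SELECT a.account_id, b.account_id AS related_account_id, a.phone
FROM carrier_accounts a
JOIN carrier_accounts b
  ON a.phone = b.phone            -- shared contact detail
 AND a.account_id < b.account_id  -- avoid self-pairs and duplicates
WHERE a.created_at >= DATE('now', '-30 day')
"""

conn = sqlite3.connect("risk.db")  # placeholder database
for account, related, phone in conn.execute(RELATED_ACCOUNTS_SQL):
    print(f"flag: {account} shares phone {phone} with {related}")
conn.close()
```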
Posted 2 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
id="job_description_2_0"> Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and weve set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, youll make a valuable - and valued - contribution. Were a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines. About the team The mission of Rokus Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As Senior Data Engineer working on Device metrics, you will design data models & develop scalable data pipelines to capturing different business metrics across different Roku Devices. About the role Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetise large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators.With tens of million players sold across many countries, thousands of streaming channels and billions of hours watched over the platform, building scalable, highly available, fault-tolerant, big data platform is critical for our success.This role is based in Bangalore, India and requires hybrid working, with 3 days in the office. What youll be doing Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming systems) processing over 10s of terabytes of data ingested every day and petabyte-sized data warehouse Build quality data solutions and refine existing diverse datasets to simplified data models encouraging self-service Build data pipelines that optimise on data quality and are resilient to poor quality data sources Own the data mapping, business logic, transformations and data quality Low level systems debugging, performance measurement & optimization on large production clusters Participate in architecture discussions, influence product roadmap, and take ownership and responsibility over new projects Maintain and support existing platforms and evolve to newer technology stacks and architectures Were excited if you have Extensive SQL Skills Proficiency in at least one scripting language, Python is required Experience in big data technologies like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto, etc. Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures. 
Experience with AWS, GCP, Looker is a plus Collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables 5+ years professional experience as a data or software engineer BS in Computer Science; MS in Computer Science preferred #LI-AR3 Benefits Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. Its important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter. The Roku Culture We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isnt real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how weve grown, visit https: / / www.weareroku.com / factsheet . By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
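The streaming half of this role typically looks like a Spark Structured Streaming job reading device events from Kafka. A minimal hedged sketch, with hypothetical broker, topic, and schema names:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("device_metrics_stream").getOrCreate()

# Hypothetical event schema for device telemetry.
schema = (StructType()
          .add("device_id", StringType())
          .add("event", StringType())
          .add("ts", LongType()))

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
         .option("subscribe", "device-events")              # placeholder topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Windowed counts per event type; the watermark bounds late-arriving data.
counts = (events
          .withColumn("event_time", F.to_timestamp(F.col("ts") / 1000))
          .withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"), "event")
          .count())

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```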
Posted 2 weeks ago
3.0 - 8.0 years
9 - 10 Lacs
Gurugram
Work from Office
Location(s): Tower-11, (IT/ITES) SEZ of M/s Gurugram Infospace Ltd, Vill. Dundahera, Sector-21, Gurugram, Haryana, 122016, IN
Line of Business: Data Estate (DE)
Job Category: Engineering & Technology
Experience Level: Experienced Hire

At Moody's, we unite the brightest minds to turn today's risks into tomorrow's opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are, with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity.

Job Summary:
The Data Specialist will explore and transform an existing data remediation environment to ensure the smooth execution and automation of data validation, reporting, and analysis tasks. The ideal candidate will have strong technical skills in Excel, SQL, and Python, proficiency in using Microsoft Office tools for reporting, and familiarity with data visualization tools like Power BI or Tableau. Excellent communication and leadership skills are essential to foster a collaborative and productive team environment. Responsibilities may include leading small or large teams of full-time employees or contractors focused on remediating data at scale.

Key Responsibilities:
- Team Management: Work with strategic teams of 5-10 or more data analysts and specialists, as needed for specific initiatives. Provide guidance, mentorship, and support to team members to achieve individual and team goals.
- Data Validation and Analysis: Oversee data validation processes to ensure accuracy and completeness of data. Utilize Excel, SQL, and Python for data manipulation, analysis, and validation tasks. Implement best practices for data quality and integrity.
- Quality Assurance (QA): Establish and maintain QA processes to ensure the accuracy and reliability of data outputs. Conduct regular audits and reviews of data processes to identify and rectify errors. Develop and enforce data governance policies and procedures.
- Reporting and Presentation: Create and maintain comprehensive reports using Microsoft PowerPoint, Word, and other tools. Develop insightful data visualizations and dashboards using Power BI and Tableau. Present data findings and insights to stakeholders in a clear and concise manner.
- Collaboration and Communication: Collaborate with cross-functional teams to understand data needs and deliver solutions. Communicate effectively with team members, stakeholders, and clients. Facilitate team meetings and discussions to ensure alignment and progress on projects.
- Continuous Improvement: Identify opportunities for process improvements and implement changes to enhance efficiency. Stay updated with industry trends and advancements in data management and reporting tools. Foster a culture of continuous learning and development within the team.

Qualifications:
- Bachelor's degree in Economics, Statistics, Computer Science, Information Technology, or other related fields.
- 3+ years of relevant experience in a similar field.
- Strong proficiency in Excel, SQL, and Python for data analysis and validation.
- Advanced skills in Microsoft PowerPoint, Word, and other reporting tools.
- Familiarity with Power BI and Tableau for data visualization.
- Experience with Databricks.
- Excellent communication, leadership, and interpersonal skills.
- Strong problem-solving abilities and attention to detail.
- Ability to work independently and manage multiple priorities in a fast-paced environment.

For more information on the Securities Trading Program, please refer to the STP Quick Reference guide on ComplianceNet. Please note: STP categories are assigned by the hiring teams and are subject to change over the course of an employee's tenure with Moody's.
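The Excel/SQL/Python validation loop this posting describes often reduces to scripted checks whose results land in a spreadsheet for reviewers. A minimal pandas sketch with hypothetical file and column names:

```python
import pandas as pd

# Hypothetical remediation extract to validate.
df = pd.read_csv("remediation_extract.csv")

checks = pd.DataFrame([
    {"check": "missing entity_id", "failures": int(df["entity_id"].isna().sum())},
    {"check": "duplicate entity_id", "failures": int(df["entity_id"].duplicated().sum())},
    {"check": "negative exposure", "failures": int((df["exposure"] < 0).sum())},
])
checks["status"] = checks["failures"].apply(lambda n: "PASS" if n == 0 else "FAIL")

# Write a review-ready report; analysts pick this up in Excel.
checks.to_excel("validation_report.xlsx", index=False)
print(checks)
```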
Posted 2 weeks ago
2.0 - 3.0 years
10 - 11 Lacs
Mumbai
Work from Office
Relocation Assistance Offered Within Country
Job Number #167341 - Mumbai, Maharashtra, India

Who We Are
Colgate-Palmolive Company is a global consumer products company operating in over 200 countries, specialising in Oral Care, Personal Care, Home Care, Skin Care, and Pet Nutrition. Our products are trusted in more households than any other brand in the world, making us a household name! Join Colgate-Palmolive, a caring, innovative growth company reimagining a healthier future for people, their pets, and our planet. Guided by our core values - Caring, Inclusive, and Courageous - we foster a culture that inspires our people to achieve common goals. Together, let's build a brighter, healthier future for all.

Title: Analyst / Sr. Analyst, Business Analytics

Brief Introduction - Role Summary/Purpose:
The candidate will support Colgate business teams across the globe by providing data and analysis support. The role requires an understanding of internal and external data (syndicated market data, point of sales, etc.) and the ability to develop and support analytical, insights-based services and solutions. An understanding of the necessary data transformation and data visualization tools and technologies to drive these services and solutions is a plus. The person should be an analytical problem solver with the ability to work on large data sets, collaborative and customer focused (proactive and responsive to business needs), and effective in written and verbal communication.

Responsibilities:
- Build insights and competition intelligence solutions. With a constantly evolving business environment, you will find different ways to tackle business problems through analytics solutions and by leveraging technology (data transformation, data visualization, data insights) - use of Python, R, and Snowflake is a must
- Query data from Snowflake and BigQuery
- Work on different datasets and systems (marketing, customers, product masters, finance, digital, point of sales) and link the business rationale to develop and support analytics solutions
- Build and support standard business evaluation trackers and dashboards per agreed SLAs, and respond to ad hoc requests for reporting and first-level analysis
- Data quality and sanity are essential, so validating the data, trackers, and dashboards is critical
- Engage with business teams in Corporate, Divisions, Hubs (clusters of countries), and countries to understand business requirements and collaborate on solutions
- Work with internal analytics teams and information technology teams to learn and advance on developing sustainable and standard reporting trackers
- Partner with external data vendors (i.e., Nielsen, Kantar) to ensure timely data availability with appropriate data sanity; manage the contracts, set performance KPIs, and conduct quarterly/annual reviews of data providers

Required Qualifications:
- Graduate in Engineering/Sciences/Statistics; MBA
- Minimum 2-3 years of experience working in a data insights/analytics role
- Experience with third-party data, i.e., syndicated market data (Nielsen, Kantar, IRI), point of sales, etc.
- Should have worked in a client-facing/stakeholder management role to understand business needs and draw hypotheses
- Knowledge of data transformation tools: R, Python, Snowflake, DBT
- Expertise in one of the visualization tools such as Tableau, DOMO, Looker Studio, or Sigma
- Ability to read, analyze, and visualize data
- Strong verbal and written communication skills for business engagement

Preferred Qualifications:
- Experience with third-party data, i.e., syndicated market data (Nielsen, Kantar, IRI), point of sales, etc.
- Created/worked on automation and developing analytics solutions
- Working knowledge of the consumer packaged goods industry
- Understanding of Colgate's processes and tools supporting analytics (for internal candidates)
- Willingness and ability to experiment with new tools and techniques
- Good facilitation and project management skills

Our Commitment to Inclusion
Our journey begins with our people - developing strong talent with diverse backgrounds and perspectives to best serve our consumers around the world and fostering an inclusive environment where everyone feels a true sense of belonging. We are dedicated to ensuring that each individual can be their authentic self, is treated with respect, and is empowered by leadership to contribute meaningfully to our business.

Equal Opportunity Employer
Colgate is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, colour, religion, gender, gender identity, sexual orientation, national origin, ethnicity, age, disability, marital status, veteran status (United States positions), or any other characteristic protected by law. Please complete this request form should you require accommodation.
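Querying BigQuery from Python, as this role requires, is a few lines with the official client library. A hedged sketch assuming a hypothetical point-of-sale table and credentials supplied by the environment:

```python
from google.cloud import bigquery

client = bigquery.Client()  # picks up credentials from the environment

# Hypothetical point-of-sale table and columns.
SQL = """
SELECT retailer, SUM(units) AS units, SUM(revenue) AS revenue
FROM `example-project.pos.daily_sales`
WHERE sale_date BETWEEN @start AND @end
GROUP BY retailer
ORDER BY revenue DESC
"""

job = client.query(SQL, job_config=bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start", "DATE", "2024-06-01"),
        bigquery.ScalarQueryParameter("end", "DATE", "2024-06-30"),
    ]
))
df = job.to_dataframe()  # requires the BigQuery pandas extras
print(df.head())
```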
Posted 2 weeks ago
5.0 - 10.0 years
7 - 11 Lacs
Chennai
Work from Office
What you'll do
- Develop, maintain, and enhance new data sources and tables, contributing to data engineering efforts to ensure a comprehensive and efficient data architecture.
- Serve as the liaison between the Data Engineering team and the airport operations teams, developing new data sources and overseeing enhancements to the existing database; act as one of the main contact points for data requests, metadata, and statistical analysis.
- Migrate all existing Hive Metastore tables to Unity Catalog, addressing access issues and ensuring a smooth transition of jobs and tables.
- Collaborate with IT teams to validate package (gold-level data) table outputs during the production deployment of developed notebooks.
- Develop and implement data quality alerting systems and Tableau alerting mechanisms for dashboards, setting up notifications for various thresholds.
- Create and maintain standard reports and dashboards to provide insights into airport performance, helping guide stations to optimize operations and improve performance.

All you'll need for success

Preferred Qualifications - Education & Prior Job Experience
- Master's degree / UG
- 5-10 years of experience
- Databricks (Azure)
- Good communication skills
- Experience developing solutions on a big data platform utilizing tools such as Impala and Spark
- Advanced knowledge/experience with Azure Databricks, PySpark, and Teradata/Databricks SQL
- Advanced knowledge/experience in Python along with associated development environments (e.g., JupyterHub, PyCharm)
- Advanced knowledge/experience in building Tableau, QlikView, or Power BI dashboards
- Basic knowledge of HTML and JavaScript
- Immediate joiner

Skills, Licenses & Certifications
- Strong project management skills
- Proficient with Microsoft Office applications (MS Excel, Access, and PowerPoint); advanced knowledge of Microsoft Excel
- Advanced aptitude in problem-solving, including the ability to logically structure an appropriate analytical framework
- Proficient in SharePoint and PowerApps, with the ability to use the Graph API

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication with only authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/ .
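The Hive Metastore to Unity Catalog migration mentioned above is often scripted in a Databricks notebook. A hedged sketch using Databricks' DEEP CLONE and SYNC patterns, with hypothetical catalog and schema names (verify the commands against current Databricks documentation before relying on this):

```python
# Runs inside a Databricks notebook where `spark` is predefined.
# Hypothetical names: source schema hive_metastore.ops, target main.ops.

tables = [r.tableName
          for r in spark.sql("SHOW TABLES IN hive_metastore.ops").collect()]

for t in tables:
    src, dst = f"hive_metastore.ops.{t}", f"main.ops.{t}"
    # Managed Delta tables: copy data and schema with a deep clone.
    spark.sql(f"CREATE TABLE IF NOT EXISTS {dst} DEEP CLONE {src}")
    print(f"migrated {src} -> {dst}")

# External tables can instead be upgraded in place, e.g.:
# spark.sql("SYNC TABLE main.ops.flights FROM hive_metastore.ops.flights")
```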
Posted 2 weeks ago
0.0 - 3.0 years
0 Lacs
Pune
Work from Office
The Data Attestation function has a mandate to identify and resolve issues and attest to the quality of a prioritized set of key data elements. This team will be a part of Finance Data Services. The objectives of the data attestation process are to:
- Provide transparency to Group Finance on material data quality issues through a quantitative assessment
- Prevent financial misstatement by highlighting the impact of data quality issues on the financials
- Comply with recent regulations that require firms to identify data attributes, understand their authoritative source, and have a proactive mechanism for identifying DQ issues

Your profile:
- Pursuing an MBA in Finance
- Understanding of SL-GL reconciliation, data analysis, and variance analysis
- Good product knowledge of equities, derivatives, and lending products
- Analyzing an organization's large finance data sets to provide actionable insights
- Effectively communicating your insights and plans to cross-functional team members and management
- Striving for operational excellence, always on the lookout for ideas to increase performance
- Proactive, motivated, flexible, and team-oriented, with an ability to work in a high-paced environment with demanding deadlines
- Desire to gain knowledge of front-to-back processes associated with the control environment across Operations and Finance
- Understanding of equity and fixed income products
- Experience in dealing with IT and Ops teams, ability to drive changes, and experience handling UAT testing
- Knowledge of Finance data is desirable
- An ability to analyze large volumes of data and navigate complex data flows
- Excellent verbal and written communication skills
- Proficiency in MS Excel and, preferably, in Alteryx, Tableau, MS PowerPoint, Business Objects, and SQL

UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors.
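SL-GL reconciliation, mentioned above, is essentially comparing subledger aggregates against general ledger balances and flagging breaks. A minimal pandas sketch with hypothetical files, columns, and a placeholder materiality threshold:

```python
import pandas as pd

# Hypothetical extracts: subledger line items and GL balances by account.
sl = pd.read_csv("subledger.csv")       # columns: account, amount
gl = pd.read_csv("general_ledger.csv")  # columns: account, balance

# Aggregate the subledger to account level and compare against the GL.
recon = (sl.groupby("account", as_index=False)["amount"].sum()
           .merge(gl, on="account", how="outer")
           .fillna(0.0))
recon["variance"] = recon["amount"] - recon["balance"]

# Flag breaks above a materiality threshold (placeholder value).
THRESHOLD = 1_000.0
breaks = recon[recon["variance"].abs() > THRESHOLD]
print(breaks.sort_values("variance", key=lambda s: s.abs(), ascending=False))
```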
Posted 2 weeks ago
4.0 - 9.0 years
25 - 30 Lacs
Bengaluru
Work from Office
The Lending team at Grab is dedicated to building safe and secure loan products catering to all user segments across SEA. Our mission is to promote financial inclusion and support underbanked partners across the region. Data plays a pivotal role in our lending operations, guiding decisions across credit assessment, collections, reporting, and beyond. You will report to the Lead Data Engineer. This role is based in Bangalore.
Get to Know the Role: As a Data Engineer in the Lending Data Engineering team, you will work with data modellers, product analytics, product managers, software engineers, and business stakeholders across SEA to understand the business and data requirements. You will build and manage the data asset, including acquisition, storage, processing, and use channels, using some of the most scalable and resilient open-source big data technologies like Flink, Airflow, Spark, Kafka, Trino, and more on cloud infrastructure. You are encouraged to think out of the box and have fun exploring the latest patterns and designs.
The Critical Tasks You Will Perform: Develop scalable, reliable ETL pipelines to ingest data from diverse sources. Build expertise in real-time data availability to support accurate real-time metric definitions. Implement data quality checks and governance best practices for data cleansing, assurance, and ETL operations. Use existing data platform tools to set up and manage pipelines. Improve data infrastructure performance to ensure reliable insights for decision-making. Design next-gen data lifecycle management tools/frameworks for batch, real-time, API-based, and serverless use cases. Build solutions using AWS services like Glue, Redshift, Athena, Lambda, S3, Step Functions, EMR, and Kinesis. Use tools like Amazon MSK/Kinesis for real-time data processing and metric tracking (a sketch of the event-driven pattern follows this posting).
Essential Skills You'll Need: 4+ years of experience building scalable, secure, distributed data pipelines. Proficiency in Python, Scala, or Java for data engineering solutions. Knowledge of big data technologies like Flink, Spark, Trino, Airflow, Kafka, and AWS services (EMR, Glue, Redshift, Kinesis, and Athena). Solid experience with SQL, data modelling, and schema design. Hands-on experience with AWS storage and compute services (S3, DynamoDB, Athena, and Redshift Spectrum). Experience working with NoSQL, columnar, and relational databases. Curiosity and eagerness to explore new data technologies and solutions. Familiarity with in-house and AWS-native tools for efficient pipeline development. Ability to design event-driven architectures using SNS, SQS, Lambda, or similar serverless technologies. Experience with data structures, algorithms, or ML concepts.
About Grab and Our Workplace: Grab is Southeast Asia's leading superapp. From getting your favourite meals delivered to helping you manage your finances and getting around town hassle-free, we've got your back with everything. At Grab, purpose gives us joy and habits build excellence, while harnessing the power of technology and AI to deliver the mission of driving Southeast Asia forward by economically empowering everyone, with heart, hunger, honour, and humility.
Life at Grab: We care about your well-being at Grab; here are some of the global benefits we offer: We have your back with Term Life Insurance and comprehensive Medical Insurance. With GrabFlex, create a benefits package that suits your needs and aspirations. Celebrate moments that matter in life with loved ones through Parental and Birthday leave, and give back to your communities through Love-all-Serve-all (LASA) volunteering leave. We have a confidential Grabber Assistance Programme to guide and uplift you and your loved ones through life's challenges.
What We Stand For at Grab: We are committed to building an inclusive and equitable workplace that enables diverse Grabbers to grow and perform at their best. As an equal opportunity employer, we consider all candidates fairly and equally regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, family commitments, physical and mental impairments or disabilities, and other attributes that make them unique.
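A rough sketch of the event-driven serverless pattern named in the posting (an SQS queue triggering a Lambda function), assuming a standard SQS-to-Lambda trigger; the payload shape and the S3 bucket name are hypothetical:

```python
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """AWS Lambda entry point triggered by an SQS queue.

    Each SQS record carries a JSON payload describing a loan event;
    the payload fields and the target bucket are hypothetical.
    """
    for record in event["Records"]:
        payload = json.loads(record["body"])
        key = f"loan-events/{payload['event_id']}.json"
        # Land the raw event in S3 for downstream batch pipelines.
        s3.put_object(
            Bucket="lending-raw-events",  # hypothetical bucket
            Key=key,
            Body=json.dumps(payload).encode("utf-8"),
        )
    return {"processed": len(event["Records"])}
```

In this shape, the queue decouples producers from the pipeline, and failed batches fall back to SQS retry/dead-letter handling rather than custom retry logic.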
Posted 2 weeks ago
2.0 - 7.0 years
12 - 17 Lacs
Bengaluru
Work from Office
2-7 years of experience in Python. Good understanding of big data ecosystems and frameworks such as Hadoop, Spark, etc. Experience developing data processing tasks using PySpark (an illustrative task follows this posting). Expertise in at least one popular cloud provider, preferably AWS, is a plus. Good knowledge of any RDBMS/NoSQL database with strong SQL writing skills. Experience with data warehouse tools like Snowflake is a plus. Experience with any one ETL tool is a plus. Strong analytical and problem-solving capability. Excellent verbal and written communication skills. Client-facing skills: solid experience working with clients directly and building trusted relationships with stakeholders. Ability to collaborate effectively across global teams. Strong understanding of data structures, algorithms, object-oriented design, and design patterns. Experience in the use of multi-dimensional data, data curation processes, and the measurement/improvement of data quality. General knowledge of business processes, data flows, and quantitative models that generate or consume data. Independent thinker, willing to engage, challenge, and learn new technologies.
Qualification: Bachelor's or Master's degree in Computer Science or a related field. Certification from professional bodies is a plus.
SELECTION PROCESS: Candidates should expect 3-4 rounds of personal or telephonic interviews to assess fit and communication skills.
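An illustrative PySpark processing task of the kind this role involves; the input path, schema, and aggregation are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

# Hypothetical input: raw order events as CSV with a header row.
orders = (spark.read.option("header", True)
          .option("inferSchema", True)
          .csv("s3://example-bucket/raw/orders/"))

# Basic cleansing plus a daily revenue aggregate per country.
daily = (orders
         .filter(F.col("status") == "COMPLETED")
         .withColumn("order_date", F.to_date("created_at"))
         .groupBy("order_date", "country")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("order_id").alias("orders")))

(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/daily_revenue/"))
```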
Posted 2 weeks ago
5.0 - 10.0 years
25 - 30 Lacs
Bengaluru
Work from Office
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.
How will you make an impact in this role? You will be responsible for contacting clients with overdue accounts to secure settlement, and for preventive work to avoid future overdues on high-exposure accounts. As part of the Finance Data Governance Organization (FDG) within Corporate Controllership, this role is responsible for overseeing the end-to-end process of financial regulatory data attestation, ensuring the accuracy, completeness, and traceability of data submitted to regulatory bodies. The ideal candidate will have deep knowledge of financial regulations, a strong command of data governance principles, and proven experience implementing attestation processes in complex, regulated financial environments.
Responsibilities: Lead the design, implementation, and ongoing execution of the regulatory data attestation framework across Finance. Establish standards, controls, and documentation protocols to ensure consistent and auditable sign-off on regulatory data submissions (e.g., FR Y-9C, CCAR, Basel, BCBS 239). Partner closely with various teams to define roles and responsibilities for data ownership, validation, and attestation. Develop and manage a formalized attestation process that includes data lineage, quality checks, control evidence, and sign-off workflows. Ensure alignment with internal policies and external regulatory expectations, and proactively highlight data quality issues that impact regulatory reporting. Drive continuous improvement through root cause analysis, remediation planning, and control enhancements. Lead a high-performing team of data governance professionals, data analysts, and regulatory specialists. Provide executive-level reporting on attestation status, data risks, and control effectiveness to senior leadership and regulators.
Qualifications: 10+ years of experience in regulatory reporting, finance data governance, or compliance roles in a large financial institution. Deep understanding of regulatory reporting processes, requirements, and controls across U.S. and global financial regulations. Proven experience establishing or managing attestation or data certification frameworks. Strong knowledge of data governance, control design, data quality, and lineage practices. Experience with data governance and workflow tools (e.g., Collibra, Informatica, Alation, ServiceNow). Excellent leadership, stakeholder engagement, and communication skills. Bachelor's degree in Finance, Accounting, Information Management, or a related field; advanced degree or certifications (e.g., CA, CPA, CISA, CDMP) preferred.
Posted 2 weeks ago
0.0 - 4.0 years
6 - 7 Lacs
Pune
Work from Office
About KPI Partners: KPI Partners is a leading provider of data analytics solutions, dedicated to helping organizations transform data into actionable insights. Our innovative approach combines advanced technology with expert consulting, allowing businesses to leverage their data for improved performance and decision-making.
Job Description: We are seeking a skilled and motivated Data Engineer with experience in Databricks to join our dynamic team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and data processing solutions that support our analytics initiatives. You will collaborate closely with data scientists, analysts, and other engineers to ensure the consistent flow of high-quality data across our platforms.
Key skills: Python, PySpark, Databricks, ETL, Cloud (AWS, Azure, or GCP)
Key Responsibilities:
- Develop, construct, test, and maintain data architectures (e.g., large-scale data processing systems) in Databricks.
- Design and implement ETL (Extract, Transform, Load) processes to move and transform data from various sources to target systems.
- Collaborate with data scientists and analysts to understand data requirements and design appropriate data models and structures.
- Optimize data storage and retrieval for performance and efficiency.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.
- Engage in data quality assessments, validation, and troubleshooting of data issues (a data-quality sketch follows this posting).
- Stay current with emerging technologies and best practices in data engineering and analytics.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role, with hands-on experience in Databricks.
- Strong proficiency in SQL and programming languages such as Python or Scala.
- Experience with cloud platforms (AWS, Azure, or GCP) and related technologies.
- Familiarity with data warehousing concepts and data modeling techniques.
- Knowledge of data integration tools and ETL frameworks.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Why Join KPI Partners?
- Be part of a forward-thinking team that values innovation and collaboration.
- Opportunity to work on exciting projects across diverse industries.
- Continuous learning and professional development opportunities.
- Competitive salary and benefits package.
- Flexible work environment with hybrid work options.
If you are passionate about data engineering and excited about using Databricks to drive impactful insights, we would love to hear from you! KPI Partners is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
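One possible shape for the data quality assessments mentioned above, written as simple PySpark checks; the specific rules and column names are illustrative, not a prescribed standard:

```python
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def run_quality_checks(df: DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures.

    The rules below (non-null keys, positive amounts, non-empty
    dataset) are hypothetical examples of pipeline validations.
    """
    failures = []
    total = df.count()
    if total == 0:
        failures.append("dataset is empty")
        return failures
    null_keys = df.filter(F.col("customer_id").isNull()).count()
    if null_keys:
        failures.append(f"{null_keys}/{total} rows missing customer_id")
    bad_amounts = df.filter(F.col("amount") <= 0).count()
    if bad_amounts:
        failures.append(f"{bad_amounts}/{total} rows with non-positive amount")
    return failures
```

A pipeline would typically call this after each transformation stage and fail fast (or alert) when the returned list is non-empty.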
Posted 2 weeks ago
3.0 - 5.0 years
16 - 18 Lacs
Hyderabad
Work from Office
Sr. Data Scientist - Soulpage IT Solutions (May 28, 2024)
Job Title: Data Scientist | Work Exp: 3+ yrs | Approximate CTC: Industry Standards | Location: Hyderabad | Functional Area: IT Software - System Programming | Qualification: B.Tech/B.E/M.Tech/B.Sc (Math or Statistics)/M.Sc (Statistics)
Job Description: We are looking for an experienced Data Scientist with a strong grasp of Machine Learning and Deep Learning model building and deployment, and strong problem-solving skills.
Skills & Responsibilities: Excellent skills in deep learning-based algorithms for image and text data, and ML algorithms for structured datasets. Strong command of computer vision libraries like OpenCV and Pillow, and NLP libraries like Hugging Face and spaCy. Ability to conceptualize and implement different deep neural network architectures in Python using PyTorch and TensorFlow (a small illustrative model follows this posting). Strong mathematical and statistical understanding of the underlying algorithms. Ideate, conceptualize, and formulate data science use cases of significant impact for the business. Develop and evaluate various machine learning models before zeroing in on the best one, with the ability to run machine learning models on large amounts of data. Drive discussions with the business to ensure complete understanding of the data science use case; gain and demonstrate data and domain knowledge. Evaluate the data quality and metadata of key data elements before any modelling effort to ensure minimal surprises later in the project. Design holistic data science solutions covering descriptive, predictive, and prescriptive analytics. Build reusable ML code for faster turnaround in business problem solving. Explain the machine learning model implementation to business stakeholders in a way they can understand and appreciate. Build storytelling dashboards that make all insights and model output available to end users in a form that supports decision making. Manage relationships with business stakeholders, acting as an embedded data scientist constantly thinking about data science solutions to make the business better.
Key Skills Required: Python, PyTorch, TensorFlow, OpenCV, scikit-learn, Pandas and NumPy, Flask, Django, AWS, deep learning (including CNNs, Transformers, and RNNs), and statistical modelling.
About Company: Soulpage IT Solutions Pvt. Ltd. Soulpage is a data science technology company based in Hyderabad, India. A simple organization with a strong commitment to customer service, we are committed to helping enterprises explore unventured technical avenues to tap unprecedented value creation. Our effort is to provide innovative software solutions using the latest technological advancements in automation, data science, AI, and application development, so our clients stay relevant and adapt to changing times.
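A small PyTorch example of the kind of deep neural network architecture the posting refers to; the layer sizes and 3x64x64 input shape are hypothetical:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Tiny convolutional classifier for 3x64x64 images (illustrative)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> 16 x 32 x 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # -> 32 x 16 x 16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SmallCNN()
logits = model(torch.randn(4, 3, 64, 64))  # batch of 4 dummy images
print(logits.shape)  # torch.Size([4, 10])
```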
Posted 2 weeks ago
3.0 - 8.0 years
25 - 30 Lacs
Chennai
Work from Office
Amazon.com is looking for a talented and enthusiastic SDE to join the Digital Acceleration team. You will be involved in enabling AI, data, metrics, and analytics at scale for several Amazon digital teams. We specialize in building AI, analytics, and data engineering products, specifically around metadata management, privacy compliance, data quality, and customer lifecycle management analytics. We are seeking an SDE with strong knowledge of distributed systems to build enterprise-scale, mission-critical, multi-tiered applications using tools that are well out in front on the technology wave. You must enjoy working on complex software systems in a customer-centric environment and be passionate not only about building good software but also about ensuring that the same software achieves its goals in operational reality. 3+ years of non-internship professional software development experience. 2+ years of non-internship design or architecture (design patterns, reliability, and scaling) experience with new and existing systems. Experience programming with at least one software programming language. 3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations. Bachelor's degree in Computer Science or equivalent.
Posted 2 weeks ago
5.0 - 10.0 years
18 - 20 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
You are a strategic thinker passionate about driving solutions in data management. You have found the right team. As a Data Management Associate within the Client Account Services team at JPMorgan Chase, you ensure that client data is accessible, comprehensible, and of high quality. You will cross-reference client accounts with party identifiers to support data enrichment, map client accounts to JPM legal entities, hierarchies, and parties of interest, and engage in data analysis, creation, migration, and uplift for strategic and regulatory initiatives. You may also deploy tools and graphic concepts to facilitate data analysis and influence decision-making.
The Client Account Services team is responsible for the timely and accurate setup and maintenance of client/counterparty static data, facilitating trading and settlement functions by managing client/counterparty account reference data and standard settlement instructions on core processing platforms. This includes handling reference data such as cash and stock settlement instructions, confirmations, and account-level details like names, addresses, country of citizenship, and trading restriction flags. Additionally, you will support the setup and maintenance of other static data, including portfolio references, books, salesperson, trader, depots, commission updates, accounting tables, and monitoring of exception queues.
Job responsibilities: Execute documented processes and procedures with minimal supervision. Lead individuals through the data management lifecycle, utilizing data-related tools. Validate standard settlement instructions with end-to-end SWIFT knowledge. Analyze and document metadata using workflow tools for processing output. Collaborate with stakeholders to assess data quality impacts and ensure documentation accuracy. Provide status updates to measure performance using workflow tools. Direct activities, monitor details, and set priorities. Escalate process issues and risks appropriately. Review root cause analysis and identify best practices. Collaborate with team members to achieve common goals. Demonstrate dedication, a strong work ethic, and a willingness to learn and take feedback.
Required qualifications, capabilities and skills: Minimum 5+ years of related experience. Bachelor's degree required. Good verbal and written communication skills. Good problem-solving and analytical skills. Attention to detail and a high accuracy rate in processing critical requests. Ability to create workflows and BRDs for automation programs. Understanding of settlement instructions and setting up/enriching trade confirmations. Adheres to CAS Gold Standards.
Posted 2 weeks ago
3.0 - 8.0 years
2 - 6 Lacs
Hyderabad
Work from Office
Azure Data Engineer - Soulpage IT Solutions (March 6, 2025)
Position: Azure Data Engineer | Skill set: Azure Databricks and Data Lake implementation | Experience: 3+ years | Notice Period: Immediate to 15 days | Location: WFO, Hyderabad | Job Type: Full-Time | Positions: 2
Job Summary: We are looking for a highly skilled Azure Data Engineer with expertise in Azure Databricks and Data Lake implementation to design, develop, and optimize our data pipelines. The engineer will be responsible for integrating data from multiple sources, ensuring data is cleaned, standardized, and normalized for ML model building. This role involves working closely with stakeholders to understand data requirements and ensure seamless data flow across different platforms.
Key Responsibilities - Data Lake & Pipeline Development: Design, develop, and implement scalable Azure Data Lake solutions. Build robust ETL/ELT pipelines using Azure Databricks, Data Factory, and Synapse Analytics. Optimize data ingestion and processing from multiple structured and unstructured sources. Implement data cleaning, standardization, and normalization processes to ensure high data quality (a sketch of this step follows the posting). Implement best practices for data governance, security, and compliance. Optimize data storage and retrieval for performance and cost-efficiency. Monitor and troubleshoot data pipelines, ensuring minimal downtime. Work closely with data scientists, analysts, and business stakeholders to define data needs. Maintain thorough documentation for data pipelines, transformations, and integrations. Assist in developing ML-ready datasets by ensuring consistency across integrated data sources.
Required Skills & Qualifications: 3+ years of experience in data engineering, with a focus on Azure cloud technologies. Expertise in Azure Databricks, Data Factory, and Data Lake. Strong proficiency in Python, SQL, and PySpark for data processing and transformations. Understanding of ML data preparation workflows, including feature engineering and data normalization. Knowledge of data security and governance principles. Experience optimizing ETL pipelines for scalability and performance. Strong analytical and problem-solving skills. Excellent written and verbal communication skills.
Preferred Qualifications: Azure certifications - Azure Data Engineer Associate, Azure Solutions Architect.
Why Join Us? Work on cutting-edge Azure cloud and data technologies. Collaborate with a dynamic and innovative team solving complex data challenges. Competitive compensation and career growth opportunities.
Application Process: Interested candidates can send their resumes to [email protected] with the subject line: Application for Azure Data Engineer. We look forward to welcoming passionate individuals to our team!
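A sketch of the standardization/normalization step for ML-ready data, using `pyspark.ml` feature transformers of the kind available on Azure Databricks; the Delta paths and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, StandardScaler

spark = SparkSession.builder.appName("ml-prep").getOrCreate()

# Hypothetical cleaned table in the lake, ready for feature prep.
df = spark.read.format("delta").load("/mnt/datalake/silver/customers")

numeric_cols = ["age", "income", "tenure_months"]  # hypothetical features
assembler = VectorAssembler(inputCols=numeric_cols, outputCol="features")
scaler = StandardScaler(inputCol="features", outputCol="features_scaled",
                        withMean=True, withStd=True)

# Drop rows missing any feature, assemble a vector, then z-score scale.
assembled = assembler.transform(df.na.drop(subset=numeric_cols))
ml_ready = scaler.fit(assembled).transform(assembled)

(ml_ready.write.format("delta").mode("overwrite")
         .save("/mnt/datalake/gold/customers_ml"))
```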
Posted 2 weeks ago