6.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Data Science Decision Analyst Assistant Vice President at Barclays, you will embark on a transformative journey to shape the future by providing guidance on coding efficiency and working with Big Data. Your role will involve safeguarding the business and customers from financial risks while also offering competitive benefits and opportunities for career advancement in the banking industry.

**Key Responsibilities:**
- Ability to work dedicated shifts between 12 Noon IST and 12 AM IST.
- Minimum 6 years of experience in the Data Science domain, focusing on Analytics and Reporting.
- Mandatory proficiency in writing, reviewing, and correcting Python and SQL code.
- Working with Big Data and articulating Value Proposition and Enhancement opportunities.
- Exposure to visualization tools such as Power BI and Tableau, as well as AWS services.
- Knowledge of Machine Learning Operations and PySpark.

**Qualifications Required:**
- Strong knowledge of Python, SQL, Big Data, Power BI, and Tableau, along with strategic thinking.
- Proficiency in job-specific technical skills.

In this role, based out of Noida, your purpose will be to implement data quality processes and procedures to ensure reliable and trustworthy data, and to extract actionable insights that improve operations and optimize resources.

**Additional Details:**
As an Assistant Vice President, you are expected to advise and influence decision-making, contribute to policy development, and ensure operational effectiveness. Collaborating closely with other functions and business divisions, you will lead a team in performing complex tasks and setting objectives. Your leadership behaviours should align with the LEAD principles: Listen and be authentic, Energise and inspire, Align across the enterprise, and Develop others. You will be responsible for identifying ways to mitigate risks, developing new policies and procedures, and managing risk and strengthening controls related to your work. Working with other areas of the business, you will engage in complex data analysis and communicate complex information effectively to influence stakeholders and achieve desired outcomes. All colleagues at Barclays are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, along with the Barclays Mindset to Empower, Challenge, and Drive in their behaviour.
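By way of illustration only (not part of the posting), here is a minimal PySpark sketch of the kind of data-quality profiling such a role describes. The table name `curated.customer_transactions` and the key column `transaction_id` are assumptions for the example, not Barclays specifics.

```python
# Minimal data-quality profile in PySpark; table, columns, and key are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-profile").getOrCreate()

df = spark.table("curated.customer_transactions")  # hypothetical curated table

total = df.count()

# Null count per column, computed in a single pass
null_profile = df.select(
    *[F.sum(F.col(c).isNull().cast("int")).alias(f"{c}_nulls") for c in df.columns]
)

# Duplicate rows on the (assumed) business key
duplicates = total - df.dropDuplicates(["transaction_id"]).count()

null_profile.show()
print(f"rows={total}, duplicate transaction_ids={duplicates}")
```

A production version would typically write such metrics to a monitoring table or dashboard rather than printing them.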
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Join us as an MI and Reporting Analyst at Barclays, where you will spearhead the evolution of the digital landscape, driving innovation and excellence. You will utilize cutting-edge technology to revolutionize digital offerings, ensuring unparalleled customer experiences.

Your responsibilities will include ongoing support and enhancement of warehouses, reporting, information management, dashboard creation and maintenance, automation, and data set curation and transformation. Given the high exposure to data, reporting, warehouses, and dashboards, your role will be governed by Service Level Agreements, and you will be responsible for adhering to data standards and timelines. You will deliver insights to enable earlier, faster, and smarter decisions, requiring stakeholder management, partnership, relationship building, and the ability to present a value proposition. Additionally, you will act as a data/technical expert for warehouses and tools, providing support to colleagues and guidance on data sources and uses. Your extensive knowledge of Python, PySpark, and/or SQL will enable you to guide team members on coding efficiency and effectiveness. You will also serve as a Subject Matter Expert, supporting colleagues by sharing knowledge and best practices in coding and data.

To be successful in this role, you should have a graduate degree in any discipline, the ability to work dedicated shifts from 12 Noon IST to 12 AM IST, experience in the Data Science domain (Analytics and Reporting), proficiency in Python and SQL, experience working with Big Data, the ability to articulate a value proposition and enhancement opportunities, and exposure to visualization tools such as Power BI and Tableau. Desirable skillsets include exposure to AWS services, Machine Learning Operations, PySpark, and SAS. This role is based in Noida.

The purpose of this role is to implement data quality processes and procedures, ensuring reliable and trustworthy data. You will extract actionable insights to help the organization improve operations and optimize resources. Your accountabilities will involve investigating and analyzing data issues, executing data cleansing and transformation tasks, designing and building data pipelines, applying advanced analytical techniques such as machine learning and AI, and documenting data quality findings and recommendations for improvement. A brief illustrative sketch of this cleansing-and-curation work follows below.

As an Analyst, you are expected to perform activities in a timely manner and to a high standard, driving continuous improvement. You must have in-depth technical knowledge in your area of expertise, lead and supervise a team, guide professional development, and coordinate resources. If you have leadership responsibilities, you are expected to demonstrate leadership behaviours; as an individual contributor, you will develop technical expertise and take on an advisory role when appropriate. You will impact the work of related teams, partner with other functions, take responsibility for operational activities, escalate breaches of policies, advise on decision-making, manage risk, and strengthen controls. Understanding how your sub-function integrates with the function and the organization's products, services, and processes is crucial. You will resolve problems, guide team members, act as a contact point for stakeholders, and build a network of contacts. All colleagues are expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence, and Stewardship, as well as the Barclays Mindset to Empower, Challenge, and Drive.
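As a hedged sketch of the cleansing-and-curation step mentioned in the accountabilities above, the example assumes a hypothetical raw Parquet source, a `request_id` key, and an output path that a Power BI or Tableau extract could read; none of these details come from the posting.

```python
# Illustrative cleansing + curation step feeding a reporting dashboard.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mi-reporting-curation").getOrCreate()

raw = spark.read.parquet("/data/raw/service_requests")  # hypothetical source

curated = (
    raw.dropDuplicates(["request_id"])                       # remove duplicates on the key
       .withColumn("created_date", F.to_date("created_ts"))  # normalise timestamp to a date
       .fillna({"channel": "UNKNOWN"})                       # standardise missing categories
)

# Daily volumes per channel, the shape a dashboard dataset would typically consume
daily_summary = curated.groupBy("created_date", "channel").agg(
    F.count("*").alias("requests"),
    F.countDistinct("customer_id").alias("customers"),
)

daily_summary.write.mode("overwrite").parquet("/data/curated/daily_service_summary")
```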
Posted 1 week ago
6.0 - 8.0 years
8 - 18 Lacs
Hyderabad
Work from Office
Job title: Data Engineer (Python + PySpark + SQL)

Candidate Specification: Minimum 6 to 8 years of experience as a Data Engineer.

Job Description:
- Data Engineer with strong expertise in Python, PySpark, and SQL; expert in Kafka.
- Design, develop, and maintain robust data pipelines using PySpark and Python.
- Strong understanding of SQL and relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Proficiency in Python for data engineering tasks and scripting.
- Hands-on experience with PySpark in distributed data processing environments.
- Strong command of SQL for data manipulation and querying large datasets.
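A minimal sketch of the Kafka-to-PySpark pipeline pattern this role calls for; the broker address, topic name, message schema, and output paths are placeholders, and the Spark Kafka connector package must be available on the cluster.

```python
# Structured-streaming ingest from Kafka into a bronze Parquet layer (illustrative).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "orders")                     # placeholder topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("parquet")
          .option("path", "/data/bronze/orders")
          .option("checkpointLocation", "/chk/orders")
          .start()
)
# query.awaitTermination()  # block the driver in a real streaming job
```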
Posted 1 week ago
5.0 - 8.0 years
10 Lacs
Hyderabad
Work from Office
We are seeking an experienced Data Engineer with a strong background in building and managing data solutions. The ideal candidate will have expertise in Power BI, Snowflake, ETL processes, Data Lakes, and Data Warehouses, along with solid database fundamentals. Knowledge of DOMO will be an added advantage.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL workflows to support business intelligence and analytics requirements.
- Manage and optimize Snowflake data warehouse environments, ensuring performance, reliability, and cost efficiency.
- Implement and maintain data lakes and structured/unstructured data storage solutions.
- Collaborate with business teams to develop and publish Power BI dashboards and reports for actionable insights.
- Ensure data integrity, quality, and governance across various data sources.
- Analyze and troubleshoot issues related to data pipelines, performance, and data consistency.
- Support ad-hoc data requests and assist in building data models for reporting and analytics.
- Work closely with cross-functional teams (Data Scientists, Analysts, and Business Teams) to ensure data accessibility and usability.

Required Skills & Qualifications:
- 5+ years of hands-on experience as a Data Engineer or in a similar role.
- Strong expertise in ETL development, data integration, and data pipeline design.
- Proficiency in Snowflake, SQL, and database performance tuning.
- In-depth knowledge of data warehouse concepts, data modeling, and data lake architectures.
- Proficiency in building and optimizing Power BI dashboards and datasets.
- Strong understanding of database fundamentals (indexes, normalization, transactions, etc.).
- Knowledge of DOMO or willingness to learn is a plus.
- Proficiency in at least one scripting or programming language (e.g., Python, SQL, or similar) for data transformation and automation.
- Familiarity with cloud platforms (AWS, Azure, or GCP) is preferred.

Preferred Qualifications:
- Experience with CI/CD pipelines for data workflows.
- Understanding of data governance, security, and compliance standards.
- Experience with version control (Git) and Agile methodologies.

Contact Person: Christopher
Email: christopher@gojobs.biz
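As a rough illustration of the Snowflake ETL work described above, here is a sketch using the Snowflake Python connector; the account, credentials, warehouse, stage, and table names are placeholders, not details from the posting.

```python
# Illustrative load-and-merge step into Snowflake via the Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load staged CSV files into a raw table, then merge into the warehouse table
    cur.execute(
        "COPY INTO STAGING.RAW_SALES FROM @sales_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute("""
        MERGE INTO ANALYTICS.FACT_SALES t
        USING STAGING.RAW_SALES s ON t.sale_id = s.sale_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (sale_id, amount) VALUES (s.sale_id, s.amount)
    """)
finally:
    conn.close()
```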
Posted 2 weeks ago
8.0 - 13.0 years
20 - 22 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Key Responsibilities:
- Design, build, and maintain scalable data pipelines on Azure cloud.
- Develop and optimize ETL workflows using Azure Data Factory (ADF).
- Implement advanced data processing and transformation logic using Databricks and PySpark.
- Write efficient, scalable, and reusable Python scripts for data ingestion and processing.
- Manage and optimize complex SQL queries and stored procedures.
- Integrate data from multiple sources (e.g., APIs, flat files, databases) into centralized repositories.
- Ensure data quality, security, governance, and compliance best practices.
- Collaborate with data scientists, analysts, and other stakeholders to support data-driven initiatives.
- Monitor data workflows and troubleshoot performance or data integrity issues.

Strong expertise in the Microsoft Azure ecosystem:
- Azure Data Factory (ADF)
- Azure Storage (Blob/Data Lake)
- Azure Functions
- Azure Synapse Analytics (nice to have)

Location: Chennai, Bengaluru, Hyderabad, Pune
Contact Person: Christopher
Email: christopher@gojobs.biz
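A minimal Databricks/PySpark sketch of the ingest-and-transform step an ADF pipeline might trigger; the ADLS path, storage account, column names, and target table are assumptions made for the example.

```python
# Illustrative ingestion from ADLS Gen2 with a simple quality rule, written to Delta.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Placeholder ADLS Gen2 landing path
src = "abfss://landing@mystorageacct.dfs.core.windows.net/sales/*.csv"

sales = (
    spark.read.option("header", "true").option("inferSchema", "true").csv(src)
         .withColumn("load_date", F.current_date())  # audit column
         .filter(F.col("amount") > 0)                # basic data-quality rule
)

# Persist as a Delta table for downstream Synapse / Power BI consumption
sales.write.format("delta").mode("append").saveAsTable("silver.sales")
```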
Posted 3 weeks ago
6.0 - 8.0 years
1 - 6 Lacs
Noida
Work from Office
Urgent Hiring: Microsoft Fabric Cloud Architect | 6-8 years | Noida | Immediate to 30 days notice.

Skills: Azure Cloud, MS Fabric, PySpark, DAX, Python, Azure Synapse, ADF, Databricks, ETL Pipelines.
Posted 2 months ago
12.0 - 14.0 years
12 - 14 Lacs
Hyderabad, Bengaluru
Hybrid
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of overall experience and 8+ years of relevant experience in Databricks, DLT, PySpark, and data modelling concepts.
- Proficiency in programming languages such as Python, PySpark, Scala, and SQL.
- Proficiency in DLT.
- Proficiency in SQL.
- Proficiency in data modelling concepts, in particular dimensional data modelling (Star Schema, Snowflake Schema); see the sketch after this list.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
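As an illustrative sketch of the dimensional modelling called for above, the PySpark example below derives one dimension and one fact table (a small star schema) and persists them as Delta tables; the source table and column names are hypothetical.

```python
# Illustrative star-schema derivation with PySpark on Databricks.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.table("bronze.orders")  # hypothetical raw table

# Dimension: one row per customer, with a surrogate key
dim_customer = (
    orders.select("customer_id", "customer_name", "country")
          .dropDuplicates(["customer_id"])
          .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: measures plus the foreign key to the dimension
fact_orders = (
    orders.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
          .select("order_id", "customer_sk", "order_date", "quantity", "amount")
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("gold.dim_customer")
fact_orders.write.format("delta").mode("overwrite").saveAsTable("gold.fact_orders")
```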
Posted 3 months ago
4.0 - 9.0 years
5 - 10 Lacs
Chennai, Bengaluru
Work from Office
Job Purpose:
We are seeking an experienced Azure Data Engineer with 4 to 13 years of proven expertise in Data Lakes, Lakehouse, Synapse Analytics, Databricks, T-SQL, SQL Server, Synapse DB, and Data Warehouses, with working experience in ETL, data catalogs, metadata, DWH, MPP systems, OLTP, and OLAP systems, and strong communication skills.

Key Responsibilities:
- Create data lakes from scratch, configure existing systems, and provide user support.
- Understand different datasets and storage elements in order to bring in data.
- Good knowledge of and work experience with ADF and Synapse data pipelines.
- Good knowledge of Python, PySpark, and Spark SQL.
- Implement data security at the database and data movement layers.
- Experience with CI/CD data pipelines.
- Work with internal teams to design, develop, and maintain software.

Qualifications & Key Skills Required:
- Expertise in Data Lakes, Lakehouse, Synapse Analytics, Databricks, T-SQL, SQL Server, Synapse DB, and Data Warehouses.
- Hands-on experience in ETL and ELT, handling large volumes of data and files.
- Working knowledge of JSON, Parquet, CSV, Excel, structured, unstructured, and other data sets.
- Exposure to a source control management tool such as TFS, Git, or SVN.
- Understanding of non-functional requirements.
- Proficiency in data catalogs, metadata, DWH, MPP systems, OLTP, and OLAP systems.
- Experience with Azure Data Fabric, MS Purview, and MDM tools is an added advantage.
- A good team player and excellent communicator.
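A minimal sketch of the Spark SQL pattern commonly used in Synapse (or Databricks) notebooks to curate lake files for the warehouse layer; the ADLS paths and column names are placeholders, not details from the posting.

```python
# Illustrative Spark SQL curation of raw lake files into a daily aggregate.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Register raw Parquet files in the data lake as a temporary view
spark.read.parquet("abfss://raw@datalakeacct.dfs.core.windows.net/events/") \
     .createOrReplaceTempView("raw_events")

# Curate with Spark SQL and persist for the warehouse / OLAP layer
curated = spark.sql("""
    SELECT event_date,
           event_type,
           COUNT(*) AS event_count
    FROM raw_events
    GROUP BY event_date, event_type
""")

curated.write.mode("overwrite").parquet(
    "abfss://curated@datalakeacct.dfs.core.windows.net/daily_events/"
)
```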
Posted 3 months ago
10 - 18 years
35 - 55 Lacs
Hyderabad, Bengaluru, Mumbai (All Areas)
Hybrid
Warm Greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 18 years
Location: Pan India

Job Description:
- Experience in Synapse with PySpark.
- Knowledge of Big Data pipelines / Data Engineering.
- Working knowledge of the MSBI stack on Azure.
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage.
- Hands-on experience with visualization tools like Power BI.
- Implement end-to-end data pipelines using Cosmos / Azure Data Factory.
- Good analytical thinking and problem solving.
- Good communication and coordination skills; able to work as an individual contributor.
- Requirement analysis; create, maintain, and enhance Big Data pipelines.
- Daily status reporting and interaction with leads.
- Version control (ADO, Git) and CI/CD.
- Marketing campaign experience; data platform and product telemetry.
- Data validation and data quality checks of new streams.
- Monitoring of data pipelines created in Azure Data Factory.
- Updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
Posted 4 months ago