10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
This role sits within the Enterprise Data Office and Product Solution team, focused on ensuring accurate, timely, and fit-for-purpose data for business, risk management, and regulatory reporting requirements. You will engage with Markets, Risk, Finance, Tech, the Client org, and Data Engineering teams to gather requirements, understand application processing, identify gaps, and design systematic solutions for business needs. An average day is highly collaborative, focused on reaching out to application teams and users to understand how Markets products are processed in the regulatory reporting data flow, and on documenting product data flows along with the transformation/mapping/enrichment logic within enterprise systems.
Key Responsibilities:
Understand derivatives data flows within Citi for Equities, FX, IRS, Fixed Income, Commodities, etc.
Perform data analysis for derivatives products across systems for target-state adoption and resolution of data gaps/issues
Work in close coordination with Technology, Business Managers, and other stakeholders to fulfill delivery objectives
Create a product vision aligned to business priorities and a corresponding roadmap to delivery
Partner with senior team members, leaders, and a widely distributed global user community to define and implement solutions
Lead assessment of end-to-end data flows for all data elements used in regulatory reports
Document current- and target-state data mappings and produce gap assessments
Coordinate with the business to identify critical data elements, define standards and quality expectations, and prioritize remediation of data issues
Identify the appropriate strategic source for critical data elements
Design and implement data governance controls, including data quality rules and data reconciliation
Design systematic solutions to eliminate manual processes/adjustments and remediate tactical solutions
Prepare detailed requirement specifications containing calculations, data transformations, and aggregation logic
Perform functional testing and data validations
Skills & Qualification
10+ years of combined experience in the banking and financial services industry, information technology, and/or data controls and governance; preferably an Engineering graduate with a postgraduate degree in Finance
Extensive experience in the capital markets business and processes
Deep understanding of derivative products (e.g., Equities, FX, IRS, Commodities)
Strong data analysis skills using Excel, SQL, Python, PySpark, etc.
Experience with data management processes, tools, and applications, including process mapping and lineage toolsets
Actively managed various aspects of data initiatives including analysis, planning, execution, and day-to-day production management
Ability to identify and solve problems throughout the product development process
Analytical thinking – ability to break down complex data structures and processes to identify issues and develop logical models that meet business needs
Strong knowledge of structured/unstructured databases, data modeling, data management, rapid/iterative development methodologies, and data governance tools
Strong understanding of data governance issues, policies, regulatory requirements, and industry information affecting the business environment
Demonstrated stakeholder management skills
Excellent communication skills – able to communicate with technical and non-technical stakeholders to gather requirements and develop clear documentation
Excellent presentation, business and technical writing, and verbal communication skills to support decision-making and actions
Excellent problem-solving and critical thinking skills to recognize and comprehend complex data flows and designs
Self-motivated and able to dynamically determine priorities
Data visualization skills – can help create visual representations of data models and provide input to the UX/UI team to make it easier to communicate complex model relationships with stakeholders
Job Family Group: Product Management and Development
Job Family: Product Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
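The listing above asks for data quality rules and source-to-target reconciliation built with SQL/Python/PySpark. Below is a minimal, illustrative PySpark sketch of both checks; the paths, table layout, and column names (trade_id, notional, etc.) are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("derivatives_dq_checks").getOrCreate()

# Hypothetical source and target extracts of derivative trades in the reporting flow.
source = spark.read.parquet("/data/source/fx_trades")
target = spark.read.parquet("/data/regreport/fx_trades")

# Data quality rule: critical data elements must be populated.
critical_cols = ["trade_id", "notional", "currency", "counterparty_id"]
dq_failures = source.filter(" OR ".join(f"{c} IS NULL" for c in critical_cols))

# Reconciliation: row counts and total notional should match between source and target.
recon = {
    "source_rows": source.count(),
    "target_rows": target.count(),
    "source_notional": source.agg(F.sum("notional")).first()[0],
    "target_notional": target.agg(F.sum("notional")).first()[0],
}

print(f"DQ failures: {dq_failures.count()}")
print(recon)
```

In practice these checks would feed a control dashboard or break report rather than plain print statements.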
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Duties for this role include but are not limited to: supporting the design, build, test, and maintenance of data pipelines at big data scale; assisting with updating data from multiple data sources; working on batch processing of collected data and matching its format to the stored data, making sure the data is ready to be processed and analyzed; assisting with keeping the ecosystem and the pipeline optimized and efficient; troubleshooting standard performance and data-related problems and providing L3 support; and implementing parsers, validators, transformers, and correlators to reformat, update, and enhance the data.
Data Engineers play a pivotal role within Dataworks, focused on creating and driving engineering innovation and facilitating the delivery of key business initiatives. Acting as a "universal translator" between IT, business, software engineers, and data scientists, data engineers collaborate across multi-disciplinary teams to deliver value. Data Engineers will work on those aspects of the Dataworks platform that govern the ingestion, transformation, and pipelining of data assets, both to end users within FedEx and into data products and services that may be externally facing. Day-to-day, they will be deeply involved in code reviews and large-scale deployments.
Essential Job Duties & Responsibilities
Understanding in depth both the business and technical problems Dataworks aims to solve
Building tools, platforms, and pipelines to enable teams to clearly and cleanly analyze data, build models, and drive decisions
Scaling up from "laptop-scale" to "cluster-scale" problems, in terms of both infrastructure and problem structure and technique
Collaborating across teams to drive the generation of data-driven operational insights that translate to high-value, optimized solutions
Delivering tangible value very rapidly, collaborating with diverse teams of varying backgrounds and disciplines
Codifying best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases
Interacting with senior technologists from the broader enterprise and outside of FedEx (partner ecosystems and customers) to create synergies and ensure smooth deployments to downstream operational systems
Skill/Knowledge Considered a Plus
Technical background in computer science, software engineering, database systems, distributed systems
Fluency with distributed and cloud environments and a deep understanding of optimizing computational considerations with theoretical properties
Experience in building robust cloud-based data engineering and curation solutions to create data products useful for numerous applications
Detailed knowledge of the Microsoft Azure tooling for large-scale data engineering efforts and deployments is highly preferred; experience with any combination of the following Azure tools: Azure Databricks, Azure Data Factory, Azure SQL D, Azure Synapse Analytics
Developing and operationalizing capabilities and solutions, including under near-real-time, high-volume streaming conditions
Hands-on development skills with the ability to work at the code level and help debug hard-to-resolve issues
A compelling track record of designing and deploying large-scale technical solutions which deliver tangible, ongoing value
Direct experience having built and deployed robust, complex production systems that implement modern data processing methods at scale
Ability to context-switch, to provide support to dispersed teams which may need an "expert hacker" to unblock an especially challenging technical obstacle, and to work through problems as they are still being defined
Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value
An 'engineering' mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress, or magnify impact
Comfort with working with distributed teams on code-based deliverables, using version control systems and code reviews
Ability to conduct data analysis, investigation, and lineage studies to document and enhance data quality and access
Use of agile and DevOps practices for project and software management, including continuous integration and continuous delivery
Demonstrated expertise working with some of the following common languages and tools:
Spark (Scala and PySpark), Kafka, and other high-volume data tools
SQL and NoSQL storage tools, such as MySQL, Postgres, MongoDB/CosmosDB
Java, Python data tools
Azure DevOps experience to track work, develop using git-integrated version control patterns, and build and utilize CI/CD pipelines
Working knowledge and experience implementing data architecture patterns to support varying business needs
Experience with different data types (JSON, XML, Parquet, Avro, unstructured) for both batch and streaming ingestion
Use of Azure Kubernetes Service, Event Hubs, or other related technologies to implement streaming ingestion
Experience developing and implementing alerting and monitoring frameworks
Working knowledge of Infrastructure as Code (IaC) through Terraform to create and deploy resources
Implementation experience across different data stores, messaging systems, and data processing engines
Data integration through APIs and/or REST services
Power Platform (Power BI, Power Apps, Power Automate) development experience a plus
Additional Job Description
Analytical Skills, Accuracy & Attention to Detail, Planning & Organizing Skills, Influencing & Persuasion Skills, Presentation Skills
FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer, and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.
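As an illustration of the Spark-plus-Kafka streaming ingestion named in the tools list above, here is a minimal PySpark Structured Streaming sketch. It assumes the Kafka connector package is available on the cluster; the broker, topic, schema, and lake paths are placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

# Hypothetical schema for JSON events arriving on a Kafka topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("device_id", StringType()),
    StructField("value", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "telemetry")                  # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Write the parsed stream to a data lake landing zone with checkpointing for exactly-once recovery.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/lake/bronze/telemetry")
    .option("checkpointLocation", "/lake/_checkpoints/telemetry")
    .trigger(processingTime="1 minute")
    .start()
)
```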
Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, returning these profits back into the business and investing back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being and value their contributions to the company.
Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 2 weeks ago
2.0 - 6.0 years
8 - 18 Lacs
Gurugram
Remote
Role Characteristics: The Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business Development, Ad Operations) by developing scalable analytical solutions, identifying problems, defining KPIs, and monitoring them to measure the impact/success of product improvements and changes and to streamline processes. This is an exciting and challenging role that will enable you to work with large data sets, expose you to cutting-edge analytical techniques, let you work with the latest AWS analytics infrastructure (Redshift, S3, Athena), and give you experience in the usage of location data to drive businesses. Working in a dynamic start-up environment will give you significant opportunities for growth within the organization. A successful applicant will be passionate about technology and developing a deep understanding of human behavior in the real world. They will also have excellent communication skills, be able to synthesize and present complex information, and be a fast learner.
You Will:
Perform root cause analysis with minimum guidance to figure out reasons for sudden changes/abnormalities in metrics (a small illustrative sketch follows this listing)
Understand the objective/business context of various tasks and seek clarity by collaborating with different stakeholders (like Product and Engineering)
Derive insights and put them together to build a story that solves a given problem
Suggest process improvements in terms of script optimization and automating repetitive tasks
Create and automate reports and dashboards through Python to track metrics against given requirements
Technical Skills (Must have)
B.Tech degree in Computer Science, Statistics, Mathematics, Economics, or related fields
4-6 years of experience working with data and conducting statistical and/or numerical analysis
Ability to write SQL code
Scripting/automation using Python
Hands-on experience with a data visualisation tool like Looker/Tableau/QuickSight
Basic to advanced understanding of statistics
Other Skills (Must have)
Willing and able to quickly learn about new businesses, database technologies, and analysis techniques
Strong oral and written communication
Understanding of patterns/trends and the ability to draw insights from them
Preferred Qualifications (Nice to have)
Experience working with large datasets
Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
Hands-on experience with AWS services like Lambda, Step Functions, Glue, EMR, plus exposure to PySpark
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
Parental leave – Maternity and Paternity
Flexible Time Off (Earned Leave, Sick Leave, Birthday Leave, Bereavement Leave & Company Holidays)
In-office daily catered lunch
Fully stocked snacks/beverages
Health cover for any hospitalization.
Covers both nuclear family and parents
Tele-med for free doctor consultation, discounts on health checkups and medicines
Wellness/Gym reimbursement
Pet expense reimbursement
Childcare expenses and reimbursements
Employee assistance program
Employee referral program
Education reimbursement program
Skill development program
Cell phone reimbursement (Mobile Subsidy program)
Internet reimbursement
Birthday treat reimbursement
Employee Provident Fund Scheme offering different tax-saving options such as VPF, with employee and employer contributions up to 12% of Basic
Creche reimbursement
Co-working space reimbursement
NPS employer match
Meal card for tax benefit
Special benefits on salary account
We are an equal opportunity employer and value diversity, inclusion and equity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
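The responsibilities in this listing centre on root-cause analysis of sudden metric changes and Python-automated reporting. A small pandas sketch of that kind of check is shown below; the file name, column names, and the 20% deviation threshold are illustrative assumptions, not part of the role description.

```python
import pandas as pd

# Hypothetical daily KPI extract, e.g. pulled from Redshift/Athena into a DataFrame.
df = pd.read_csv("daily_metrics.csv", parse_dates=["date"])  # columns: date, region, impressions

# Flag days where the metric deviates sharply from its trailing 7-day average,
# then break the anomaly down by dimension to suggest a root cause.
daily = df.groupby("date", as_index=False)["impressions"].sum().sort_values("date")
daily["baseline"] = daily["impressions"].rolling(7, min_periods=7).mean().shift(1)
daily["pct_change"] = (daily["impressions"] - daily["baseline"]) / daily["baseline"]
anomalies = daily[daily["pct_change"].abs() > 0.2]

for day in anomalies["date"]:
    by_region = (
        df[df["date"] == day]
        .groupby("region")["impressions"].sum()
        .sort_values()
    )
    print(day.date(), "largest contributors:", by_region.tail(3).to_dict())
```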
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever.
About Profile – Smart Manufacturing and AI (Data Science Engineer)
Micron Technology’s vision is to transform how the world uses information to enrich life, and our commitment to people, innovation, tenacity, collaboration, and customer focus allows us to fulfill our mission to be a global leader in memory and storage solutions. This means conducting business with integrity, accountability, and professionalism while supporting our global community.
Describe the function of the role and how it fits into your department
As a Data Science Engineer at Micron Technology Inc., you will be a key member of a multi-functional team responsible for developing and growing Micron’s methods and systems for applied data analysis, modeling, and reporting. You will collaborate with other data scientists, engineers, technicians, and data mining teams to design and implement systems to transform and process data extracted from Micron’s business systems, apply advanced statistical and mathematical methods to analyze the data, create diagnostic and predictive models, and create dynamic presentation layers for use by high-level engineers and managers throughout the company. You will create new solutions as well as support, configure, and improve existing solutions.
Why would a candidate love to work for your group and team?
We are a Smart Manufacturing and AI organization with a goal to spearhead Industry 4.0 transformation and enable accelerated intelligence and digital operations in the company. Our teams work on projects that help solve complex real-time business problems, significantly improving yield, cycle time, and quality while reducing the cost of our products. This role also offers a great opportunity to work closely with data scientists, I4.0 analysts, and engineers, and with the latest big data and cloud-based platforms and skill sets. We highly welcome new ideas and are a large proponent of innovation.
What are your expectations for the position?
We are seeking Data Science Engineers who are highly passionate about data and associated analysis techniques, can quickly adapt to learning new skills, and can design and implement state-of-the-art Data Science and ML pipelines on-prem and on cloud. You will interact with experienced Data Scientists, Data Engineers, Business Area Engineers, and UX teams to identify questions and issues for Data Science, AI, and advanced analysis projects and for improvement of existing tools. In this position, you will help develop software programs, algorithms, and/or automated processes to transform and process data from multiple sources, apply statistical and ML techniques to analyze data, discover underlying patterns or improve prediction capabilities, and deploy advanced visualizations on modern UI platforms (a minimal modeling sketch follows this listing). There will be significant opportunities to perform exploratory and new solution development activities.
Roles and responsibilities can include but are not limited to:
Broad Knowledge and Experience In
Strong desire to grow a career as a Data Scientist in highly automated industrial manufacturing, doing analysis and machine learning on terabytes and petabytes of diverse datasets.
Ability to extract data from different databases via SQL and other query languages, and to apply data cleansing, outlier identification, and missing data techniques.
Ability to apply the latest mathematical and statistical techniques to analyze data and uncover patterns.
Interest in building web applications as part of the job scope.
Knowledge in cloud-based analytics and machine learning modeling.
Knowledge in building APIs for application integration.
Knowledge in the areas of statistical modeling, feature extraction and analysis, feature engineering, and supervised/unsupervised/semi-supervised learning.
Data analysis and validation skills.
Strong software development skills.
Above-Average Skills In Programming
Fluency in Python
Knowledge of statistics, machine learning, and other advanced analytical methods
Knowledge of JavaScript, AngularJS 2.0, and Tableau is an added advantage
An OOPS background is an added advantage
Understanding of PySpark and/or libraries for distributed and parallel processing is an added advantage
Knowledge of TensorFlow and/or other statistical software, including scripting capability for automating analyses
Knowledge of time series data, images, semi-supervised learning, and data with frequently changing distributions is a plus
Understanding of Manufacturing Execution Systems (MES) is a plus
Demonstrated Ability To
Work in a dynamic, fast-paced work environment
Be self-motivated, with the ability to work under minimal direction
Adapt to new technologies and learn quickly
Bring a passion for data and information, with strong analytical, problem-solving, and organizational skills
Work in multi-functional groups, with diverse interests and requirements, toward a common objective
Communicate very well with distributed teams (written, verbal, and presentation)
Education
Bachelor’s or Master’s Degree in Computer Science, Mathematics, Data Science, or Physics. CGPA requirement: 7.0 CGPA and above.
About Micron Technology, Inc.
We are an industry leader in innovative memory and storage solutions transforming how the world uses information to enrich life for all. With a relentless focus on our customers, technology leadership, and manufacturing and operational excellence, Micron delivers a rich portfolio of high-performance DRAM, NAND, and NOR memory and storage products through our Micron® and Crucial® brands. Every day, the innovations that our people create fuel the data economy, enabling advances in artificial intelligence and 5G applications that unleash opportunities — from the data center to the intelligent edge and across the client and mobile user experience. To learn more, please visit micron.com/careers
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran or disability status. To request assistance with the application process and/or for reasonable accommodations, please contact hrsupport_india@micron.com
Micron prohibits the use of child labor and complies with all applicable laws, rules, regulations, and other international and industry labor standards. Micron does not charge candidates any recruitment fees or unlawfully collect any other payment from candidates as consideration for their employment with Micron.
AI alert: Candidates are encouraged to use AI tools to enhance their resume and/or application materials. However, all information provided must be accurate and reflect the candidate's true skills and experiences.
Misuse of AI to fabricate or misrepresent qualifications will result in immediate disqualification.
Fraud alert: Micron advises job seekers to be cautious of unsolicited job offers and to verify the authenticity of any communication claiming to be from Micron by checking the official Micron careers website.
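The role above describes applying statistical and ML techniques to manufacturing data to build diagnostic and predictive models. A minimal scikit-learn baseline for that kind of workflow might look like the following; the data file, feature columns, and excursion label are hypothetical placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical equipment/sensor extract with a binary yield-excursion label.
df = pd.read_parquet("sensor_features.parquet")  # placeholder path
X = df.drop(columns=["lot_id", "excursion"])
y = df["excursion"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# A simple diagnostic/predictive baseline before investing in heavier pipelines.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Feature importances give engineers a first hint at which signals drive excursions.
top = pd.Series(model.feature_importances_, index=X.columns).nlargest(10)
print(top)
```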
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Extensive implementation experience in the data analytics space, or a senior developer role in a modern technology stack
Excellent programming skills and proficiency in at least one of the major programming/scripting languages used in Gen AI orchestration, such as Python, PySpark, or Java
Ability to build API-based scalable solutions and to debug and troubleshoot software or design issues
Hands-on exposure to integrating at least one of the popular LLMs (OpenAI GPT, PaLM 2, Dolly, Claude 2, Cohere, etc.) using API endpoints
Thorough understanding of prompt engineering; implementation exposure to LLM agent frameworks like LangChain and vector databases such as Pinecone, Chroma, or FAISS
Ability to quickly conduct experiments and analyze the features and capabilities of newer versions of LLM models as they come onto the market
Basic data engineering skills to load structured and unstructured data from source systems to target data stores
Work closely with Gen AI leads and other team members to address requirements from the product backlog
Build and maintain data pipelines and infrastructure to support AI solutions
Desirable
Hands-on exposure to using cloud services (Azure, GCP, AWS) for storage, serverless logic, search, transcription, and chat
Extensive experience with data engineering and ETL tools is a big plus
Master's or Bachelor's degree in Computer Science, Statistics, or Mathematics
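The posting asks for hands-on exposure to integrating an LLM via API endpoints and for prompt-engineering basics. A minimal sketch of a direct REST call to OpenAI's chat completions endpoint is below; the model name and the prompt template are placeholders, and a fuller orchestration would typically add retrieval from a vector store, retries, and agent logic.

```python
import os
import requests

# Prompt assembled from a simple template; in a fuller orchestration this is where
# context retrieved from a vector database (Pinecone/Chroma/FAISS) would be injected.
question = "Summarise yesterday's failed pipeline runs in two sentences."
prompt = f"You are a data-operations assistant.\n\nQuestion: {question}"

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # placeholder model name; use whatever your account provides
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```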
Posted 2 weeks ago
5.0 - 10.0 years
11 - 21 Lacs
Bengaluru
Hybrid
The requirement is for a Data Engineer on the Google stack, not Microsoft, AWS, or other tools/clouds.
Data Extraction and Analysis: Using Google SQL or Google BigQuery to query and manipulate data from databases, and any programming language like Python / Go / Perl to extract and update data via scripts. Map-Reduce experience would be good to have.
Data Visualization: Creating reports and dashboards using tools like Google Analytics and Google Looker Studio to present findings in a clear and understandable way. Ability to create effective visualizations to communicate insights.
Problem Solving: Identifying data engineering requirements, developing data pipelines, and using data to propose solutions and recommendations.
Performance Optimization: Analyzing data, SQL, and stored procedures to identify areas for improvement in query performance. Applying performance-improvement techniques like indexing and partitioning. Familiarity with data pipeline performance improvements using parallel processing, caching, efficient data storage formats, and in-memory computing.
Scheduling: Familiarity with a job scheduler tool like Dreampipe or Airflow and with how to schedule jobs.
Release Engineering: Familiarity with on-demand and scheduled release management processes.
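To illustrate the BigQuery querying and partition-aware performance tuning described above, here is a small sketch using the google-cloud-bigquery Python client; the project, dataset, table, and column names are invented for the example.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Restricting the scan to recent partitions is the usual first lever for
# both query performance and cost on partitioned BigQuery tables.
sql = """
SELECT device_id,
       COUNT(*)   AS readings,
       AVG(value) AS avg_value
FROM `my-project.telemetry.measurements`   -- hypothetical date-partitioned table
WHERE DATE(event_time) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY device_id
ORDER BY readings DESC
LIMIT 20
"""

for row in client.query(sql).result():
    print(row.device_id, row.readings, round(row.avg_value, 2))
```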
Posted 2 weeks ago
7.0 years
0 Lacs
India
Remote
Job Title: MS Fabric Solution Engineer (Lead and Architect roles)
Experience: 7-10 Years
Location: Remote
Budget: 1.2 LPM for 7+ years (Lead role) & 1.4 LPM for 8+ years (Architect)
Shift: IST
JD for MS Fabric Solution Engineer
Key Responsibilities:
● Lead the technical design, architecture, and hands-on implementation of Microsoft Fabric PoCs. This includes translating business needs into effective data solutions, often applying Medallion Architecture principles within the Lakehouse.
● Develop and optimize ELT/ETL pipelines for diverse data sources:
o Static data (e.g., CIM XML, equipment models, Velocity Suite data).
o Streaming data (e.g., measurements from grid devices, Event Hub and IoT Hub).
● Seamlessly integrate Fabric with internal systems (e.g., CRM, ERP) using RESTful APIs, data mirroring, Azure Integration Services, and CDC (Change Data Capture) mechanisms.
● Hands-on configuration and management of core Fabric components: OneLake, Lakehouse, Notebooks (PySpark/KQL), and Real-Time Analytics databases.
● Facilitate data access via GraphQL interfaces, Power BI Embedded, and Direct Lake connections, ensuring optimal performance for self-service BI and adhering to RLS/OLS.
● Work closely with Microsoft experts, SMEs, and stakeholders.
● Document architecture, PoC results, and provide recommendations for production readiness and data governance (e.g., Purview integration).
______________
Required Skills & Experience:
● 5–10 years of experience in Data Engineering / BI / Cloud Analytics, with at least 1–2 projects using Microsoft Fabric (or a strong Power BI + Synapse background transitioning to Fabric).
● Proficient in:
o OneLake, Data Factory, Lakehouse, Real-Time Intelligence, Dataflow Gen2
o Ingestion using CIM XML, CSV, APIs, SDKs
o Power BI Embedded, GraphQL interfaces
o Azure Notebooks / PySpark / Fabric SDK
● Experience with data modeling (asset registry, nomenclature alignment, schema mapping).
● Familiarity with real-time streaming (Kafka/Kinesis/IoT Hub) and data governance concepts.
● Strong problem-solving and debugging skills.
● Prior experience with PoC/Prototype-style projects with tight timelines.
______________
Good to Have:
● Knowledge of grid operations / energy asset management systems.
● Experience working on Microsoft-Azure joint engagements.
● Understanding of AI/ML workflow integration via Azure AI Foundry or similar.
● Relevant certifications: DP-600/700 or DP-203.
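A minimal sketch of the Medallion-style bronze-to-silver step mentioned in the responsibilities, written as a Fabric/Databricks-style PySpark notebook cell. It assumes an ambient `spark` session; the lakehouse paths, table names, and columns are illustrative only.

```python
from pyspark.sql import functions as F

# Bronze: land the raw static extract as-is (path and table names are illustrative).
raw = spark.read.option("multiline", "true").json("Files/landing/equipment_models/")
raw.write.mode("append").saveAsTable("bronze_equipment_models")

# Silver: cleanse, conform naming, and deduplicate on the business key.
silver = (
    spark.read.table("bronze_equipment_models")
    .withColumn("asset_id", F.upper(F.col("assetId")))       # hypothetical source column
    .withColumn("ingest_date", F.current_date())
    .dropDuplicates(["asset_id"])
    .select("asset_id", "manufacturer", "model", "rated_capacity_mw", "ingest_date")
)
silver.write.mode("overwrite").saveAsTable("silver_equipment_models")
```

Gold-layer aggregates and Direct Lake / Power BI models would then read from the silver table rather than the raw files.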
Posted 2 weeks ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with Hadoop ecosystem, Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
Posted 2 weeks ago
4.0 - 9.0 years
20 - 35 Lacs
Gurugram
Work from Office
Job Description
- The candidate should have extensive production experience (2+ years) in GCP
- Strong background in data engineering, with 2-3 years of experience in Big Data technologies including Hadoop, NoSQL, Spark, Kafka, etc.
- Exposure to enterprise application development is a must
Roles & Responsibilities
- 4-10 years of IT experience range is preferred
- Able to effectively use GCP managed services, e.g. Dataproc, Dataflow, Pub/Sub, Cloud Functions, BigQuery, GCS (at least 4 of these services)
- Good to have knowledge of Cloud Composer, Cloud SQL, Bigtable, Cloud Functions
- Strong experience in Big Data technologies (Hadoop, Sqoop, Hive, and Spark), including DevOps
- Good hands-on expertise in either Python or Java programming
- Good understanding of GCP core services like Google Cloud Storage, Google Compute Engine, Cloud SQL, Cloud IAM
- Good to have knowledge of GCP services like App Engine, GKE, Cloud Run, Cloud Build, Anthos
- Ability to drive the deployment of customers' workloads into GCP and provide guidance, a cloud adoption model, service integrations, appropriate recommendations to overcome blockers, and technical roadmaps for GCP cloud implementations
- Experience with technical solutions based on industry standards using GCP IaaS, PaaS, and SaaS capabilities
- Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies
- Act as a subject-matter expert or developer around GCP and become a trusted advisor to multiple teams
- Technical ability to become certified in required GCP technical certifications
Posted 2 weeks ago
5.0 - 10.0 years
20 - 30 Lacs
Hyderabad
Work from Office
About Position: Grow your career with an exciting opportunity with us, where you will be a part of creating software solutions that help to change lives - millions of lives. As a Data Engineer, you will have the opportunity to be a member of a focused team dedicated to helping to make the health care system work better for everyone. Here, you'll partner with some of the smartest people you've ever worked with to design solutions that meet a wide range of health consumer needs.
Role: Azure Data Engineer
Location: Hyderabad
Experience: 5 to 10 Years
Job Type: Full Time Employment
What You'll Do:
Design and implement scalable ETL/ELT pipelines using Azure Data Factory.
Develop and optimize big data solutions using Azure Databricks and PySpark.
Write efficient and complex SQL queries for data extraction, transformation, and analysis.
Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
Ensure data quality, integrity, and security across all data pipelines.
Monitor and troubleshoot data workflows and performance issues.
Implement best practices for data engineering, including CI/CD, version control, and documentation.
Expertise You'll Bring:
3+ years of experience in data engineering with a strong focus on Azure cloud technologies.
Proficiency in Azure Data Factory, Azure Databricks, PySpark, and SQL.
Experience with data modeling, data warehousing, and performance tuning.
Familiarity with version control systems like Git and CI/CD pipelines.
Benefits:
Competitive salary and benefits package
Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
Opportunity to work with cutting-edge technologies
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
Annual health check-ups
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment: Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
Accelerate growth, both professionally and personally
Impact the world in powerful, positive ways, using the latest technologies
Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
Unlock global opportunities to work and learn with the industry's best
Let's unleash your full potential at Persistent. "Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
Posted 2 weeks ago
7.0 - 12.0 years
11 - 21 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Must have experience with ETL/ELT tools and pipelines. Working experience with the Python libraries Pandas, NumPy, and SQLAlchemy for ETL. Strong understanding of data warehousing and development. Experience with relational SQL and NoSQL databases.
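Since the listing names Pandas, NumPy, and SQLAlchemy for ETL, a compact sketch of that pattern is included below; the connection strings, table names, and transformation rules are assumptions for illustration.

```python
import pandas as pd
import numpy as np
from sqlalchemy import create_engine

# Hypothetical connection strings; real credentials would come from a secrets store.
src = create_engine("postgresql+psycopg2://etl_user:secret@source-db:5432/sales")
dwh = create_engine("postgresql+psycopg2://etl_user:secret@warehouse:5432/dwh")

# Extract
orders = pd.read_sql("SELECT order_id, customer_id, amount, order_ts FROM orders", src)

# Transform: basic cleansing plus a derived column.
orders["amount"] = orders["amount"].fillna(0).astype(float)
orders["order_date"] = pd.to_datetime(orders["order_ts"]).dt.date
orders["amount_bucket"] = np.where(orders["amount"] >= 1000, "large", "standard")

# Load into a staging table in the warehouse.
orders.to_sql("stg_orders", dwh, if_exists="replace", index=False, chunksize=10_000)
```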
Posted 2 weeks ago
7.0 - 12.0 years
25 - 35 Lacs
Kochi, Bengaluru, Thiruvananthapuram
Hybrid
Position: Data Engineer – Azure Databricks
Experience: 7+ Years
Locations: Trivandrum, Kochi, Bangalore
No. of Positions: 20
Notice Period: 0 – 15 Days (Strictly)
CTC: Up to 40 LPA (Case-to-case basis)
Mandatory Skills: Azure Databricks, PySpark, SQL, Python
Key Responsibilities:
Develop and optimize robust data pipelines using Databricks, PySpark, and Azure
Work on complex ETL/ELT processes, transforming and modeling data for analytics and reporting
Build scalable data solutions using relational and big data engines
Apply a strong understanding of data warehousing concepts (e.g., Kimball/Star Schema)
Collaborate with cross-functional teams in Agile environments
Ensure clean code, versioning, documentation, and pipeline maintainability
Must be able to work on a MacBook Pro (mandatory for script compatibility)
Requirements:
7+ years of hands-on experience in data engineering
Expertise in the Azure cloud platform and Databricks notebooks
Proficiency in SQL, Python, and PySpark
Good communication and collaboration skills
Solid documentation and version control practices
Preferred Candidates:
Immediate joiners or those with 0–15 days notice
Comfortable working from Trivandrum, Kochi, or Bangalore locations
Previous experience in data-heavy environments with real-time or batch processing
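The posting emphasises Kimball/star-schema modeling on Azure Databricks. One common building block is an idempotent dimension upsert with a Delta Lake MERGE, sketched below under the assumption of a Databricks notebook with an ambient `spark` session and hypothetical staging/dimension tables.

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Hypothetical staging data for a star-schema customer dimension.
updates = (
    spark.read.table("stg_customers")
    .withColumn("load_ts", F.current_timestamp())
)

dim = DeltaTable.forName(spark, "dim_customer")

# Idempotent upsert keyed on the business key: update changed rows, insert new ones.
(
    dim.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

A slowly changing dimension (SCD Type 2) variant would add effective-date columns and conditional `whenMatchedUpdate` clauses instead of updating all columns in place.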
Posted 2 weeks ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad/Secunderabad, Bangalore/Bengaluru, Delhi / NCR
Hybrid
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose – the relentless pursuit of a world that works better for people – we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.
Inviting applications for the role of Lead Consultant-Data Engineer, AWS+Python, Spark, Kafka for ETL!
Responsibilities
Develop, deploy, and manage ETL pipelines using AWS services, Python, Spark, and Kafka.
Integrate structured and unstructured data from various data sources into data lakes and data warehouses.
Design and deploy scalable, highly available, and fault-tolerant AWS data processes using AWS data services (Glue, Lambda, Step, Redshift).
Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
Develop application programs using Big Data technologies like Apache Hadoop and Apache Spark, with appropriate cloud-based services like Amazon AWS.
Build data pipelines by building ETL processes (Extract-Transform-Load).
Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
Responsible for analysing business and functional requirements, which involves a review of existing system configurations and operating methodologies as well as understanding evolving business needs.
Analyse requirements/user stories in business meetings and strategize the impact of requirements on different platforms/applications; convert business requirements into technical requirements.
Participate in design reviews to provide input on functional requirements, product designs, schedules, and/or potential problems.
Understand the current application infrastructure and suggest cloud-based solutions which reduce operational cost and require minimal maintenance while providing high availability with improved security.
Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work in the same way.
Coordinate with release management and other supporting teams to deploy changes to the production environment.
Qualifications we seek in you!
Minimum Qualifications
Experience in designing and implementing data pipelines, building data applications, and data migration on AWS.
Strong experience implementing data lakes using AWS services like Glue, Lambda, Step, Redshift.
Experience with Databricks will be an added advantage.
Strong experience in Python and SQL.
Proven expertise in AWS services such as S3, Lambda, Glue, EMR, and Redshift.
Advanced programming skills in Python for data processing and automation.
Hands-on experience with Apache Spark for large-scale data processing.
Experience with Apache Kafka for real-time data streaming and event processing.
Proficiency in SQL for data querying and transformation.
Strong understanding of security principles and best practices for cloud-based environments.
Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
Excellent problem-solving skills and the ability to troubleshoot complex issues in a distributed, cloud-based environment.
Strong communication and collaboration skills to work effectively with cross-functional teams.
Preferred Qualifications / Skills
Master's Degree in Computer Science, Electronics, or Electrical Engineering.
AWS Data Engineering & Cloud certifications, Databricks certifications.
Experience with multiple data integration technologies and cloud platforms.
Knowledge of Change & Incident Management processes.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Furthermore, please do note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role Description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
Outcomes
Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions.
Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
Document and communicate milestones/stages for end-to-end delivery.
Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
Validate results with user representatives, integrating the overall solution seamlessly.
Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes
Adherence to engineering processes and standards
Adherence to schedule / timelines
Adherence to SLAs where applicable
# of defects post delivery
# of non-compliance issues
Reduction of reoccurrence of known defects
Quick turnaround of production bugs
Completion of applicable technical/domain certifications
Completion of all mandatory training requirements
Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
Average time to detect, respond to, and resolve pipeline failures or data issues
Number of data security incidents or compliance breaches
Outputs Expected
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples
Proficiency in SQL, Python, or other programming languages used for data manipulation.
Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF (a minimal Airflow sketch follows this listing).
Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
Experience in performance tuning of data processes.
Expertise in designing and optimizing data warehouses for cost efficiency.
Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
Capacity to clearly explain and communicate design and development aspects to customers.
Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples
Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF and ADLF.
Proficiency in SQL for analytics, including windowing functions.
Understanding of data schemas and models relevant to various business contexts.
Familiarity with domain-related data and its implications.
Expertise in data warehousing optimization techniques.
Knowledge of data security concepts and best practices.
Familiarity with design patterns and frameworks in data engineering.
Additional Comments
Skills: Cloud Platforms (AWS, MS Azure, GCP, etc.); Containerization and Orchestration (Docker, Kubernetes, etc.); API development; Data pipeline construction using languages like Python, PySpark, and SQL; Data streaming (Kafka and Azure Event Hub, etc.); Data parsing (Akka and MinIO, etc.); Database management (SQL and NoSQL, including ClickHouse, PostgreSQL, etc.); Agile methodology (Git, Jenkins, or Azure DevOps, etc.); JS-like connectors/frameworks for frontend/backend; Collaboration and communication skills.
AWS Cloud, Azure Cloud, Docker, Kubernetes
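The skill examples above reference orchestration with Apache Airflow. A minimal DAG wiring an extract-transform-load sequence might look like the sketch below; the task bodies are placeholders, and the `schedule_interval` syntax assumes Airflow 2.x.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull the previous day's records from a source system.
    print("extracting for", context["ds"])


def transform(**context):
    # Placeholder: cleanse and conform the extracted data.
    print("transforming for", context["ds"])


def load(**context):
    # Placeholder: write conformed data to the warehouse.
    print("loading for", context["ds"])


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",  # run daily at 02:00
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```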
Posted 2 weeks ago
5.0 - 9.0 years
0 - 3 Lacs
New Delhi, Pune, Delhi / NCR
Work from Office
Roles and Responsibilities
Develop high-quality code in Python using PySpark, SQL, Flink/Spark Streaming, and other relevant technologies.
Design, develop, test, deploy, and maintain large-scale data processing pipelines using Azure Databricks.
Troubleshoot issues related to on-prem (Hadoop) and Databricks clusters and big data processing tasks.
Develop complex SQL queries to extract insights from large datasets stored in relational databases such as PostgreSQL.
Desired Candidate Profile
6-8 years of experience in software development with expertise in the BI & Analytics domain.
Bachelor's degree in any specialization (B.Tech/B.E.).
Strong understanding of cloud computing concepts on the Microsoft Azure platform.
Proficiency in programming languages such as Python, with hands-on experience working with PySpark.
Posted 2 weeks ago
5.0 - 10.0 years
17 - 20 Lacs
Chennai
Work from Office
Skills required – Senior ETL Developer
Mandatory skills (8+ years of experience in ETL development, with 4+ years in AWS PySpark scripting):
1. Experience deploying and running AWS-based data solutions using services or products such as S3, Lambda, SNS, and Step Functions; sound knowledge of AWS services is a must.
2. Strong in PySpark.
3. Hands-on working knowledge of Python packages like NumPy, Pandas, etc. (a minimal sketch follows this listing).
4. Should work as an individual contributor.
5. Good to have: familiarity with metadata management, data lineage, and principles of data governance.
Good to have:
1. Experience processing large sets of data transformations for both semi-structured and structured data.
2. Experience building a data lake and configuring delta tables.
3. Good experience with compute and cost optimization.
4. Understanding of the environment and use case, and readiness to build holistic data integration frameworks.
5. Good experience with MWAA (Airflow orchestration).
Soft skills:
1. Good communication to interact with IT stakeholders and the business.
2. Understanding of pain points through to delivery.
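Item 3 above mentions S3, Lambda, and the Pandas/NumPy stack. A small sketch of a Lambda handler that validates a CSV dropped into S3 is below; it assumes pandas is available via a Lambda layer, and the bucket layout and column names are invented for the example.

```python
import boto3
import pandas as pd

s3 = boto3.client("s3")


def lambda_handler(event, context):
    # Triggered by an S3 put event; the bucket and key come from the event payload.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"]
    df = pd.read_csv(body)

    # Light validation/transformation before handing off to the heavier PySpark job.
    df = df.dropna(subset=["order_id"])        # hypothetical required column
    df["amount"] = df["amount"].astype(float)

    out_key = f"validated/{key}"
    s3.put_object(Bucket=bucket, Key=out_key, Body=df.to_csv(index=False))
    return {"rows": len(df), "output": f"s3://{bucket}/{out_key}"}
```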
Posted 2 weeks ago
6.0 - 10.0 years
9 - 17 Lacs
Pune
Work from Office
Strong experience with IBM DataStage for ETL development and data transformation. Proficiency in Azure Data Factory (ADF), Snowflake, and PySpark. Interested candidates, please fill out the Google Form: https://forms.gle/A5ieWPGMFWrCZSGy5
Posted 2 weeks ago
8.0 - 11.0 years
15 - 25 Lacs
Pune
Work from Office
8+ years of experience in ETL/DW projects, with migration experience and team management, including delivery experience leading a team of 10+ resources. Proven expertise in Snowflake data warehousing, ETL, and data governance.
Posted 2 weeks ago
4.0 - 6.0 years
12 - 22 Lacs
Pune, Gurugram
Hybrid
Role: Data Engineer
Years of Experience: 3-6
Key Skills: PySpark, SQL, Azure, Python
Requirements:
2+ years of hands-on experience with PySpark development
2+ years of experience writing SQL queries
Strong SQL and data manipulation skills
Azure Cloud experience is good to have
Posted 2 weeks ago
3.0 years
0 Lacs
Kolkata, West Bengal, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
The opportunity
We are seeking a hands-on and motivated Azure DataOps Engineer to support our cloud-based data operations and workflows. This role is ideal for someone with strong foundational knowledge of Azure data services and data pipelines who is looking to grow in a fast-paced environment. You will work closely with senior engineers and analysts to manage data pipelines, ensure data quality, and assist in deployment and monitoring activities.
Your Key Responsibilities
Support the execution and monitoring of Azure Data Factory (ADF) pipelines and Azure Synapse workloads.
Assist in maintaining data in Azure Data Lake and troubleshoot ingestion and access issues.
Collaborate with the team to support Databricks notebooks and manage small transformation tasks.
Perform ETL operations and ensure timely and accurate data movement between systems.
Write and debug intermediate-level SQL queries for data validation and issue analysis.
Monitor pipeline health using Azure Monitor and Log Analytics, and escalate issues as needed.
Support deployment activities using Azure DevOps pipelines.
Maintain and update SOPs, and assist in documenting known issues and recurring tasks.
Participate in incident management and contribute to resolution and knowledge sharing.
Skills and Attributes for Success
Strong understanding of cloud-based data workflows, especially in Azure environments.
Analytical mindset with the ability to troubleshoot data pipeline and transformation issues.
Comfortable working with large datasets and navigating both structured and semi-structured data.
Ability to follow runbooks and SOPs and to collaborate effectively with other technical teams.
Willingness to learn new technologies and adapt in a dynamic environment.
Good communication skills to interact with stakeholders, document findings, and share updates.
Discipline to work independently, manage priorities, and escalate issues responsibly.
To qualify for the role, you must have
2–3 years of experience in DataOps or Data Engineering roles
Proven expertise in managing and troubleshooting data workflows within the Azure ecosystem
Experience working with Informatica CDI or similar data integration tools
Scripting and automation experience in Python/PySpark
Ability to support data pipelines in a rotational on-call or production support environment
Comfort working in a remote/hybrid and cross-functional team setup
Technologies and Tools
Must haves
Working knowledge of Azure Data Factory, Data Lake, and Synapse
Exposure to Azure Databricks – ability to understand and run existing notebooks
Understanding of ETL processes and data flow concepts
Good to have
Experience with Power BI or Tableau for basic reporting and data visualization
Exposure to Informatica CDI or any other data integration platform
Basic scripting knowledge in Python or PySpark for data processing or automation tasks
Proficiency in writing SQL for querying and analyzing structured data
Familiarity with Azure Monitor and Log Analytics for pipeline monitoring
Experience supporting DevOps deployments or familiarity with Azure DevOps concepts
What We Look For
Enthusiastic learners with a passion for DataOps and its practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 2 weeks ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Remote
Role & responsibilities
We are looking for a skilled Data Engineer with expertise in Python and Azure Databricks to build scalable data pipelines. The role requires strong SQL skills for designing, querying, and optimizing relational databases, and covers data ingestion, transformation, and orchestration across cloud platforms. Experience with coding best practices, performance tuning, and CI/CD in the Azure ecosystem is essential. Streamlit experience is required.
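As a concrete illustration of the pipeline work described above, the sketch below shows a minimal ingest-and-transform step in PySpark on Azure Databricks. The storage path, column names, and target table are hypothetical and not taken from the posting.

```python
# Minimal ingest-and-transform sketch for Azure Databricks (illustrative only).
# The ADLS path, column names, and target table are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Persist as a Delta table for downstream querying and reporting.
cleaned.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```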
Posted 2 weeks ago
7.0 years
0 Lacs
India
On-site
Job Title: MS Fabric Solution Engineer / Architect
Experience: 7-10 years
Shift: IST

JD for MS Fabric Solution Engineer

Key Responsibilities:
● Lead the technical design, architecture, and hands-on implementation of Microsoft Fabric PoCs, translating business needs into effective data solutions, often applying Medallion Architecture principles within the Lakehouse (see the sketch after this posting).
● Develop and optimize ELT/ETL pipelines for diverse data sources:
  o Static data (e.g., CIM XML, equipment models, Velocity Suite data).
  o Streaming data (e.g., measurements from grid devices, Event Hub and IoT Hub).
● Seamlessly integrate Fabric with internal systems (e.g., CRM, ERP) using RESTful APIs, data mirroring, Azure Integration Services, and CDC (Change Data Capture) mechanisms.
● Hands-on configuration and management of core Fabric components: OneLake, Lakehouse, Notebooks (PySpark/KQL), and Real-Time Analytics databases.
● Facilitate data access via GraphQL interfaces, Power BI Embedded, and Direct Lake connections, ensuring optimal performance for self-service BI and adhering to RLS/OLS.
● Work closely with Microsoft experts, SMEs, and stakeholders.
● Document architecture and PoC results, and provide recommendations for production readiness and data governance (e.g., Purview integration).

Required Skills & Experience:
● 7-10 years of experience in Data Engineering / BI / Cloud Analytics, with at least 1–2 projects using Microsoft Fabric (or a strong Power BI + Synapse background transitioning to Fabric).
● Proficient in:
  o OneLake, Data Factory, Lakehouse, Real-Time Intelligence, Dataflow Gen2
  o Ingestion using CIM XML, CSV, APIs, SDKs
  o Power BI Embedded, GraphQL interfaces
  o Azure Notebooks / PySpark / Fabric SDK
● Experience with data modeling (asset registry, nomenclature alignment, schema mapping).
● Familiarity with real-time streaming (Kafka/Kinesis/IoT Hub) and data governance concepts.
● Strong problem-solving and debugging skills.
● Prior experience with PoC/prototype-style projects with tight timelines.

Good to Have:
● Knowledge of grid operations / energy asset management systems.
● Experience working on Microsoft-Azure joint engagements.
● Understanding of AI/ML workflow integration via Azure AI Foundry or similar.
● Relevant certifications: DP-600/700 or DP-203.

If interested, please submit your CV to Khushboo@Sourcebae.com or share it via WhatsApp at 8827565832. Stay updated with our latest job opportunities and company news by following us on LinkedIn: https://www.linkedin.com/company/sourcebae
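For orientation, the following is a minimal bronze-to-silver step in the Medallion style referenced above, written as it might appear in a Fabric (or Databricks) PySpark notebook. The table names, columns, and renaming logic are illustrative assumptions rather than details of this role.

```python
# Minimal Medallion-style bronze-to-silver sketch (illustrative only).
# Table names, columns, and the nomenclature alignment are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.table("bronze_measurements")   # raw device measurements

silver = (
    bronze.select(
        F.col("deviceId").alias("device_id"),                 # align naming
        F.col("ts").cast("timestamp").alias("event_time"),
        F.col("value").cast("double").alias("measurement"),
    )
    .filter(F.col("measurement").isNotNull())
    .dropDuplicates(["device_id", "event_time"])
)

silver.write.format("delta").mode("append").saveAsTable("silver_measurements")
```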
Posted 2 weeks ago
8.0 - 13.0 years
7 - 11 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and ranked in the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY:
Position: Sr Consultant
Location: Capco locations (Bengaluru / Chennai / Hyderabad / Pune / Mumbai / Gurugram)
Band: M3/M4 (8 to 14 years)
Job Title: Senior Consultant - Data Engineer

Responsibilities
- Design, build and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability (an incremental-upsert sketch follows this posting).
- Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity.
- Work within the client's best-practice guidelines as set out by the Data Engineering Lead.
- Work with data modellers and testers to ensure pipelines are implemented correctly.
- Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions.

Role Requirements
- Strong Data Engineer with experience in Financial Services
- Knowledge of and experience building data pipelines in Azure Databricks
- A continual desire to implement strategic or optimal solutions and, where possible, avoid workarounds or short-term tactical solutions
- Ability to work within an Agile team

Experience/Skillset
- 8+ years of experience in data engineering
- Good skills in SQL, Python and PySpark
- Good knowledge of Azure Databricks (understanding of delta tables, Apache Spark, Unity Catalog)
- Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions
- Good knowledge of the SDLC
- Familiarity with Agile/Scrum ways of working
- Strong verbal and written communication skills
- Ability to manage multiple priorities and deliver to tight deadlines

WHY JOIN CAPCO
You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
- A diverse, inclusive, meritocratic culture

#LI-Hybrid
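As an illustration of the Databricks/Delta work this posting describes, here is a minimal incremental-upsert sketch using the Delta Lake MERGE API. The table names and join key are hypothetical assumptions, not part of the role description.

```python
# Minimal Delta Lake upsert sketch (illustrative only).
# Assumes a Databricks runtime with Delta Lake; table names and the join
# key "position_id" are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.daily_positions")        # incoming batch
target = DeltaTable.forName(spark, "gold.positions")    # curated Delta table

(
    target.alias("t")
    .merge(updates.alias("s"), "t.position_id = s.position_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```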
Posted 2 weeks ago
7.0 - 12.0 years
7 - 11 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and ranked in the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY:
Position: Sr Consultant
Location: Pune / Bangalore
Band: M3/M4 (7 to 14 years)

Role Description / Must Have Skills:
- 4+ years of experience (minimum) in PySpark and Scala + Spark
- Proficient in debugging and data analysis
- 4+ years of Spark experience
- Understanding of the SDLC and the Big Data application life cycle
- Experience with GitHub and Git commands
- Good to have: experience with CI/CD tools such as Jenkins and Ansible
- Fast problem solver and self-starter
- Experience using Control-M and ServiceNow (for incident management)
- Positive attitude and good communication skills (both written and verbal), without mother-tongue interference

WHY JOIN CAPCO
You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.

We offer:
- A work culture focused on innovation and creating lasting value for our clients and employees
- Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
- A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
- A diverse, inclusive, meritocratic culture

#LI-Hybrid
Posted 2 weeks ago
5.0 - 9.0 years
9 - 13 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and ranked in the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Big Data Tester
Location: Pune (for Mastercard)
Experience Level: 5-9 years

Minimum Skill Set Required / Must Have
- Python
- PySpark
- Testing skills and best practices for data validation (a reconciliation-check sketch follows this posting)
- SQL (hands-on experience, especially with complex queries) and ETL

Good to Have
- Unix
- Big Data: Hadoop, Spark, Kafka, NoSQL databases (MongoDB, Cassandra), Hive, etc.
- Data Warehouse (traditional): Oracle, Teradata, SQL Server
- Data Warehouse (modern cloud): Amazon Redshift, Google BigQuery, Snowflake
- AWS development experience (not mandatory, but beneficial)

Best Fit: Python + PySpark + Testing + SQL (hands-on) and ETL + Good to Have skills
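To illustrate the kind of validation a Big Data tester might automate for the skills listed above, here is a minimal source-to-target reconciliation check in PySpark. The table names are hypothetical assumptions, not part of the posting.

```python
# Minimal source-to-target reconciliation sketch (illustrative only).
# The staging and warehouse table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

source_count = spark.table("staging.transactions").count()
target_count = spark.table("warehouse.transactions").count()

assert source_count == target_count, (
    f"Row count mismatch: source={source_count}, target={target_count}"
)
```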
Posted 2 weeks ago