5.0 - 7.0 years
8 - 14 Lacs
Kolkata
Work from Office
Job Description: We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required Skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana.
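For orientation, here is a minimal sketch of the kind of FastAPI-over-ClickHouse service this listing describes. The `events` table, its columns, and the connection settings are hypothetical, and clickhouse-connect is just one of several Python clients; parameter-binding syntax can vary by client version.

```python
# Minimal sketch: expose a ClickHouse aggregation through a FastAPI endpoint.
# Table name, columns, and connection settings are hypothetical.
import clickhouse_connect
from fastapi import FastAPI

app = FastAPI()
ch = clickhouse_connect.get_client(host="localhost", port=8123)  # adjust for your cluster

@app.get("/events/daily-counts")
def daily_counts(days: int = 7) -> list[dict]:
    # Client-side parameter binding keeps user input out of the SQL string.
    result = ch.query(
        "SELECT toDate(event_time) AS day, count() AS events "
        "FROM events WHERE event_time >= now() - INTERVAL %(days)s DAY "
        "GROUP BY day ORDER BY day",
        parameters={"days": days},
    )
    return [{"day": str(day), "events": events} for day, events in result.result_rows]
```

Such a service would typically be run with `uvicorn module:app` and packaged in a container for the Kubernetes deployment the listing mentions.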
Posted 2 weeks ago
8.0 - 13.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Must-Have Skills:
- Azure Databricks / PySpark hands-on
- SQL/PL-SQL (advanced level)
- Snowflake – 2+ years
- Spark / data pipeline development – 2+ years
- Azure Repos / GitHub, Azure DevOps
- Unix shell scripting
- Cloud technology experience

Key Responsibilities:
1. Design, build, and manage data pipelines using Azure Databricks, PySpark, and Snowflake.
2. Analyze and resolve production issues (Tier 2 support with weekend/on-call rotation).
3. Write and optimize complex SQL/PL-SQL queries.
4. Collaborate on low-level and high-level design for data solutions.
5. Document all project deliverables and support deployment.

Good to Have:
- Knowledge of Oracle, Qlik Replicate, GoldenGate, Hadoop
- Job scheduler tools such as Control-M or Airflow

Behavioral:
- Strong problem-solving and communication skills
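As a hedged illustration of the Databricks/PySpark pipeline work listed above, here is a minimal batch transformation step. The source path, column names, and Delta output are assumptions; a Snowflake target would instead use the Snowflake Spark connector with account-specific options.

```python
# Minimal sketch of a batch pipeline step on Databricks/Spark.
# Source path, schema, and output location are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical source

daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("revenue"), F.count("*").alias("orders"))
)

# On Databricks, Delta is the default table format; plain Spark needs the delta package.
daily.write.mode("overwrite").format("delta").save("s3://example-bucket/curated/orders_daily/")
```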
Posted 2 weeks ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Hybrid
Qualification & Experience:
- Minimum of 8 years of experience as a Data Scientist/Engineer with demonstrated expertise in data engineering and cloud computing technologies.

Technical Responsibilities:
- Excellent proficiency in Python, with a strong focus on developing advanced skills.
- Extensive exposure to NLP and image processing concepts.
- Proficient in version control systems like Git.
- In-depth understanding of Azure deployments.
- Expertise in OCR, ML model training, and transfer learning.
- Experience working with unstructured data formats such as PDFs, DOCX, and images.
- Strong familiarity with data science best practices and the ML lifecycle.
- Strong experience with data pipeline development, ETL processes, and data engineering tools such as Apache Airflow, PySpark, or Databricks.
- Familiarity with cloud computing platforms like Azure, AWS, or GCP, including services like Azure Data Factory, S3, Lambda, and BigQuery.
- Tool Exposure: Advanced understanding and hands-on experience with Git, Azure, Python, R programming, and data engineering tools such as Snowflake, Databricks, or PySpark.
- Data mining, cleaning, and engineering: Leading the identification and merging of relevant data sources, ensuring data quality, and resolving data inconsistencies.
- Cloud Solutions Architecture: Designing and deploying scalable data engineering workflows on cloud platforms such as Azure, AWS, or GCP.
- Data Analysis: Executing complex analyses against business requirements using appropriate tools and technologies.
- Software Development: Leading the development of reusable, version-controlled code under minimal supervision.
- Big Data Processing: Developing solutions to handle large-scale data processing using tools like Hadoop, Spark, or Databricks.

Principal Duties & Key Responsibilities:
- Leading data extraction from multiple sources, including PDFs, images, databases, and APIs.
- Driving optical character recognition (OCR) processes to digitize data from images.
- Applying advanced natural language processing (NLP) techniques to understand complex data.
- Developing and implementing highly accurate statistical models and data engineering pipelines to support critical business decisions, and continuously monitoring their performance.
- Designing and managing scalable cloud-based data architectures using Azure, AWS, or GCP services.
- Collaborating closely with business domain experts to identify and drive key business value drivers.
- Documenting model design choices, algorithm selection processes, and dependencies.
- Collaborating effectively in cross-functional teams within the CoE and across the organization.
- Proactively seeking opportunities to contribute beyond assigned tasks.

Required Competencies:
- Exceptional communication and interpersonal skills.
- Proficiency in Microsoft Office 365 applications.
- Ability to work independently, demonstrate initiative, and provide strategic guidance.
- Strong networking, communication, and people skills.
- Outstanding organizational skills with the ability to work independently and as part of a team.
- Excellent technical writing skills.
- Effective problem-solving abilities.
- Flexibility and adaptability to work flexible hours as required.

Key Competencies / Values:
- Client Focus: Tailoring skills and understanding client needs to deliver exceptional results.
- Excellence: Striving for excellence as defined by clients, delivering high-quality work.
- Trust: Building and retaining trust with clients, colleagues, and partners.
- Teamwork: Collaborating effectively to achieve collective success.
- Responsibility: Taking ownership of performance and safety, ensuring accountability.
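To make the OCR-plus-NLP duties above concrete, here is a small hedged sketch that digitizes an image and extracts named entities. pytesseract and spaCy are illustrative choices, not tools mandated by the posting, and the model name and input file are placeholders.

```python
# Minimal sketch: OCR an image, then extract named entities from the text.
# Requires the Tesseract binary plus: pip install pytesseract pillow spacy
import pytesseract
from PIL import Image
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, installed separately

def extract_entities(image_path: str) -> list[tuple[str, str]]:
    text = pytesseract.image_to_string(Image.open(image_path))
    doc = nlp(text)
    # Return (entity text, label) pairs, e.g. ("Azure", "ORG").
    return [(ent.text, ent.label_) for ent in doc.ents]

print(extract_entities("scanned_invoice.png"))  # hypothetical input file
```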
Posted 2 weeks ago
1.0 - 3.0 years
3 - 6 Lacs
Hyderabad
Hybrid
What you will do
In this vital role, you will be responsible for the end-to-end development of an enterprise analytics and data mastering solution using Databricks and Power BI. This role requires expertise in both data architecture and analytics, with the ability to create scalable, reliable, and impactful enterprise solutions that support research cohort-building and the advanced research pipeline. The ideal candidate will have experience creating and surfacing large unified repositories of human data, based on integrations from multiple repositories and solutions, and will be extraordinarily skilled with data analysis and profiling. You will collaborate closely with key customers, product team members, and related IT teams to design and implement data models, integrate data from various sources, and ensure best practices for data governance and security. The ideal candidate will have a good background in data warehousing, ETL, Databricks, Power BI, and enterprise data mastering.

Responsibilities:
- Design and build scalable enterprise analytics solutions using Databricks, Power BI, and other modern data tools.
- Leverage data virtualization, ETL, and semantic layers to balance the need for unification, performance, and data transformation with the goal of reducing data proliferation.
- Break down features into work that aligns with the architectural direction and runway.
- Participate hands-on in pilots and proofs-of-concept for new patterns.
- Create robust documentation from data analysis and profiling, and from proposed designs and data logic.
- Develop advanced SQL queries to profile and unify data.
- Develop data processing code in SQL, along with semantic views, to prepare data for reporting.
- Develop Power BI models and reporting packages.
- Design robust data models and processing layers that support both analytical processing and operational reporting needs.
- Design and develop solutions based on best practices for data governance, security, and compliance within Databricks and Power BI environments.
- Ensure the integration of data systems with other enterprise applications, creating seamless data flows across platforms.
- Develop and maintain Power BI solutions, ensuring data models and reports are optimized for performance and scalability.
- Collaborate with key customers to define data requirements, functional specifications, and project goals.
- Continuously evaluate and adopt new technologies and methodologies to enhance the architecture and performance of data solutions.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients. The R&D Data Catalyst Team is responsible for building Data Searching, Cohort Building, and Knowledge Management tools that provide the Amgen scientific community with visibility into Amgen's wealth of human datasets, projects and study histories, and knowledge of various scientific findings. These solutions are pivotal tools in Amgen's goal to accelerate the speed of discovery and the speed to market of advanced precision medications.

Basic Qualifications:
- Master's degree and 1 to 3 years of data engineering experience, OR
- Bachelor's degree and 3 to 5 years of data engineering experience, OR
- Diploma and 7 to 9 years of data engineering experience.

Must-Have Skills:
- Minimum of 3 years of hands-on experience with BI solutions (preferably Power BI or Business Objects), including report development, dashboard creation, and optimization.
- Minimum of 3 years of hands-on experience building change-data-capture (CDC) ETL pipelines, data warehouse design and build, and enterprise-level data management.
- Hands-on experience with Databricks, including data engineering, optimization, and analytics workloads.
- Deep understanding of Power BI, including model design, DAX, and Power Query.
- Proven experience designing and implementing data mastering solutions and data governance frameworks.
- Expertise in cloud platforms (AWS), data lakes, and data warehouses.
- Strong knowledge of ETL processes, data pipelines, and integration technologies.
- Good communication and collaboration skills to work with cross-functional teams and senior leadership.
- Ability to assess business needs and design solutions that align with organizational goals.
- Exceptional hands-on capabilities with data profiling, data transformation, and data mastering.
- Success in mentoring and training team members.

Good-to-Have Skills:
- Experience in developing differentiated and deliverable solutions.
- Experience with human data, ideally human healthcare data.
- Familiarity with laboratory testing, patient data from clinical care, HL7, FHIR, and/or clinical trial data management.

Professional Certifications:
- ITIL Foundation or other relevant certifications (preferred).
- SAFe Agile Practitioner (6.0).
- Microsoft Certified: Data Analyst Associate (Power BI) or related certification.
- Databricks Certified Professional or similar certification.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Deep intellectual curiosity.
- The highest degree of initiative and self-motivation.
- Strong verbal and written communication skills, including presentation of complex technical/business topics to varied audiences.
- Confident technical leader.
- Ability to work effectively with global, remote teams, including the use of tools and artifacts to ensure clear and efficient collaboration across time zones.
- Ability to handle multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong problem-solving and analytical skills; ability to learn quickly and to retain and synthesize complex information from diverse sources.
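Since this listing emphasizes change-data-capture pipelines on Databricks, here is a hedged sketch of the common Delta Lake MERGE upsert pattern. The table paths, key column, and the `op` deletion flag are hypothetical, and real CDC feeds vary in how they mark operations.

```python
# Minimal CDC upsert sketch using Delta Lake MERGE (Databricks or Spark with delta-spark).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

changes = spark.read.parquet("/mnt/landing/patients_cdc/")    # hypothetical CDC feed
target = DeltaTable.forPath(spark, "/mnt/curated/patients/")   # hypothetical target table

(target.alias("t")
    .merge(changes.alias("s"), "t.patient_id = s.patient_id")
    .whenMatchedDelete(condition="s.op = 'D'")        # apply deletes flagged by the source
    .whenMatchedUpdateAll()                            # apply updates for remaining matches
    .whenNotMatchedInsertAll(condition="s.op != 'D'")  # apply inserts
    .execute())
```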
Posted 2 weeks ago
6.0 - 8.0 years
8 - 10 Lacs
Kolkata
Work from Office
Job Summary: We are seeking an experienced Data Engineer with strong expertise in Databricks, Python, PySpark, and Power BI, along with a solid background in data integration and the modern Azure ecosystem. The ideal candidate will play a critical role in designing, developing, and implementing scalable data engineering solutions and pipelines.

Key Responsibilities:
- Design, develop, and implement robust data solutions using Azure Data Factory, Databricks, and related data engineering tools.
- Build and maintain scalable ETL/ELT pipelines with a focus on performance and reliability.
- Write efficient and reusable code using Python and PySpark.
- Perform data cleansing, transformation, and migration across various platforms.
- Work hands-on with Azure Data Factory (ADF) for at least 1.5 to 2 years.
- Develop and optimize SQL queries and stored procedures, and manage large data sets using SQL Server, T-SQL, PL/SQL, etc.
- Collaborate with cross-functional teams to understand business requirements and provide data-driven solutions.
- Engage directly with clients and business stakeholders to gather requirements, suggest optimal solutions, and ensure successful delivery.
- Work with Power BI for basic reporting and data visualization tasks.
- Apply strong knowledge of data warehousing concepts, modern data platforms, and cloud-based analytics.
- Adhere to coding standards and best practices, including thorough documentation and testing (unit, integration, performance).
- Support the operations, maintenance, and enhancement of existing data pipelines and architecture.
- Estimate tasks and plan release cycles effectively.

Required Technical Skills:
- Languages & Frameworks: Python, PySpark
- Cloud & Tools: Azure Data Factory, Databricks, Azure ecosystem
- Databases: SQL Server, T-SQL, PL/SQL
- Reporting & BI Tools: Power BI (PBI)
- Data Concepts: Data Warehousing, ETL/ELT, Data Cleansing, Data Migration
- Other: Version control, Agile methodologies, good problem-solving skills

Preferred Qualifications:
- Experience with coding in Pysense within Databricks (added advantage)
- Solid understanding of cloud data architecture and analytics processes
- Ability to independently initiate and lead conversations with business stakeholders
Posted 2 weeks ago
7.0 - 11.0 years
32 - 35 Lacs
Hyderabad
Work from Office
About the job
We are seeking an experienced Data Engineering Specialist interested in challenging the status quo to ensure the seamless creation and operation of the data pipelines needed by Sanofi's advanced analytics, AI, and ML initiatives for the betterment of our global patients and customers. Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions, to accelerate R&D, manufacturing, and commercial performance and bring better drugs and vaccines to patients faster, to improve health and save lives.

Main Responsibilities:
- Establish technical designs to meet Sanofi requirements, aligned with the architectural and data standards.
- Own the entire back end of the application, including the design, implementation, testing, and troubleshooting of the core application logic, databases, data ingestion and transformation, data processing and orchestration of pipelines, APIs, CI/CD integration, and other processes.
- Fine-tune and optimize queries using Snowflake platform and database techniques.
- Optimize ETL/data pipelines to balance performance, functionality, and other operational requirements.
- Assess and resolve data pipeline issues to ensure performance and timeliness of execution.
- Assist with technical solution discovery to ensure technical feasibility.
- Assist in setting up and managing CI/CD pipelines and developing automated tests.
- Develop and manage microservices using Python.
- Conduct peer reviews for quality, consistency, and rigor of production-level solutions.
- Design application architecture for efficient concurrent user handling, ensuring optimal performance during high-usage periods.
- Own all areas of the product lifecycle: design, development, test, deployment, operation, and support.

About you

Qualifications:
- 5+ years of relevant experience developing backend, integration, data pipelining, and infrastructure solutions.
- Expertise in database optimization and performance improvement.
- Expertise in Python, PySpark, and Snowpark.
- Experience with data warehousing and object-relational databases (Snowflake and PostgreSQL), and with writing efficient SQL queries.
- Experience with cloud-based data platforms (Snowflake, AWS).
- Proficiency in developing robust, reliable APIs using Python and the FastAPI framework.
- Expertise in ELT and ETL, with experience working with large data sets and on performance and query optimization; IICS is a plus.
- Understanding of data structures and algorithms.
- Understanding of DBT is a plus.
- Experience with modern testing frameworks (SonarQube; K6 is a plus).
- Strong collaboration skills and willingness to work with others to ensure seamless integration of the server side and client side.
- Knowledge of DevOps best practices and associated tools is a plus, especially in the setup, configuration, maintenance, and troubleshooting of:
  - Containers and containerization technologies (Kubernetes, Argo, Red Hat OpenShift)
  - Infrastructure as code (Terraform)
  - Monitoring and logging (CloudWatch, Grafana)
  - CI/CD pipelines (JFrog Artifactory)
  - Scripting and automation (Python, GitHub, GitHub Actions)
  - JIRA & Confluence
  - Workflow orchestration (Airflow)
  - Message brokers (RabbitMQ)

Education: Bachelor's degree in computer science, engineering, or a similar quantitative field of study.

Why choose us?
- Bring the miracles of science to life alongside a supportive, future-focused team.
- Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or lateral move, at home or internationally.
- Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
- Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs, and at least 14 weeks' gender-neutral parental leave.
- Opportunity to work in an international environment, collaborating with diverse business teams and vendors, working in a dynamic team, and fully empowered to propose and implement innovative ideas.

Pursue Progress. Discover Extraordinary. Progress doesn't happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let's pursue progress. And let's discover extraordinary together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity Equity and Inclusion actions at sanofi.com!

Languages: English is a must.
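To illustrate the Snowflake query and optimization work this role centers on, here is a small hedged sketch using the snowflake-connector-python client. The account, credentials, warehouse, and the `orders` table are placeholders.

```python
# Minimal sketch: run an aggregation against Snowflake from Python.
# pip install snowflake-connector-python ; all connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",          # hypothetical account identifier
    user="PIPELINE_SVC",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute(
        "SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue "
        "FROM orders WHERE order_date >= DATEADD(day, -7, CURRENT_DATE) "
        "GROUP BY region ORDER BY revenue DESC"
    )
    for region, orders, revenue in cur.fetchall():
        print(region, orders, revenue)
finally:
    conn.close()
```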
Posted 2 weeks ago
8.0 - 13.0 years
0 - 3 Lacs
Pune
Hybrid
Must have:
- Strong communicator, from making presentations to technical writing.
- Experience in the financial domain, preferably in the areas of financial accounting and risk.
- Strong data analysis and validation experience, with attention to detail and an eye for patterns.
- Good understanding of credit risk, workflow, and reporting solutions.
- A flair for technology and adept at leveraging the latest tools and technologies to increase team productivity and collaboration across dispersed teams.
- Experience in Python, SQL, and PL/SQL (Oracle), and in writing queries and scripts.
- A real passion for and experience of Agile working practices, with a strong desire to work with baked-in quality practices such as TDD, BDD, test automation, and DevOps principles.
- Experience using DevOps toolsets like GitLab.
- Prior experience of working on databases and using SQL to perform data analysis, preferably on cloud technologies.
- Familiar with Azure native cloud services, software design, and enterprise integration patterns.
- Experience working with, or exposure to, NLP, Gen AI, ML, and data modelling projects is a plus.
- Prior experience of working on IT projects, with knowledge and experience of the software development life cycle using Agile methodology.
- Well structured, very reliable, and dedicated; high attention to detail, with the ability to handle a significant number of dependencies and issues, ensuring nothing is missed.
- Excellent written and verbal communication, interpersonal, and negotiation skills.
- Ability to work as part of a global team with multiple delivery teams and to engage with stakeholders at various levels.
- Able to challenge the status quo and, if required, push back on stakeholder demands with the right justification.
- A general ability to pick up information quickly and turn around deliverables under pressure.
- A team player with excellent people management skills.
- Takes ownership of assigned tasks through to resolution.
- Accuracy and timeliness in delivering solutions using the best IT standards and practices.
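As a hedged example of the data analysis and validation emphasis above, here is a small pandas sketch of the kind of reconciliation checks a credit-risk extract might receive. The file name, columns, and tolerance are invented for illustration only.

```python
# Minimal sketch: basic validation checks on a risk exposures extract.
# File name, columns, and thresholds are hypothetical.
import pandas as pd

df = pd.read_csv("exposures_extract.csv", parse_dates=["as_of_date"])

issues = []
if df["exposure_id"].duplicated().any():
    issues.append("duplicate exposure_id values")
if df["exposure_amount"].lt(0).any():
    issues.append("negative exposure amounts")
if df["counterparty_id"].isna().any():
    issues.append("missing counterparty_id")
# Reconcile against a control total supplied by the source system (assumed column).
if abs(df["exposure_amount"].sum() - df["control_total"].iloc[0]) > 0.01:
    issues.append("exposure total does not match control total")

print("OK" if not issues else "Validation failed: " + "; ".join(issues))
```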
Posted 2 weeks ago
6.0 - 8.0 years
8 - 12 Lacs
Hyderabad
Work from Office
What you will do
In this vital role, we are looking for a creative and technically skilled Senior Software Engineer / AI Engineer (Search) to help design and build cutting-edge AI-powered search solutions for the pharmaceutical industry. In this role, you'll develop intelligent systems that surface the most relevant insights from clinical trials, scientific literature, regulatory documents, and internal knowledge assets. Your work will empower researchers, clinicians, and decision-makers with faster, smarter access to the right information.

Responsibilities:
- Design and implement search algorithms using NLP, machine learning, semantic understanding, and deep learning models.
- Build and fine-tune models for information retrieval, query expansion, document ranking, summarization, and Q&A systems.
- Support integration of LLMs (e.g., GPT, BERT, BioBERT) for semantic and generative search.
- Train and evaluate custom models for biomedical named entity recognition, relevance ranking, and similarity search.
- Build and deploy vector-based search systems using embeddings and vector databases.
- Work closely with platform engineers to integrate AI models into scalable cloud-based infrastructures (AWS, Azure, GCP).
- Package and deploy search services using containerization (Docker, Kubernetes) and modern MLOps pipelines.
- Preprocess and structure unstructured content such as clinical trial reports, research articles, and regulatory documents.
- Apply knowledge graphs, taxonomies, and ontologies (e.g., MeSH, UMLS, SNOMED) to enhance search results.
- Build and deploy recommendation system models, utilize AI/ML infrastructure, and contribute to model optimization and data processing.
- Experience with generative AI on search engines.
- Experience integrating generative AI capabilities and vision models to enrich content quality and user engagement.
- Experience with generative AI tasks such as content summarization, deduplication, and metadata quality.
- Researching and developing advanced AI algorithms, including vision models for visual content analysis.
- Implementing KPI measurement frameworks to evaluate the quality and performance of delivered models, including those utilizing generative AI.
- Developing and maintaining deep learning models for data quality checks, visual similarity scoring, and content tagging.
- Continually researching current and emerging technologies and proposing changes where needed.
- Implement GenAI solutions, utilize ML infrastructure, and contribute to data preparation, optimization, and performance enhancements.

Basic Qualifications:
- Degree in computer science & engineering preferred, with 6-8 years of software development experience.
- 2-4 years of experience building AI/ML models, ideally in search, NLP, or biomedical domains.
- Proficiency in Python and frameworks such as PyTorch, TensorFlow, and Hugging Face Transformers.
- Experience with search technologies like Elasticsearch, OpenSearch, or vector search tools.
- Solid understanding of NLP techniques: embeddings, transformers, entity recognition, text classification.
- Hands-on experience with various AI models, GCP search engines, and GCP cloud services.
- Proficient in AI/ML programming with Python and Java, crawlers, JavaScript, SQL/NoSQL, Databricks/RDS, data engineering, S3 buckets, and DynamoDB.
- Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.

Preferred Qualifications:
- Experience in AI/ML, Java, Python.
- Experience with FastAPI and GraphQL.
- Experience with design patterns, data structures, data modelling, and data algorithms.
- Experience with the AWS/Azure platforms, building and deploying code.
- Experience with PostgreSQL/MongoDB, vector databases for large language models, Databricks or RDS, and S3 buckets.
- Knowledge of LLMs, generative AI, and their use in enterprise search.
- Experience with Google Cloud Search and Google Cloud Storage.
- Experience with popular large language models.
- Experience with the LangChain or LlamaIndex frameworks for language models.
- Experience with prompt engineering and model fine-tuning.
- Knowledge of NLP techniques for text analysis and sentiment analysis.
- Experience with Agile software development methodologies.
- Experience in end-to-end testing as part of test-driven development.

Good to Have Skills:
- Willingness to work on full-stack applications.
- Exposure to MLOps tools like MLflow, Airflow, or SageMaker.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, remote teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
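As a hedged illustration of the embedding-based retrieval described in this listing, here is a small sketch using sentence-transformers and cosine similarity in NumPy. The model name and documents are placeholders, and a production system would use a vector database rather than an in-memory matrix.

```python
# Minimal sketch: embed documents and run a cosine-similarity search.
# pip install sentence-transformers numpy ; model choice is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Phase III trial results for the oncology candidate",
    "Regulatory submission checklist for biologics",
    "Adverse event summary from post-market surveillance",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)      # unit vectors

def search(query: str, top_k: int = 2) -> list[tuple[float, str]]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                                     # cosine similarity
    best = np.argsort(scores)[::-1][:top_k]
    return [(float(scores[i]), docs[i]) for i in best]

print(search("clinical trial outcomes"))
```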
Posted 2 weeks ago
5.0 - 7.0 years
8 - 14 Lacs
Bengaluru
Work from Office
We are looking for an experienced SSAS Data Engineer with strong expertise in SSAS (Tabular and/or Multidimensional models), SQL, MDX/DAX, and data modeling. The ideal candidate will have a solid background in designing and developing BI solutions, working with large datasets, and building scalable SSAS cubes for reporting and analytics. Experience with ETL processes and reporting tools like Power BI is a strong plus.

Key Responsibilities:
- Design, develop, and maintain SSAS models (Tabular and/or Multidimensional).
- Build and optimize MDX or DAX queries for advanced reporting needs.
- Create and manage data models (star/snowflake schemas) supporting business KPIs.
- Develop and maintain ETL pipelines for efficient data ingestion (preferably using SSIS or similar tools).
- Implement KPIs, aggregations, partitioning, and performance tuning in SSAS cubes.
- Collaborate with data analysts, business stakeholders, and Power BI teams to deliver accurate and insightful reporting solutions.
- Maintain data quality and consistency across data sources and reporting layers.
- Implement RLS/OLS and manage report security and governance in SSAS and Power BI.

Required Skills (Primary):
- SSAS Tabular & Multidimensional.
- SQL Server (advanced SQL, views, joins, indexes).
- DAX & MDX.
- Data modeling & OLAP concepts.

Secondary:
- ETL tools (SSIS or equivalent).
- Power BI or similar BI/reporting tools.
- Performance tuning & troubleshooting in SSAS and SQL.
- Version control (TFS/Git), deployment best practices.
Posted 2 weeks ago
3.0 - 5.0 years
6 - 16 Lacs
Pune
Work from Office
Primary Job Responsibilities:
- Collaborate with team members to maintain, monitor, and improve data ingestion pipelines on the Data & AI platform.
- Attend the office 3 times a week for collaborative sessions and team alignment.
- Drive innovation in the ingestion and analytics domains to enhance performance and scalability.
- Work closely with the domain architect to implement and evolve data engineering strategies.

Required Skills:
- Minimum 5 years of experience in Python development focused on data engineering.
- Hands-on experience with Databricks and the Delta Lake format.
- Strong proficiency in SQL, data structures, and robust coding practices.
- Solid understanding of scalable data pipelines and performance optimization.

Preferred / Nice to Have:
- Familiarity with monitoring tools like Prometheus and Grafana.
- Experience using Copilot or AI-based tools for code enhancement and efficiency.
Posted 2 weeks ago
10.0 - 15.0 years
22 - 37 Lacs
Bengaluru
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a GCP Data Engineer at Kyndryl, you will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment using GCP data services. You will collaborate with global architects and business teams to design and deploy innovative solutions, supporting data analytics, automation, and transformation needs.

Responsibilities:
- Design, develop, and maintain scalable data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
- Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements.
- Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs.
- Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows.
- Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services.
- Develop and maintain Python/PySpark code for data processing and integrate with GCP services for seamless data operations.
- Develop and optimize SQL queries for data analysis and reporting.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Implement data governance and security best practices within GCP.
- Perform data quality checks and validation to ensure accuracy and consistency.
- Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines.
- Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management.
- Provide technical support and guidance to junior data engineers and other team members.
- Participate in code reviews and contribute to continuous improvement of data engineering practices.
- Implement best practices for cost management and resource utilization within GCP.

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset and are keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience:
- Bachelor's or master's degree in computer science, engineering, or a related field, with over 8 years of experience in data engineering.
- More than 3 years of experience with the GCP data ecosystem.
- Hands-on experience and strong proficiency in GCP components such as Dataflow, Dataproc, BigQuery, Cloud Functions, Composer, and Data Fusion.
- Excellent command of SQL, with the ability to write complex queries and perform advanced data transformation.
- Strong programming skills in PySpark and/or Python, specifically for building cloud-native data pipelines.
- Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc.
- Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP.
- Knowledge of data governance, security, and compliance best practices.
- Experience with private and public cloud architectures, their pros/cons, and migration considerations.
- Excellent problem-solving, analytical, and critical thinking skills.
- Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail.
- Communication skills: able to communicate with both technical and non-technical stakeholders and derive technical requirements with them.
- Ability to work independently and in agile teams.

Preferred Technical and Professional Experience:
- GCP Data Engineer Certification is highly preferred.
- Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization.
- Experience working as a Data Engineer and/or in cloud modernization.
- Knowledge of Databricks or Snowflake for data analytics.
- Experience with NoSQL databases.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Familiarity with BI dashboards and Google Data Studio is a plus.

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
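For a concrete flavor of the BigQuery-centric pipeline work this role describes, here is a brief hedged sketch using the google-cloud-bigquery client. The project, dataset, and table names are placeholders, and credentials are assumed to come from application-default authentication.

```python
# Minimal sketch: run an aggregation in BigQuery and print the result rows.
# pip install google-cloud-bigquery ; project and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

sql = """
    SELECT DATE(event_ts) AS day, COUNT(*) AS events
    FROM `example-project.raw.events`
    GROUP BY day
    ORDER BY day
"""

for row in client.query(sql).result():  # .result() waits for the query job to finish
    print(row.day, row.events)
```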
Posted 2 weeks ago
5.0 - 7.0 years
8 - 14 Lacs
Hyderabad
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required Skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana.
Posted 2 weeks ago
5.0 - 7.0 years
8 - 14 Lacs
Kanpur
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required Skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana.
Posted 2 weeks ago
7.0 - 12.0 years
0 - 1 Lacs
Gurugram
Remote
Job Title: Data Engineer with API Development
Experience: 7+ years
Location: Remote
Shift timing: 11am to 8:30pm
Work type: Contract

Requirements:
- 7+ years of overall IT experience.
- 3+ years of experience in Azure architecture/engineering (one or more of: Azure Functions, App Services, API development).
- 3+ years of development experience (e.g., Python, C#, Go, or other).
- 1+ years of CI/CD experience (GitHub preferred).
- 2+ years of API development experience (creating and consuming APIs).

Nice to Have:
- Service Bus
- Terraform
- ADF
- Databricks
- Spark
- Scala
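Given the Azure Functions and API development focus above, here is a hedged sketch of an HTTP-triggered function using the Python v2 programming model. The route, payload, and lookup logic are invented, and deployment details (function app configuration, bindings) are omitted.

```python
# Minimal sketch: HTTP-triggered Azure Function (Python v2 programming model).
# Route and response shape are hypothetical; real handlers would query a data store.
import json
import azure.functions as func

app = func.FunctionApp()

@app.route(route="orders/{order_id}", auth_level=func.AuthLevel.FUNCTION)
def get_order(req: func.HttpRequest) -> func.HttpResponse:
    order_id = req.route_params.get("order_id")
    # Placeholder payload; a real service would look the order up in a database or API.
    payload = {"order_id": order_id, "status": "PROCESSING"}
    return func.HttpResponse(json.dumps(payload), mimetype="application/json")
```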
Posted 2 weeks ago
3.0 - 6.0 years
0 - 3 Lacs
Pune, Mumbai (All Areas)
Hybrid
Company: Marsh McLennan Agency

Description: Marsh McLennan is seeking candidates for the following position based in the Pune/Mumbai office: Senior Engineer / Principal Engineer.

What can you expect?
We are seeking a skilled Data Engineer with 3 to 5 years of hands-on experience in building and optimizing data pipelines and architectures. The ideal candidate will have expertise in Spark, AWS Glue, AWS S3, Python, complex SQL, and AWS EMR.

What is in it for you?
- Holidays (as per the location)
- Medical & insurance benefits (as per the location)
- Shared transport (provided the address falls in the service zone)
- Hybrid way of working
- Diversify your experience and learn new skills
- Opportunity to work with stakeholders globally to learn and grow

We will count on you to:
- Design and implement scalable data solutions that support our data-driven decision-making processes.

What you need to have:
- SQL and RDBMS knowledge (5/5), Postgres; extensive hands-on experience with database systems carrying tables, schemas, views, and materialized views.
- AWS knowledge: core and data engineering services, with Glue, Lambda, EMR, DMS, and S3 in focus.
- ETL knowledge: any ETL tool, preferably Informatica.
- Data warehousing.
- Big data: Hadoop concepts, Spark (3/5), Hive (5/5), Python/Java.
- Interpersonal skills: excellent communication skills and team-lead capabilities.
- Good understanding of how data systems operate in large organizations.
- Passion for deep diving into data and delivering value from it.

What makes you stand out?
- Databricks knowledge.
- Any reporting tool experience, preferably MicroStrategy.
Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
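To ground the Glue/Spark/S3 requirements in this listing, here is a hedged skeleton of a minimal AWS Glue job. The bucket paths and transformation are placeholders, and the `awsglue` modules are only available inside the Glue runtime, so this is a sketch rather than a locally runnable script.

```python
# Minimal AWS Glue job sketch (runs inside the Glue Spark runtime).
# Bucket paths are hypothetical; JOB_NAME is supplied as a job argument.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

orders = spark.read.json("s3://example-raw/orders/")          # hypothetical source
cleaned = orders.dropDuplicates(["order_id"]).withColumn("load_date", F.current_date())
cleaned.write.mode("append").partitionBy("load_date").parquet("s3://example-curated/orders/")

job.commit()
```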
Posted 2 weeks ago
12.0 - 15.0 years
55 - 60 Lacs
Ahmedabad, Chennai, Bengaluru
Work from Office
Dear Candidate,

We are hiring a Data Platform Engineer to build and maintain scalable, secure, and reliable data infrastructure for analytics and real-time processing.

Key Responsibilities:
- Design and manage data pipelines, storage layers, and ingestion frameworks.
- Build platforms for batch and streaming data processing (Spark, Kafka, Flink).
- Optimize data systems for scalability, fault tolerance, and performance.
- Collaborate with data engineers, analysts, and DevOps to enable data access.
- Enforce data governance, access controls, and compliance standards.

Required Skills & Qualifications:
- Proficiency with distributed data systems (Hadoop, Spark, Kafka, Airflow).
- Strong SQL and experience with cloud data platforms (Snowflake, BigQuery, Redshift).
- Knowledge of data warehousing, lakehouse, and ETL/ELT pipelines.
- Experience with infrastructure as code and automation.
- Familiarity with data quality, security, and metadata management.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies
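As a hedged illustration of the batch-and-streaming platform work described above, here is a minimal Spark Structured Streaming sketch that reads events from Kafka and writes them to a Delta table. The broker address, topic, schema, and paths are placeholders, and the Kafka and Delta packages must be on the Spark classpath.

```python
# Minimal sketch: stream JSON events from Kafka into a Delta table.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType())
          .add("event_ts", TimestampType()))

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(stream.writeStream
       .format("delta")
       .option("checkpointLocation", "/checkpoints/events")   # enables fault tolerance
       .outputMode("append")
       .start("/lake/events"))
```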
Posted 2 weeks ago
5.0 - 7.0 years
8 - 14 Lacs
Jaipur
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required Skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana.
Posted 2 weeks ago
5.0 - 7.0 years
8 - 14 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required Skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana.
Posted 2 weeks ago
5.0 - 7.0 years
8 - 14 Lacs
Chennai
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required Skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana.
Posted 2 weeks ago
5.0 - 7.0 years
8 - 14 Lacs
Lucknow
Work from Office
We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required Skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms such as AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools such as Prometheus/Grafana.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 18 Lacs
Pune, Bengaluru
Work from Office
Key Responsibilities:
- Data Pipeline Development: Designing, building, and maintaining robust data pipelines to move data from various sources (e.g., databases, external APIs, logs) to centralized data systems, such as data lakes or warehouses.
- Data Integration: Integrating data from multiple sources and ensuring it's processed in a consistent, usable format. This involves transforming, cleaning, and validating data to meet the needs of products, analysts, and data scientists.
- Database Management: Creating, managing, and optimizing databases for storing large amounts of structured and unstructured data. Ensuring high availability, scalability, and security of data storage solutions.
- Performance Optimization: Identifying and resolving issues related to the speed and efficiency of data systems. This could include optimizing queries and storage systems, and improving the overall system architecture.
- Automation: Automating routine tasks, such as data extraction, transformation, and loading (ETL), to ensure smooth data flows with minimal manual intervention.
- Collaboration with Data Teams: Working closely with product managers, UX/UI designers, and other stakeholders to understand data requirements and ensure data is in the right format for analysis and modeling.
- Data Governance and Quality: Ensuring data integrity and compliance with data governance policies, including data quality standards, privacy regulations (e.g., GDPR), and security protocols.
- Monitoring and Troubleshooting: Continuously monitoring data pipelines and databases for any disruptions or errors, and troubleshooting any issues that arise to ensure continuous data flow.
- Tool and Technology Management: Staying up to date with emerging data tools, technologies, and best practices in order to improve data systems and infrastructure.
- Documentation and Reporting: Documenting data systems, pipeline processes, and data architectures, providing clear instructions for the team to follow, and ensuring that the architecture is understandable for stakeholders.

Required Skills & Experience:
- Knowledge of databases: relational DBs such as Postgres, as well as NoSQL systems such as MongoDB, and Kafka.
- Knowledge of big data systems such as Hadoop, Hive/Pig, Trino, etc.
- Experience in SQL-like query languages (SQL, MQL, HQL, etc.).
- Experience in building data pipelines.
- Experience with software lifecycle tools for CI/CD and version control systems such as Git.
- Familiarity with Agile methodologies is a plus.

General:
- Strong problem-solving skills and the ability to troubleshoot complex software issues.
- Familiarity with version control systems, particularly Git.
- Experience with Agile methodologies (e.g., Scrum, Kanban).
- Excellent communication skills, both verbal and written, with the ability to collaborate in a team environment.
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.

Preferred Qualifications:
- Experience working in GCP and familiarity with Kubernetes, BigQuery, GCS, and Airflow.
- Problem-Solving: Strong analytical and problem-solving skills.
- Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Team Player: Ability to work collaboratively in a team-oriented environment.
- Adaptability: Flexibility to adapt to changing business needs and priorities.
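As a hedged sketch of the pipeline automation and orchestration described above (Airflow appears among the preferred tools), here is a minimal Airflow 2 DAG with an extract step followed by a load step. The task bodies, DAG id, and schedule are placeholders.

```python
# Minimal Airflow DAG sketch: daily extract -> load, with retries.
# Airflow 2.4+ uses `schedule`; older 2.x versions use `schedule_interval`.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    print("pull data from the source system")   # placeholder logic

def load(**context):
    print("write data to the warehouse")        # placeholder logic

with DAG(
    dag_id="daily_ingestion",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load   # load runs only after extract succeeds
```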
Posted 2 weeks ago
0.0 - 4.0 years
9 - 13 Lacs
Pune
Work from Office
Project description
You will be working in a global team that manages and performs a global technical control. You'll be joining the Asset Management team, which looks after the asset management data foundation and operates a set of in-house developed tooling. As an IT engineer you'll play an important role in ensuring the development methodology is followed, and lead technical design discussions with the architects. Our culture centers around partnership with our businesses, transparency, accountability and empowerment, and passion for the future.
Responsibilities
- Design, develop, and maintain scalable data solutions using Starburst (see the illustrative sketch after this posting).
- Collaborate with cross-functional teams to integrate Starburst with existing data sources and tools.
- Optimize query performance and ensure data security and compliance.
- Implement monitoring and alerting systems for data platform health.
- Stay updated with the latest developments in data engineering and analytics.
Skills
Must have
- Bachelor's or Master's degree in a related technical field, or equivalent related professional experience.
- Prior experience as a Software Engineer applying new engineering principles to improve existing systems, including leading complex, well-defined projects.
- Strong knowledge of Big Data languages, including SQL, Hive, Spark/PySpark, Presto, and Python.
- Strong knowledge of Big Data platforms, such as the Apache Hadoop ecosystem, AWS EMR, Qubole, or Trino/Starburst.
- Good knowledge of and experience in cloud platforms such as AWS, GCP, or Azure.
- Continuous learner with the ability to apply previous experience and knowledge to quickly master new technologies.
- Demonstrates the ability to select among available technologies to implement and solve for a need.
- Able to understand and design moderately complex systems.
- Understanding of testing and monitoring tools.
- Ability to test, debug, and fix issues within established SLAs.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Understanding of data governance and compliance standards.
Nice to have
- Data Architecture & Engineering: Design and implement efficient and scalable data warehousing solutions using Azure Databricks and Microsoft Fabric.
- Business Intelligence & Data Visualization: Create insightful Power BI dashboards to help drive business decisions.
Other Languages
English: C1 Advanced
Seniority
Senior
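As a hedged illustration of the Starburst work described above, the sketch below assumes the open-source trino Python client; the host, catalog, schema, and table names are invented placeholders rather than details of the actual platform.

    # Illustrative sketch: querying a Starburst/Trino cluster from Python.
    import trino

    conn = trino.dbapi.connect(
        host="starburst.internal",   # placeholder host
        port=8080,
        user="data_engineer",
        catalog="hive",              # placeholder catalog
        schema="analytics",          # placeholder schema
    )
    cur = conn.cursor()
    cur.execute(
        """
        SELECT trade_date, count(*) AS trades
        FROM trades
        WHERE trade_date >= date '2024-01-01'
        GROUP BY trade_date
        ORDER BY trade_date
        """
    )
    for trade_date, trades in cur.fetchall():
        print(trade_date, trades)

The same federated query could target any catalog the cluster exposes (Hive, Iceberg, relational sources), which is the main reason teams put Trino/Starburst in front of mixed data estates.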
Posted 2 weeks ago
5.0 - 8.0 years
11 - 15 Lacs
Pune
Work from Office
Project description
Are you passionate about leveraging the latest technologies for strategic change? Do you enjoy problem solving in clever ways? Are you organized enough to drive change across complex data systems? If so, you could be the right person for this role. As an experienced data engineer, you will join a global data analytics team in our Group Chief Technology Officer / Enterprise Architecture organization supporting our strategic initiatives, which range from portfolio health to integration.
Responsibilities
- Help the Group Enterprise Architecture team develop our suite of EA tools and workbenches
- Work in the development team to support the development of portfolio health insights
- Build data applications from the cloud infrastructure to the visualization layer
- Produce clear and commented code
- Produce clear and comprehensive documentation
- Play an active role with technology support teams and ensure deliverables are completed or escalated on time
- Provide support for any related presentations, communications, and trainings
- Be a team player, working across the organization with skills to indirectly manage and influence
- Be a self-starter willing to inform and educate others
Skills
Must have
- B.Sc./M.Sc. degree in computing or similar
- 5-8+ years' experience as a Data Engineer, ideally in a large corporate environment
- In-depth knowledge of SQL and data modelling/data processing
- Strong experience working with Microsoft Azure
- Experience with visualisation tools like Power BI (or Tableau, QlikView, or similar)
- Experience working with Git, JIRA, GitLab
- Strong flair for data analytics
- Strong flair for IT architecture and IT architecture metrics
- Excellent stakeholder interaction and communication skills
- Understanding of performance implications when making design decisions to deliver performant and maintainable software
- Excellent end-to-end SDLC process understanding
- Proven track record of delivering complex data apps on tight timelines
- Fluent in English, both written and spoken
- Passionate about development with a focus on data and cloud
- Analytical and logical, with strong problem-solving skills
- A team player, comfortable with taking the lead on complex tasks
- An excellent communicator who is adept at handling ambiguity and communicating with both technical and non-technical audiences
- Comfortable with working in cross-functional global teams to effect change
- Passionate about learning and developing your hard and soft professional skills
Nice to have
- Experience working in the financial industry
- Experience in complex metrics design and reporting
- Experience in using artificial intelligence for data analytics
Other Languages
English: C1 Advanced
Seniority
Senior
Posted 2 weeks ago
5.0 - 10.0 years
11 - 15 Lacs
Pune
Work from Office
Project description
You'll be working in the GM Business Analytics team located in Pune. The successful candidate will be a member of the global Distribution team, which has team members in London and Pune. We work as part of a global team providing analytical solutions for IB distribution/sales people. Solutions deployed should be extensible globally with minimal localization.
Responsibilities
Are you passionate about data and analytics? Are you keen to be part of the journey to modernize a data warehouse/analytics suite of applications? Do you take pride in the quality of software delivered for each development iteration? We're looking for someone like that to join us and be a part of a high-performing team on a high-profile project.
You will:
- solve challenging problems in an elegant way
- master state-of-the-art technologies
- build a highly responsive and fast-updating application in an Agile & Lean environment
- apply best development practices and effectively utilize technologies
- work across the full delivery cycle to ensure high-quality delivery
- write high-quality code and adhere to coding standards
- work collaboratively with diverse team(s) of technologists
You are:
- Curious and collaborative, comfortable working independently as well as in a team
- Focused on delivery to the business
- Strong in analytical skills. For example, the candidate must understand the key dependencies among existing systems in terms of the flow of data among them. It is essential that the candidate learns to understand the 'big picture' of how the IB industry/business functions.
- Able to quickly absorb new terminology and business requirements
- Already strong in analytical tools, technologies, platforms, etc. The candidate must also demonstrate a strong desire for learning and self-improvement.
- Open to learning home-grown technologies, supporting current-state infrastructure, and helping drive future-state migrations
- Imaginative and creative with newer technologies
- Able to accurately and pragmatically estimate the development effort required for specific objectives
You will have the opportunity to work under minimal supervision to understand local and global system requirements, and design and implement the required functionality, bug fixes, and enhancements. You will be responsible for components that are developed across the whole team and deployed globally. You will also have the opportunity to provide third-line support to the application's global user community, which will include assisting dedicated support staff and liaising with the members of other development teams directly, some of which will be local and some remote.
Skills
Must have
- A bachelor's or master's degree, preferably in Information Technology or a related field (computer science, mathematics, etc.), focusing on data engineering.
- 5+ years of relevant experience as a data engineer in Big Data is required.
- Strong knowledge of programming languages (Python/Scala) and Big Data technologies (Spark, Databricks, or equivalent) is required.
- Strong experience in executing complex data analysis and running complex SQL/Spark queries.
- Strong experience in building complex data transformations in SQL/Spark (see the illustrative sketch after this posting).
- Strong knowledge of database technologies is required.
- Strong knowledge of Azure Cloud is advantageous.
- Good understanding of and experience with Agile methodologies and delivery.
- Strong communication skills with the ability to build partnerships with stakeholders.
- Strong analytical, data management, and problem-solving skills.
Nice to have
- Experience working with the QlikView tool
- Understanding of QlikView scripting and data models
Other Languages
English: C1 Advanced
Seniority
Senior
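As a hedged illustration of the "complex data transformations in SQL/Spark" mentioned above, the sketch below assumes a local PySpark session; the dataset, column names, and the running-total metric are invented placeholders, not project specifics.

    # Illustrative PySpark transformation: a per-desk running total using a window.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.appName("transform-sketch").getOrCreate()

    sales = spark.createDataFrame(
        [
            ("desk_a", "2024-01-01", 100.0),
            ("desk_a", "2024-01-02", 150.0),
            ("desk_b", "2024-01-01", 80.0),
        ],
        ["desk", "business_date", "notional"],
    )

    window = (
        Window.partitionBy("desk")
        .orderBy("business_date")
        .rowsBetween(Window.unboundedPreceding, Window.currentRow)
    )

    result = sales.withColumn("running_notional", F.sum("notional").over(window))
    result.show()

The same logic could equally be expressed in Spark SQL with SUM(notional) OVER (PARTITION BY desk ORDER BY business_date).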
Posted 2 weeks ago
0.0 - 4.0 years
5 - 10 Lacs
Mumbai
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.
The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips.
As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation.
In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation.
Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset – a true data alchemist.
Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful.
So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.
Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
• Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes (a minimal sketch follows this posting)
• Experience in data pipeline development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc
• Experience with both relational and NoSQL databases: PostgreSQL, DB2, MongoDB
• Excellent problem-solving, analytical, and critical thinking skills
• Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail
• Communication skills: must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems
Preferred Skills and Experience
• Experience working as a Data Engineer and/or in cloud modernization
• Experience in data modelling, to create a conceptual model of how data is connected and how it will be used in business processes
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
• Cloud platform certification, e.g., AWS Certified Data Analytics – Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate
• Experience working with Kafka, ElasticSearch, Kibana and maintaining a data lake
• Managing interfaces and monitoring for production deployment, including log shipping tools
• Experience in updates, upgrades, patches, VA closure, and support with industry-best tools
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology
Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.
What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.
Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
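As a hedged illustration of the ETL work listed above, here is a minimal Python sketch using pandas and SQLAlchemy; the connection strings, table names, and cleansing rules are invented placeholders, not part of the Kyndryl role description.

    # Illustrative ETL step: extract from a source PostgreSQL database,
    # apply basic cleansing, and load into a warehouse table.
    import pandas as pd
    from sqlalchemy import create_engine

    source = create_engine("postgresql+psycopg2://user:pass@source-db:5432/app")     # placeholder
    warehouse = create_engine("postgresql+psycopg2://user:pass@warehouse:5432/dw")   # placeholder

    # Extract
    orders = pd.read_sql("SELECT order_id, customer_id, amount, created_at FROM orders", source)

    # Transform: drop incomplete rows and derive a load-friendly date column
    orders = orders.dropna(subset=["customer_id"])
    orders["created_date"] = pd.to_datetime(orders["created_at"]).dt.date

    # Load
    orders.to_sql("fact_orders", warehouse, if_exists="append", index=False)

In a production setting the same steps would typically run inside one of the managed tools the posting names (Glue, Databricks, Synapse, or Dataproc) rather than as a standalone script.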
Posted 2 weeks ago