5.0 - 8.0 years
12 - 18 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Warm welcome from SP Staffing Services! We are reaching out regarding a permanent opportunity.

Job Description:
Experience: 5-8 yrs
Location: Gurugram/Bangalore
Skill: Azure Data Engineer

Interested candidates can share their resume to sangeetha.spstaffing@gmail.com with the details below inline:
- Full Name as per PAN:
- Mobile No:
- Alt No/WhatsApp No:
- Total Exp:
- Relevant Exp in Databricks:
- Relevant Exp in PySpark:
- Relevant Exp in DWH:
- Relevant Exp in Python:
- Current CTC:
- Expected CTC:
- Notice Period (Official):
- Notice Period (Negotiable)/Reason:
- Date of Birth:
- PAN Number:
- Reason for Job Change:
- Offer in Pipeline (Current Status):
- Availability for virtual interview on weekdays between 10 AM-4 PM (please mention a time):
- Current Residential Location:
- Preferred Job Location:
- Is your educational percentage in 10th std, 12th std, and UG all above 50%?
- Do you have any gaps in your education or career? If so, please mention the duration in months/years:
Posted 4 weeks ago
5.0 - 7.0 years
10 - 17 Lacs
Chennai
Work from Office
Key Responsibilities:
- Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB, etc.).
- Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
- Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg.
- Integrate data governance, quality, and security best practices into all architecture designs.
- Support analytics and machine learning initiatives through structured data pipelines and platforms.
- Develop and maintain high-performance REST APIs using FastAPI or Flask.
- Ensure data integrity, quality, and availability across various sources.
- Integrate data workflows with application components to support real-time or scheduled processes.
- Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
- Drive CI/CD integration with Databricks using Azure DevOps and tools like DBT.
- Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
- Communicate technical concepts effectively to non-technical stakeholders.

Required Skills & Experience:
- Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
- Expertise in data modeling and design (relational, dimensional, NoSQL).
- Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
- Solid experience with SQL Server or any major RDBMS; ability to write complex queries and stored procedures.
- 3+ years of experience with Azure Data Factory, Azure Databricks, and PySpark.
- Strong programming skills in Python, with a solid understanding of Pandas and NumPy.
- Proven experience in building REST APIs.
- Good knowledge of data formats (JSON, Parquet, Avro) and API communication patterns.
- Strong knowledge of data governance, security, and compliance frameworks.
- Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
- Familiarity with BI and analytics tools such as Power BI or Tableau.
- Strong problem-solving skills and attention to performance, scalability, and security.
- Excellent written and verbal communication skills.

Preferred Qualifications:
- Experience in regulated industries (finance, healthcare, etc.).
- Familiarity with data cataloging, metadata management, and machine learning integration.
- Leadership experience guiding teams and presenting architectural strategies to leadership.
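The data-quality and JSON-ingestion responsibilities above can be sketched with a minimal, illustrative pipeline step. This is not the employer's actual code; the field names (`id`, `amount`) and the cleaning rules are hypothetical, chosen only to show the shape of a validation stage:

```python
import json

def clean_record(raw):
    """Parse one raw JSON event; drop records that are malformed
    or missing the required fields (hypothetical schema)."""
    try:
        rec = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not all(k in rec for k in ("id", "amount")):
        return None
    # Normalize the amount to a float rounded to 2 decimal places.
    rec["amount"] = round(float(rec["amount"]), 2)
    return rec

events = ['{"id": 1, "amount": "10.50"}', '{"id": 2}', 'not json']
cleaned = [r for r in map(clean_record, events) if r is not None]
```

In a real Azure pipeline, a step like this would typically run inside a Databricks notebook or an Azure Function, with the rejected records routed to a quarantine location rather than silently dropped.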
Posted 4 weeks ago
3.0 - 5.0 years
7 - 12 Lacs
Pune
Work from Office
Role & Responsibilities

Proficient:
- Languages/Frameworks: FastAPI, Azure UI Search API (React)
- Databases and ETL: Cosmos DB (API for MongoDB), Data Factory, Databricks
- Proficiency in Python and R
- Cloud: Azure Cloud basics (Azure DevOps)
- GitLab: GitLab pipelines
- Ansible and REX: REX deployment
- Data Science: prompt engineering and modern testing; data mining and cleaning; ML (supervised/unsupervised learning); NLP techniques; knowledge of deep learning techniques including RNNs and transformers; end-to-end AI solution delivery; AI integration and deployment; AI frameworks (PyTorch); MLOps frameworks; model deployment processes; data pipeline monitoring

Expert (in addition to proficient skills):
- Languages/Frameworks: Azure OpenAI
- Data Science: OpenAI GPT family of models (4o/4/3), embeddings and vector search
- Databases and ETL: Azure Storage Account
- Expertise in machine learning algorithms (supervised, unsupervised, reinforcement learning)
- Proficiency in deep learning frameworks (TensorFlow, PyTorch)
- Strong mathematical foundation (linear algebra, calculus, probability, statistics)
- Research methodology and experimental design
- Proficiency in data analysis tools (Pandas, NumPy, SQL)
- Strong statistical and probabilistic modelling skills
- Data visualization skills (Matplotlib, Seaborn, Tableau)
- Knowledge of big data technologies (Spark, Hive)
- Experience with AI-driven analytics and decision-making systems

Note: Notice period should not be more than 10-15 days.
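The "embeddings and vector search" skill mentioned above boils down to ranking stored vectors by similarity to a query vector. A minimal, dependency-free sketch (the document IDs and two-dimensional vectors are invented; production systems would use real embedding models and a vector index):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Return the k document IDs whose stored embeddings are
    most similar to the query vector."""
    scored = sorted(corpus.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

corpus = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.7, 0.7]}
result = top_k([1.0, 0.1], corpus, k=2)
```

Services such as Azure AI Search perform the same ranking at scale, with approximate-nearest-neighbor indexes replacing the brute-force sort shown here.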
Posted 4 weeks ago
4.0 - 9.0 years
6 - 10 Lacs
Kolkata
Work from Office
Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role
- Develop and maintain data pipelines tailored to Azure environments, ensuring security and compliance with client data standards.
- Collaborate with cross-functional teams to gather data requirements, translate them into technical specifications, and develop data models.
- Leverage Python libraries for data handling, enhancing processing efficiency and robustness.
- Ensure SQL workflows meet client performance standards and handle large data volumes effectively.
- Build and maintain reliable ETL pipelines, supporting full and incremental loads and ensuring data integrity and scalability in ETL processes.
- Implement CI/CD pipelines for automated deployment and testing of data solutions.
- Optimize and tune data workflows and processes to ensure high performance and reliability.
- Monitor, troubleshoot, and optimize data processes for performance and reliability.
- Document data infrastructure and workflows, and maintain industry knowledge in data engineering and cloud technology.

Your Profile
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 4+ years of data engineering experience with a strong focus on Azure data services for client-centric solutions.
- Extensive expertise in Azure Synapse, Data Lake Storage, Data Factory, Databricks, and Blob Storage, ensuring secure, compliant data handling for clients.
- Good interpersonal communication skills.
- Skilled in designing and maintaining scalable data pipelines tailored to client needs in Azure environments.
- Proficient in SQL and PL/SQL for complex data processing and client-specific analytics.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 1 month ago
5.0 - 8.0 years
3 - 7 Lacs
Kolkata
Work from Office
- Execute high-quality visual designs for various documentation projects in PowerPoint and Word.
- Collaborate with senior designers, team members, and other streams to understand design requirements and contribute effectively to design projects.
- Stay up to date with industry trends, software updates, and new technologies to improve your efficiency and productivity.
- Adapt designs based on feedback from peers, senior designers, and stakeholders to refine and enhance the final product.
- Assist in the preparation of design presentations and Word documents to communicate ideas effectively to clients or internal stakeholders.
- Ensure that all visual designs are of high quality and adhere to design standards and guidelines.
- Maintain a high level of deliverables in accordance with directives from the stream lead or design lead.
- Prioritize tasks and manage your workload efficiently to meet project deadlines while maintaining design quality.

Primary Skills:
- Ability to manage multiple projects simultaneously under tight deadlines.
- Microsoft Word and PowerPoint proficiency is preferred; working knowledge of MS Excel, Photoshop, Illustrator, and InDesign is also beneficial.

Secondary Skills:
- Experience working with advertising agencies or branding companies.
- Good communication skills.
Posted 1 month ago
6.0 - 11.0 years
8 - 16 Lacs
Hyderabad, Pune, Chennai
Hybrid
Data Engineer with good experience in Azure Databricks and Python.
Must have: Databricks, Python, Azure.
Good to have: ADF.
The candidate must be proficient in Databricks.
Posted 1 month ago
3.0 years
5 - 25 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of Star and Snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience in building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
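The ETL pipeline work described above usually distinguishes full loads from incremental loads driven by a high-water mark. A minimal sketch using Python's built-in sqlite3 in place of a real warehouse (the table names, columns, and watermark column are hypothetical):

```python
import sqlite3

# In-memory stand-in for a source system and a warehouse target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER, updated_at TEXT)")
conn.execute("CREATE TABLE tgt (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO src VALUES (?, ?)",
                 [(1, "2024-01-01"), (2, "2024-01-02"), (3, "2024-01-03")])

def incremental_load(conn, watermark):
    """Copy only source rows newer than the last high-water mark,
    and return the new watermark for the next run."""
    rows = conn.execute(
        "SELECT id, updated_at FROM src WHERE updated_at > ?",
        (watermark,),
    ).fetchall()
    conn.executemany("INSERT INTO tgt VALUES (?, ?)", rows)
    return max((r[1] for r in rows), default=watermark)

new_mark = incremental_load(conn, "2024-01-01")
loaded = conn.execute("SELECT COUNT(*) FROM tgt").fetchone()[0]
```

In Snowflake or Databricks the same pattern is typically expressed as a MERGE against a watermark stored in a control table, with Airflow passing the watermark between runs.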
Posted 1 month ago
3.0 years
5 - 25 Lacs
Gurugram, Haryana, India
On-site
Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of Star and Snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience in building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
Posted 1 month ago
3.0 years
5 - 25 Lacs
Greater Kolkata Area
On-site
Location: Bangalore, Pune, Chennai, Kolkata, Gurugram
Experience: 5-15 Years
Work Mode: Hybrid
Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of Star and Snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience in building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
Posted 1 month ago
1.0 - 4.0 years
1 - 3 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Cloud Data Engineer

Job Title: Cloud Data Engineer
Location: Chennai, Hyderabad, Bangalore
Experience: 1-4 years

Job Summary
The Cloud Data Engineer designs and builds scalable data pipelines and architectures in cloud environments. This role supports analytics, machine learning, and business intelligence initiatives by ensuring reliable data flow and transformation.

Key Responsibilities
- Develop and maintain ETL/ELT pipelines using cloud-native tools.
- Design data models and storage solutions optimized for performance and scalability.
- Integrate data from various sources (APIs, databases, streaming platforms).
- Ensure data quality, consistency, and security across pipelines.
- Collaborate with data scientists, analysts, and business teams.
- Monitor and troubleshoot data workflows and infrastructure.
- Automate data engineering tasks using scripting and orchestration tools.

Required Skills
- Experience with cloud data platforms (AWS Glue, Azure Data Factory, Google Cloud Dataflow).
- Proficiency in SQL and programming languages (Python, Scala, Java).
- Knowledge of big data technologies (Spark, Hadoop, Kafka).
- Familiarity with data warehousing solutions (Redshift, BigQuery, Snowflake).
- Understanding of data governance, privacy, and compliance standards.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 3+ years of experience in data engineering, preferably in cloud environments.
- Certifications in cloud data engineering (e.g., Google Professional Data Engineer, AWS Data Analytics Specialty) are a plus.
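"Ensure data quality, consistency, and security across pipelines" typically means automated checks at each pipeline stage. A minimal, illustrative sketch in plain Python (the column names and check thresholds are hypothetical; real pipelines would use a framework or warehouse-side SQL checks):

```python
def quality_report(rows, required):
    """Count rows missing required columns and rows with a
    duplicate primary key (assumed here to be 'id')."""
    missing = sum(1 for r in rows if any(r.get(c) is None for c in required))
    seen, dupes = set(), 0
    for r in rows:
        key = r.get("id")
        if key in seen:
            dupes += 1
        seen.add(key)
    return {"missing_required": missing, "duplicate_ids": dupes}

rows = [
    {"id": 1, "name": "a"},
    {"id": 1, "name": "b"},   # duplicate primary key
    {"id": 2, "name": None},  # missing required field
]
report = quality_report(rows, required=("name",))
```

A pipeline would normally fail or quarantine a batch when such counts exceed an agreed threshold, rather than loading it silently.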
Posted 1 month ago
12.0 years
10 - 45 Lacs
Pune, Maharashtra, India
On-site
Location: Pune
Experience: 8-12 Years
Work Mode: Hybrid
Mandatory Skills: Python, PySpark, SQL, ETL, data pipelines, Azure Databricks, Azure Data Factory, Azure Synapse, Airflow, and architecture design.
Note: Interview mode is face-to-face.

Overview
We are seeking a skilled and motivated Data Engineer with experience in Python, SQL, Azure, and cloud-based technologies to join our dynamic team. The ideal candidate will have a solid background in building and optimizing data pipelines, working with cloud platforms, and leveraging modern data engineering tools like Airflow, PySpark, and Azure data engineering services. If you are passionate about data and looking for an opportunity to work on cutting-edge technologies, this role is for you!

Primary Roles and Responsibilities:
- Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with the client architect and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications:
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Deep understanding of Star and Snowflake dimensional modelling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python and Spark (PySpark).
- Must have experience with the AWS/Azure stack.
- Desirable to have ETL experience with batch and streaming (Kinesis).
- Experience in building ETL / data warehouse transformation processes.
- Experience with Apache Kafka for streaming / event-based data.
- Experience with other open-source big data products, e.g. Hadoop (incl. Hive, Pig, Impala).
- Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.
Posted 1 month ago
7.0 - 12.0 years
8 - 18 Lacs
Navi Mumbai, Pune
Work from Office
Interested candidates, kindly submit the form below: https://forms.gle/ex1M5oa3qagMUcrn6

Job Description:
We are looking for an experienced Team Lead - Data Warehouse Migration, Data Engineering & BI to lead enterprise-level data transformation initiatives. The ideal candidate will have deep expertise in migration, Snowflake, Power BI and end-to-end data engineering using tools like Azure Data Factory, Databricks, and PySpark.

Key Responsibilities:
- Lead and manage data warehouse migration projects, including extraction, transformation, and loading (ETL/ELT) across legacy and modern platforms.
- Architect and implement scalable Snowflake data warehousing solutions for analytics and reporting.
- Develop and schedule robust data pipelines using Azure Data Factory and Databricks.
- Write efficient and maintainable PySpark code for batch and real-time data processing.
- Design and develop dashboards and reports using Power BI to support business insights.
- Ensure data accuracy, security, and consistency throughout the project lifecycle.
- Collaborate with stakeholders to understand data and reporting requirements.
- Mentor and lead a team of data engineers and BI developers.
- Manage project timelines, deliverables, and team performance effectively.

Must-Have Skills:
- Data Migration: hands-on experience with large-scale data migration, reconciliation, and transformation.
- Snowflake: data modeling, performance tuning, ELT/ETL development, role-based access control.
- Azure Data Factory: pipeline development, integration services, linked services.
- Databricks: Spark SQL, notebooks, cluster management, orchestration.
- PySpark: advanced transformations, error handling, and optimization techniques.
- Power BI: data visualization, DAX, Power Query, dashboard/report publishing and maintenance.

Preferred Skills:
- Familiarity with Agile methodologies and sprint-based development.
- Experience working with CI/CD for data workflows.
- Ability to lead client discussions and manage stakeholder expectations.
- Strong analytical and problem-solving abilities.
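The "reconciliation" skill listed above amounts to proving that migrated data matches the source. One common, lightweight technique is comparing order-insensitive fingerprints of both tables; a hypothetical stdlib-only sketch (real migrations would also compare row counts, per-column aggregates, and sampled rows):

```python
import hashlib
import json

def fingerprint(rows):
    """Order-insensitive digest of a table: canonicalize each row as
    sorted-key JSON, sort the rows, then hash the whole set."""
    canon = sorted(json.dumps(r, sort_keys=True) for r in rows)
    return hashlib.sha256("\n".join(canon).encode()).hexdigest()

source = [{"id": 1, "v": 10}, {"id": 2, "v": 20}]
migrated = [{"id": 2, "v": 20}, {"id": 1, "v": 10}]  # same data, different order
drifted = [{"id": 1, "v": 10}, {"id": 2, "v": 21}]   # one value changed

match = fingerprint(source) == fingerprint(migrated)
drift = fingerprint(source) == fingerprint(drifted)
```

Against Snowflake, the equivalent check is usually done warehouse-side (e.g. hashing and aggregating rows in SQL) so the data never has to leave the platform.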
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Hyderabad
Work from Office
Lead Data Engineer - Data Management

Company Overview
Accordion works at the intersection of sponsors and management teams throughout every stage of the investment lifecycle, providing hands-on, execution-focused support to elevate data and analytics capabilities. So, what does it mean to work at Accordion? It means joining 1,000+ analytics, data science, finance & technology experts in a high-growth, agile, and entrepreneurial environment while transforming how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Headquartered in New York City with 10 offices worldwide, Accordion invites you to join our journey.

Data & Analytics (Accordion | Data & Analytics)
Accordion's Data & Analytics (D&A) team delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their portfolio companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. The D&A team delivers data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Location: Hyderabad

Role Overview:
Accordion is looking for a Lead Data Engineer, who will be responsible for the design, development, configuration/deployment, and maintenance of the technology stack described above. They must have an in-depth understanding of the various tools and technologies in this domain to design and implement robust and scalable solutions that address clients' current and future requirements at optimal cost. The Lead Data Engineer should be able to evaluate existing architectures and recommend ways to upgrade and improve their performance, for both on-premises and cloud-based solutions. A successful Lead Data Engineer possesses strong working business knowledge and familiarity with multiple tools and techniques, along with industry standards and best practices in Business Intelligence and Data Warehousing environments, plus strong organizational, critical thinking, and communication skills.

What you will do:
- Partner with clients to understand their business and create comprehensive business requirements.
- Develop an end-to-end Business Intelligence framework based on requirements, including recommending an appropriate architecture (on-premises or cloud), analytics and reporting.
- Work closely with the business and technology teams to guide solution development and implementation.
- Work closely with the business teams to arrive at methodologies to develop KPIs and metrics.
- Work with the Project Manager in developing and executing project plans within the assigned schedule and timeline.
- Develop standard reports and functional dashboards based on business requirements.
- Conduct training programs and knowledge transfer sessions for junior developers when needed.
- Recommend improvements to provide optimal reporting solutions.
- Stay curious about new tools and technologies to provide forward-looking solutions for clients.

Ideally, you have:
- An undergraduate degree (B.E./B.Tech.); tier-1/tier-2 colleges preferred.
- More than 5 years of experience in a related field.
- Proven expertise in SSIS, SSAS and SSRS (the MSBI suite).
- In-depth knowledge of databases (SQL Server, MySQL, Oracle, etc.) and data warehouses (any one of Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.).
- In-depth knowledge of business intelligence tools (any one of Power BI, Tableau, Qlik, DOMO, Looker, etc.).
- Good understanding of Azure or AWS: Azure (Data Factory & Pipelines, SQL Database & Managed Instances, DevOps, Logic Apps, Analysis Services) or AWS (Glue, Aurora, DynamoDB, Redshift, QuickSight).
- Proven ability to take initiative and be innovative.
- An analytical mind with a problem-solving attitude.

Why explore a career at Accordion:
- High-growth environment: semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross-domain exposure: interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: a non-bureaucratic, fun working environment and a strong peer group that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
- Health and wellness programs, including employee health insurance covering immediate family members and parents, term life insurance, free health camps, discounted health services (including vision and dental) for employees and family members, free doctor consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings and celebrations.
- Cab reimbursement for women employees beyond a certain time of day.
- Robust leave policy to support work-life balance, including a specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Posted 1 month ago
8.0 - 13.0 years
2 - 30 Lacs
Gurugram
Work from Office
About The Role
We are seeking a highly skilled Senior Data Engineer with 8 years of experience to join our dynamic team.

Requirements
- Experienced in architecting, building and maintaining end-to-end data pipelines using Python and Spark in Databricks.
- Proficient in designing and implementing scalable data lake and data warehouse solutions on Azure, including Azure Data Lake, Data Factory, Synapse and Azure SQL.
- Hands-on experience leading the integration of complex data sources and the development of efficient ETL processes.
- Champions best practices in data governance, data quality and data security across the organization.
- Adept at collaborating closely with data scientists, analysts and business stakeholders to deliver high-impact data solutions.
Posted 1 month ago
5.0 - 10.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate Profile:
Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.
Posted 1 month ago
8.0 - 13.0 years
20 - 25 Lacs
Pune
Remote
Design databases and data warehouses, build Power BI solutions, support enterprise business intelligence, be a strong team player and contributor, and drive continuous improvement. Experience in SQL, Oracle, SSIS/SSRS, Azure, ADF, CI/CD, Power BI, DAX, and Microsoft Fabric.
Required Candidate profile: Source system data structures, data extraction, data transformation, warehousing, DB administration, query development, and Power BI. WORK FROM HOME.
Posted 1 month ago
12.0 - 15.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Key Responsibilities: Own, manage, and prioritize requirements across the product life cycle from definition to phase-out. Define platform requirements for native, on-premise, and cloud deployments. Provide clear direction, context, and priorities to development teams. Collaborate closely with key internal stakeholders and engage with external stakeholders.
Focus Areas:
Must - Healthcare market: product know-how and customer understanding.
Must - Sound knowledge of clinical workflows and healthcare IT, especially in radiology.
Must - Healthcare industry standards such as DICOM and IHE.
Must - Good understanding of software systems categorized as medical devices.
Must - Basic understanding of legal regulations and standards applicable to medical devices and affecting safety aspects (e.g., FDA 21 CFR 820 QSR, ISO 13485).
Must - Platform Scalability & Modernization: Enable a flexible architecture supporting hybrid cloud, containerization, and orchestration (e.g., Kubernetes).
Must - Azure Expertise: Deep knowledge of Azure services (Data Lake Storage, SQL, Data Factory, Synapse) and cloud cost management.
Must - Data Lake Architecture: Proficient in data ingestion, storage formats (Parquet, Delta Lake), and multi-zone design (raw, curated, analytics).
Nice to have - SQL & Databases: Strong SQL skills with experience in database design, optimization, and complex queries.
Nice to have - Qlik BI Tools: Skilled in Qlik Sense/QlikView for data modeling, transformation, and dashboard/report development.
Nice to have - Exposure to agile methodology.
What are my tasks? Gather, prioritize, create, and communicate stakeholder and market requirements and software specifications. Guide and support development teams, resolving conflicts and answering questions. Manage all Agile methodology practices related to requirements engineering and product definition. Provide input to project management and support rollout activities such as training, presentations, and workshops.
What do I need to know to qualify for this job? Qualification: A Bachelor's/Master's degree in Engineering and/or MCA or equivalent. Work Experience: 12 to 15 years.
Posted 1 month ago
8.0 - 13.0 years
5 - 9 Lacs
Hyderabad
Work from Office
1. Data Engineer - Azure Data Services
2. Data Modelling - NoSQL and SQL
3. Good understanding of Spark and Spark Streaming
4. Hands-on with Python / Pandas / Data Factory / Cosmos DB / Databricks / Event Hubs / Stream Analytics
5. Knowledge of medallion architecture, data vaults, data marts, etc.
6. Preferably Azure Data Associate exam certified.
Posted 1 month ago
12.0 - 17.0 years
13 - 18 Lacs
Hyderabad
Work from Office
1. Data Engineer - Azure Data Services
2. Data Modelling - NoSQL and SQL
3. Good understanding of Spark and Spark Streaming
4. Hands-on with Python / Pandas / Data Factory / Cosmos DB / Databricks / Event Hubs / Stream Analytics
5. Knowledge of medallion architecture, data vaults, data marts, etc.
6. Preferably Azure Data Associate exam certified.
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Data Modeller JD: We are seeking a skilled Data Modeller to join our Corporate Banking team. The ideal candidate will have a strong background in creating data models for various banking services, including Current Account Savings Account (CASA), Loans, and Credit Services. This role involves collaborating with the Data Architect to define data model structures within a data mesh environment and coordinating with multiple departments to ensure cohesive data management practices.
Data Modelling: Design and develop data models for CASA, Loan, and Credit Services, ensuring they meet business requirements and compliance standards. Create conceptual, logical, and physical data models that support the bank's strategic objectives. Ensure data models are optimized for performance, security, and scalability to support business operations and analytics.
Collaboration with Data Architect: Work closely with the Data Architect to establish the overall data architecture strategy and framework. Contribute to the definition of data model structures within a data mesh environment.
Data Quality and Governance: Ensure data quality and integrity in the data models by implementing best practices in data governance. Assist in the establishment of data management policies and standards. Conduct regular data audits and reviews to ensure data accuracy and consistency across systems.
Data Modelling Tools: ERwin, IBM InfoSphere Data Architect, Oracle Data Modeler, Microsoft Visio, or similar tools.
Databases: SQL, Oracle, MySQL, MS SQL Server, PostgreSQL, Neo4j (graph).
Data Warehousing Technologies: Snowflake, Teradata, or similar.
ETL Tools: Informatica, Talend, Apache NiFi, Microsoft SSIS, or similar.
Big Data Technologies: Hadoop, Spark (optional but preferred).
Cloud Technologies: Experience with data modelling on cloud platforms, e.g., Microsoft Azure (Synapse, Data Factory).
Posted 1 month ago
7.0 - 12.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: SAP Information Lifecycle Management (ILM)
Good to have skills: NA
Minimum 7.5 years of experience is required. Educational Qualification: BE
Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the application development process and ensuring successful project delivery.
Roles & Responsibilities: Expected to be an SME. Collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for the immediate team and across multiple teams. Lead the application development process. Ensure timely project delivery. Provide guidance and support to team members.
Professional & Technical Skills: Must-have skills: proficiency in SAP Information Lifecycle Management (ILM). Strong understanding of data lifecycle management. Experience in data archiving and retention policies. Knowledge of SAP data management solutions. Hands-on experience in SAP data migration. Experience in SAP data governance.
Additional Information: The candidate should have a minimum of 7.5 years of experience in SAP Information Lifecycle Management (ILM). This position is based at our Hyderabad office. A BE degree is required.
Qualification: BE
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Must have a Computer Science, Software Engineering, or related engineering degree or equivalent, with 7-10 years' experience, including a minimum of 5 years in a similar role, ideally in a multisite, hybrid/cloud-first environment.
Hands-on development experience coding in Python (mandatory).
Hands-on development experience with NoSQL (preferably Cosmos DB).
Extensive experience and knowledge of Azure (Azure Cosmos DB, Azure Functions, Pipelines, DevOps, Kubernetes, Storage) - mandatory.
Must have excellent knowledge of Microsoft Azure products and how to implement them: Enterprise Apps, Azure Functions, Cosmos DB, Containers, Event Grid, Logic Apps, Service Bus, Data Factory.
Must have hands-on experience with object-oriented development and have applied its principles in multiple solution designs: domain-driven, tiered applications, microservices.
Must have good knowledge of .NET, C#, and the standard libraries, as well as JavaScript (in a Vue.js context). Knowledge of other development and scripting languages is appreciated (Python, Java, TypeScript, PowerShell, bash).
Must have knowledge of development and deployment tools (IaC, Git, Docker, etc.). Comfortable working with APIs, webhooks, data transfer, and workflow technologies.
Deep understanding of software architecture and the long-term implications of architectural choices. Familiar with Agile project management and continuous delivery.
Posted 1 month ago
3.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
A proven and credible practitioner, your deep solution experience will help you lead a team of go-to subject matter experts. Fostering a culture of candour, collaboration, and growth-mindedness, you'll ensure co-creation across IBM Sales and client teams that drives investment in, and adoption of, IBM's strategic platforms. Overseeing your team's fusion of innovative solutions with modern IT architectures, integrated solutions, and offerings, you'll ensure they're helping to solve some of their clients' most complex business challenges. We're committed to success. In this role, your achievements will drive your career, team, and clients to thrive. A typical day may involve:
Strategic Team Leadership: Leading a team of technical sales experts to co-create innovative solutions with clients.
Partnership and Prototype Excellence: Collaborating with IBM and partners to deliver compelling prototypes.
Optimizing Resource Utilization: Promoting maximum use of IBM's Technology Sales resources.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Proficient in .NET Core with React or Angular. Experience in Agile teams applying the best architectural, design, and unit-testing patterns and practices, with an eye for code quality and standards. Azure Functions, Azure Service Bus, Azure Storage Account - mandatory. Azure Durable Functions, Azure Data Factory, Azure SQL or Cosmos DB (database) - required. Ability to write calculation rules and configurable consolidation rules.
Preferred technical and professional experience: Excellent written and verbal interpersonal skills for coordinating across teams. Should have at least 2 end-to-end implementation experiences. Ability to write and update the rules of historical overrides.
Posted 1 month ago
4.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
As a Software Developer, you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. Your primary responsibilities include:
Comprehensive Feature Development and Issue Resolution: Working on end-to-end feature development and solving challenges faced in the implementation.
Stakeholder Collaboration and Issue Resolution: Collaborating with key internal and external stakeholders to understand problems and issues with the product and features, and resolving them per the defined SLAs.
Continuous Learning and Technology Integration: Being eager to learn new technologies and implementing them in feature development.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Proficient in .NET Core with React or Angular. Experience in Agile teams applying the best architectural, design, and unit-testing patterns and practices, with an eye for code quality and standards. Azure Functions, Azure Service Bus, Azure Storage Account - mandatory. Azure Durable Functions, Azure Data Factory, Azure SQL or Cosmos DB (database) - required. Ability to write calculation rules and configurable consolidation rules.
Preferred technical and professional experience: Excellent written and verbal interpersonal skills for coordinating across teams. Should have at least 2 end-to-end implementation experiences. Ability to write and update the rules of historical overrides.
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Proficient with Azure platform development (Azure Functions, Azure services, etc.). 5 to 15 years of relevant software development experience with a fairly full-stack profile. Proficient in cloud-native deployment with CI/CD pipelines. Proficient in one or more data development technologies (SQL databases, NoSQL, cloud datastores, etc.). Good and effective communication skills to understand requirements and articulate solutions.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Proficient in .NET Core with React or Angular. Experience in Agile teams applying the best architectural, design, and unit-testing patterns and practices, with an eye for code quality and standards. Azure Functions, Azure Service Bus, Azure Storage Account - mandatory. Azure Durable Functions, Azure Data Factory, Azure SQL or Cosmos DB (database) - required. Ability to write calculation rules and configurable consolidation rules.
Preferred technical and professional experience: Excellent written and verbal interpersonal skills for coordinating across teams. Should have at least 2 end-to-end implementation experiences. Ability to write and update the rules of historical overrides.
Posted 1 month ago