
979 ADF Jobs - Page 14

JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Primary Roles and Responsibilities
- Develop modern data warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in the scheduler via Airflow (a minimal example follows this listing).

Skills and Qualifications
- Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python.
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (including MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks
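The responsibilities above call for orchestrating the data pipelines via Airflow. Purely as an illustration (not part of the posting), a minimal Airflow DAG wiring an extract-transform-load sequence could look like the sketch below; the DAG id, schedule and task names are hypothetical placeholders.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_raw_files():
    """Placeholder: pull raw files from the landing zone (e.g. staged by ADF or Snowpipe)."""
    print("extracting raw files")


def run_transformations():
    """Placeholder: trigger the Databricks / PySpark transformation step."""
    print("running transformations")


def load_reporting_layer():
    """Placeholder: publish curated tables to the Snowflake reporting schema."""
    print("loading reporting layer")


default_args = {"owner": "data-eng", "retries": 2, "retry_delay": timedelta(minutes=10)}

# DAG id, schedule and task names are invented for illustration.
with DAG(
    dag_id="dwh_nightly_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",   # called `schedule` in newer Airflow releases
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_raw_files)
    transform = PythonOperator(task_id="transform", python_callable=run_transformations)
    load = PythonOperator(task_id="load", python_callable=load_reporting_layer)

    extract >> transform >> load  # simple linear dependency chain
```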

Posted 1 week ago

Apply

4.0 - 6.0 years

5 - 11 Lacs

Thane, Navi Mumbai, Mumbai (All Areas)

Work from Office

Source: Naukri

Role & Responsibilities
- Support the implementation and maintenance of data governance policies, procedures, and standards specific to the banking industry.
- Hands-on experience creating and maintaining activities associated with data life cycle management and various data governance activities.
- Develop, update, and maintain the data dictionary for critical banking data assets, ensuring accurate definitions, attributes, and classifications.
- Work with business units and IT teams to standardize terminology across systems for consistency and clarity.
- Document end-to-end data lineage for key banking data processes (e.g., customer data, transaction data, risk management data).
- Create and maintain documentation of metadata, data dictionaries, and lineage for ongoing governance processes.
- Prepare reports and dashboards for data quality scores and lineage status.

Technical Skills
- Experience in data governance activities such as preparing data dictionary and data lineage documents.
- Proficient in writing database queries in at least one database (SQL, Oracle, MySQL, Postgres).
- Experience in data life cycle management.
- Understanding of data privacy and security frameworks specific to banking, such as PCI DSS and the DPDP Act.

Preferred Candidate Profile
- Minimum 4-8 years of experience in data governance activities.
- Bachelor's degree (B.Tech, BCA, BSc (IT), etc.) in Information Systems or a relevant field.
- Experience in data management life cycle and data governance activities.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Source: LinkedIn

Primary Roles and Responsibilities
- Develop modern data warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python.
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (including MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Primary Roles and Responsibilities
- Develop modern data warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python.
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (including MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site

Source: LinkedIn

Senior Engineer, Data Modeling - Gurgaon/Bangalore, India

AXA XL recognizes data and information as critical business assets, both in terms of managing risk and enabling new business opportunities. This data should not only be high quality, but also actionable - enabling AXA XL's executive leadership team to maximize benefits and facilitate sustained industrious advantage. Our Chief Data Office, also known as our Innovation, Data Intelligence & Analytics team (IDA), is focused on driving innovation by optimizing how we leverage data to drive strategy and create a new business model - disrupting the insurance market. As we develop an enterprise-wide data and digital strategy that moves us toward a greater focus on the use of data and data-driven insights, we are seeking a Data Engineer. The role will support the team's efforts towards creating, enhancing, and stabilizing the enterprise data lake through the development of data pipelines. This role requires a team player who can work well with team members from other disciplines to deliver data in an efficient and strategic manner.

What You'll Be Doing
What will your essential responsibilities include?
- Act as a data engineering expert and partner to Global Technology and data consumers in controlling the complexity and cost of the data platform, whilst enabling performance, governance, and maintainability of the estate.
- Understand current and future data consumption patterns and architecture at a granular level, and partner with Architects to ensure optimal design of data layers.
- Apply best practices in data architecture, for example: balance between materialization and virtualization, optimal level of de-normalization, caching and partitioning strategies, choice of storage and querying technology, and performance tuning.
- Lead and execute hands-on research into new technologies, formulating frameworks for assessing new technology against business benefit and implications for data consumers.
- Act as a best-practice expert and blueprint creator for ways of working such as testing, logging, CI/CD, observability and release, enabling rapid growth in data inventory and utilization of the Data Science Platform.
- Design prototypes and work in a fast-paced, iterative solution delivery model.
- Design, develop and maintain ETL pipelines using PySpark in Azure Databricks with Delta tables; use Harness for the deployment pipeline (a minimal sketch follows this listing).
- Monitor the performance of ETL jobs, resolve any issues that arise, and improve performance metrics as needed.
- Diagnose system performance issues related to data processing and implement solutions to address them.
- Collaborate with other teams to ensure successful integration of data pipelines into the larger system architecture.
- Maintain integrity and quality across all pipelines and environments.
- Understand and follow secure coding practices to ensure code is not vulnerable.
- You will report to the Application Manager.

What You Will Bring
We're looking for someone who has these abilities and skills:

Required Skills and Abilities
- Effective communication skills.
- Bachelor's degree in Computer Science, Mathematics, Statistics, Finance, a related technical field, or equivalent work experience.
- Relevant years of extensive work experience in various data engineering and modeling techniques (relational, data warehouse, semi-structured, etc.), application development, and advanced data querying skills.
- Relevant years of programming experience using Databricks.
- Relevant years of experience using the Microsoft Azure suite of products (ADF, Synapse and ADLS).
- Solid knowledge of network and firewall concepts.
- Solid experience writing, optimizing and analyzing SQL.
- Relevant years of experience with Python.
- Ability to break down complex data requirements and architect solutions into achievable targets.
- Robust familiarity with Software Development Life Cycle (SDLC) processes and workflow, especially Agile.
- Experience using Harness.
- Technical lead responsible for both individual and team deliveries.

Desired Skills and Abilities
- Worked on big data migration projects.
- Worked on performance tuning at both the database and big data platform level.
- Ability to interpret complex data requirements and architect solutions.
- Distinctive problem-solving and analytical skills combined with robust business acumen.
- Strong fundamentals in Parquet and Delta file formats.
- Effective knowledge of the Azure cloud computing platform.
- Familiarity with reporting software - Power BI is a plus.
- Familiarity with DBT is a plus.
- Passion for data and experience working within a data-driven organization.
- You care about what you do, and what we do.

Who We Are
AXA XL, the P&C and specialty risk division of AXA, is known for solving complex risks. For mid-sized companies, multinationals and even some inspirational individuals we don't just provide re/insurance, we reinvent it. How? By combining a comprehensive and efficient capital platform, data-driven insights, leading technology, and the best talent in an agile and inclusive workspace, empowered to deliver top client service across all our lines of business - property, casualty, professional, financial lines and specialty. With an innovative and flexible approach to risk solutions, we partner with those who move the world forward. Learn more at axaxl.com

What We Offer
Inclusion: AXA XL is committed to equal employment opportunity and will consider applicants regardless of gender, sexual orientation, age, ethnicity and origins, marital status, religion, disability, or any other protected characteristic. At AXA XL, we know that an inclusive culture and a diverse workforce enable business growth and are critical to our success. That's why we have made a strategic commitment to attract, develop, advance and retain the most diverse workforce possible, and create an inclusive culture where everyone can bring their full selves to work and can reach their highest potential. It's about helping one another - and our business - to move forward and succeed.
- Five Business Resource Groups focused on gender, LGBTQ+, ethnicity and origins, disability and inclusion, with 20 chapters around the globe
- Robust support for flexible working arrangements
- Enhanced family-friendly leave benefits
- Named to the Diversity Best Practices Index
- Signatory to the UK Women in Finance Charter
Learn more at axaxl.com/about-us/inclusion-and-diversity. AXA XL is an Equal Opportunity Employer.

Total Rewards: AXA XL's Reward program is designed to take care of what matters most to you, covering the full picture of your health, wellbeing, lifestyle and financial security. It provides dynamic compensation and personalized, inclusive benefits that evolve as you do. We're committed to rewarding your contribution for the long term, so you can be your best self today and look forward to the future with confidence.

Sustainability: At AXA XL, sustainability is integral to our business strategy. In an ever-changing world, AXA XL protects what matters most for our clients and communities. We know that sustainability is at the root of a more resilient future. Our 2023-26 sustainability strategy, called "Roots of resilience", focuses on protecting natural ecosystems, addressing climate change, and embedding sustainable practices across our operations.

Our Pillars
- Valuing nature: How we impact nature affects how nature impacts us. Resilient ecosystems - the foundation of a sustainable planet and society - are essential to our future. We're committed to protecting and restoring nature - from mangrove forests to the bees in our backyard - by increasing biodiversity awareness and inspiring clients and colleagues to put nature at the heart of their plans.
- Addressing climate change: The effects of a changing climate are far-reaching and significant. Unpredictable weather, increasing temperatures, and rising sea levels cause both social inequalities and environmental disruption. We're building a net-zero strategy, developing insurance products and services, and mobilizing to advance thought leadership and investment in societal-led solutions.
- Integrating ESG: All companies have a role to play in building a more resilient future. Incorporating ESG considerations into our internal processes and practices builds resilience from the roots of our business. We're training our colleagues, engaging our external partners, and evolving our sustainability governance and reporting.
- AXA Hearts in Action: We have established volunteering and charitable giving programs to help colleagues support causes that matter most to them, known as AXA XL's "Hearts in Action" programs. These include our Matching Gifts program, Volunteering Leave, and our annual volunteering day - the Global Day of Giving.
For more information, please see axaxl.com/sustainability.
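The responsibilities above include building and maintaining ETL pipelines with PySpark and Delta tables in Azure Databricks. Purely as an illustrative sketch (not AXA XL's actual code), a single batch step reading a landed file from ADLS and appending it to a Delta table could look like this; the storage path, column names and table name are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already provided as `spark`; this line makes the
# sketch self-contained when run elsewhere.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS landing path and column names; replace with real locations/schema.
raw_path = "abfss://landing@examplestorage.dfs.core.windows.net/claims/2025/06/"

claims = (
    spark.read.option("header", True).csv(raw_path)
    .withColumn("claim_amount", F.col("claim_amount").cast("decimal(18,2)"))
    .withColumn("ingest_ts", F.current_timestamp())
    .dropDuplicates(["claim_id"])
)

# Append to a managed Delta table (created on first write if it does not exist).
(claims.write
    .format("delta")
    .mode("append")
    .saveAsTable("curated.claims"))
```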

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis.

Grade: T5

Please note that the job will close at 12am on the posting close date, so please submit your application prior to the close date.

Accountabilities
- Data Pipeline: Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity.
- Data Integration: Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data.
- Data Quality Management: Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies, and enforce best practices to scale data analysis across platforms.
- Data Transformation: Process data by cleansing it and transforming it into the proper storage structure for querying and analysis using ETL and ELT processes.
- Data Enablement: Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations.

Qualifications & Specifications
- Master's/Bachelor's degree in Engineering, Computer Science, Math, Statistics or equivalent.
- Strong programming skills in Python/PySpark/SAS.
- Proven experience with large data sets and related technologies: Hadoop, Hive, distributed computing systems, Spark optimization.
- Experience on cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps.
- Hands-on experience with Databricks, Delta Lake, Workflows.
- Knowledge of DevOps processes and tools such as Docker, CI/CD, Kubernetes, Terraform, Octopus.
- Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs.
- Experience with a BI tool such as Power BI (good to have).
- Cloud migration experience (good to have).
- Cloud and data engineering certification (good to have).
- Experience working in an Agile environment.
- 4-6 years of relevant work experience is required.
- Experience with stakeholder management is an added advantage.

What We Are Looking For
Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or a similar discipline. Master's degree or PhD preferred.

Knowledge, Skills and Abilities
- Fluency in English
- Analytical skills
- Accuracy and attention to detail
- Numerical skills
- Planning and organizing skills
- Presentation skills
- Data modeling and database design
- ETL (Extract, Transform, Load) skills
- Programming skills

FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.

Our Company
FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World's Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding.

Our Philosophy
The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company.

Our Culture
Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today's global marketplace.

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon (Hybrid)
Primary Skills: Snowflake, SQL, DWH, Power BI, ETL and Informatica

We are seeking a skilled Snowflake Developer with a strong background in Data Warehousing (DWH), SQL, Informatica, Power BI, and related tools to join our Data Engineering team. The ideal candidate will have 5+ years of experience in designing, developing, and maintaining data pipelines, integrating data across multiple platforms, and optimizing large-scale data architectures. This is an exciting opportunity to work with cutting-edge technologies in a collaborative environment and help build scalable, high-performance data solutions.

Key Responsibilities:
- Minimum of 5+ years of hands-on experience in Data Engineering, with a focus on Data Warehousing, Business Intelligence, and related technologies.
- Data Integration & Pipeline Development: Develop and maintain data pipelines using Snowflake, Fivetran, and DBT for efficient ELT processes (Extract, Load, Transform) across various data sources.
- SQL Query Development & Optimization: Write complex, scalable SQL queries, including stored procedures, to support data transformation, reporting, and analysis.
- Data Modeling & ELT Implementation: Implement advanced data modeling techniques, such as Slowly Changing Dimensions (SCD Type-2), using DBT (see the sketch after this listing). Design and optimize high-performance data architectures.
- Business Requirement Analysis: Collaborate with business stakeholders to understand data needs and translate business requirements into technical solutions.
- Troubleshooting & Data Quality: Perform root cause analysis on data-related issues, ensuring effective resolution and maintaining high data quality standards.
- Collaboration & Documentation: Work closely with cross-functional teams to integrate data solutions. Create and maintain clear documentation for data processes, data models, and pipelines.

Skills & Qualifications:
- Expertise in Snowflake for data warehousing and ELT processes.
- Strong proficiency in SQL for relational databases and writing complex queries.
- Experience with Informatica PowerCenter for data integration and ETL development.
- Experience using Power BI for data visualization and business intelligence reporting.
- Experience with Fivetran for automated ELT pipelines.
- Familiarity with Sigma Computing, Tableau, Oracle, and DBT.
- Strong data analysis, requirement gathering, and mapping skills.
- Familiarity with cloud services such as Azure (RDBMS, Databricks, ADF), AWS or GCP.
- Experience with workflow management tools such as Airflow, Azkaban, or Luigi.
- Proficiency in Python for data processing (other languages like Java and Scala are a plus).

Education: Graduate degree in Computer Science, Statistics, Informatics, Information Systems, or a related field.
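The listing above mentions implementing Slowly Changing Dimensions (SCD Type-2), typically done with DBT snapshots. As a hedged illustration of the underlying pattern only, the same logic can be expressed as two Snowflake SQL statements driven from Python with the snowflake-connector-python package; the connection details, table and column names below are hypothetical.

```python
import snowflake.connector

# Connection details, table and column names are hypothetical placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="DIM",
)

# Step 1: close out current rows whose tracked attributes changed.
EXPIRE_CHANGED_ROWS = """
MERGE INTO dim_customer d
USING stg_customer s
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHEN MATCHED AND (d.email <> s.email OR d.segment <> s.segment) THEN
  UPDATE SET is_current = FALSE, valid_to = CURRENT_TIMESTAMP()
"""

# Step 2: insert a fresh current row for every key that no longer has one.
INSERT_NEW_VERSIONS = """
INSERT INTO dim_customer (customer_id, email, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.email, s.segment, CURRENT_TIMESTAMP(), NULL, TRUE
FROM stg_customer s
LEFT JOIN dim_customer d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL
"""

with conn.cursor() as cur:
    cur.execute(EXPIRE_CHANGED_ROWS)
    cur.execute(INSERT_NEW_VERSIONS)
conn.close()
```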

Posted 1 week ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do
- Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
- Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
- Build and optimize data architectures for operational and analytical purposes.
- Collaborate with cross-functional teams to gather and define data requirements.
- Implement data quality, data governance, and data security practices.
- Manage and optimize cloud-based data platforms (Azure/AWS).
- Develop and maintain Python/PySpark libraries for data ingestion, processing and integration with both internal and external data sources (a minimal ingestion-and-validation sketch follows this listing).
- Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks).
- Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Develop frameworks for data ingestion, transformation, and validation.
- Mentor junior data engineers and guide best practices in data engineering.
- Evaluate and integrate new technologies and tools to improve data infrastructure.
- Ensure compliance with data privacy regulations (HIPAA, etc.).
- Monitor performance and troubleshoot issues across the data ecosystem.
- Automate deployment of data pipelines using GitHub Actions / Azure DevOps.

What You Will Need
- Bachelor's or master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
- Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
- Extensive working experience with advanced SQL and a deep understanding of SQL.
- Good experience in Azure Data Factory (ADF), Databricks, Python and PySpark.
- Good experience with modern data storage concepts: data lake, lakehouse.
- Experience in other cloud services (AWS) and data processing technologies is an added advantage.
- Ability to enhance and develop ETL processes and resolve defects using cloud services.
- Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files and Access databases.
- Experience with software development methodologies (Agile, Waterfall) and version control tools.
- Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
- Good communication skills.

What Would Be Nice To Have
- AWS ETL platform: Glue, S3.
- One or more programming languages such as Java or .NET.
- Experience in the US healthcare domain and insurance claim processing.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.

If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
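The responsibilities above ask for Python/PySpark ingestion and validation frameworks for large client feeds (text, CSV, EDI X12, etc.). A minimal sketch of one such step, assuming a hypothetical claims CSV feed with invented schema, paths and table names, might look like this; a production framework would add per-format parsers, richer rules and monitoring.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DateType, DecimalType

spark = SparkSession.builder.getOrCreate()

# Hypothetical claims-file schema; real feeds (text, CSV, EDI X12) would each need their own parser.
claim_schema = StructType([
    StructField("claim_id", StringType(), nullable=False),
    StructField("member_id", StringType(), nullable=False),
    StructField("service_date", DateType(), nullable=True),
    StructField("billed_amount", DecimalType(18, 2), nullable=True),
])

raw = (spark.read
       .schema(claim_schema)
       .option("header", True)
       .csv("abfss://landing@examplelake.dfs.core.windows.net/claims/*.csv"))

# Simple validation split: rows missing key fields are quarantined for review.
passes = F.col("claim_id").isNotNull() & F.col("member_id").isNotNull()
valid, rejected = raw.filter(passes), raw.filter(~passes)

valid.write.format("delta").mode("append").saveAsTable("bronze.claims")
rejected.write.format("delta").mode("append").saveAsTable("quarantine.claims_rejected")
```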

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Primary Roles and Responsibilities
- Develop modern data warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python.
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (including MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Primary Roles and Responsibilities
- Develop modern data warehouse solutions using Snowflake, Databricks and ADF.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop a data model to fulfil them.
- Help junior team members resolve issues and technical challenges.
- Drive technical discussions with client architects and team members.
- Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
- Skills: SQL, PL/SQL, Spark, star and snowflake dimensional modeling, Databricks, Snowsight, Terraform, Git, Unix shell scripting, SnowSQL, Cassandra, CircleCI, Azure, PySpark, Snowpipe, MongoDB, Neo4j, Azure Data Factory, Snowflake, Python.
- Bachelor's and/or master's degree in computer science or equivalent experience.
- 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
- Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
- Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight and Snowflake connectors.
- Deep understanding of star and snowflake dimensional modeling.
- Strong knowledge of data management principles.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL and Spark (PySpark).
- Experience building ETL / data warehouse transformation processes.
- Experience with open-source non-relational / NoSQL data repositories (including MongoDB, Cassandra, Neo4j).
- Experience working with structured and unstructured data, including imaging and geospatial data.
- Experience working in a DevOps environment with tools such as Terraform, CircleCI and Git.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting and query optimization.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Snowflake / Azure Data Factory / PySpark / Databricks

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

About Psiog
Psiog Digital is a pure-play software services specialist focused on delivering technology services and solutions in the areas of operational efficiency and customer experience. Headquartered in Chennai, India, Psiog serves global customers in North America, EMEA, ANZ and APAC across a variety of industries such as Manufacturing, Energy & Utilities, Hi-Tech, Financial Services, Retail and Automobile. Psiog's core expertise is in building, implementing, testing, integrating and maintaining applications leveraging a variety of cutting-edge tools and technologies. Key service offerings for the mid-market segment are Application Development & Maintenance, Outsourced Product Development, and Testing.

Key Skills: SQL, ETL Tools, Reporting

Requirements:
- Expert-level knowledge in RDBMS (SQL Server) with a clear understanding of SQL query writing, object creation and management, and performance and optimisation of DB/DWH operations.
- Good understanding of transactional and dimensional data modelling, star schema, facts/dimensions, relationships.
- Good understanding of ETL concepts and exposure to at least one ETL tool (SSIS/ADF/similar).
- Expert-level knowledge in at least one MS reporting/visualisation tool (Power BI/Azure Analysis Services).
- Should have worked on at least one development lifecycle of one of the below:
  - End-to-end ETL project (involving any ETL tool).
  - End-to-end reporting project (involving a reporting tool; Power BI & Analysis Services preferred).
- Ability to write and review test cases and test code, and validate code.
- Ability to perform data analysis on different reports to troubleshoot missing data, suggest value-added metrics and consult on best practices for the customer.
- Good understanding of SDLC practices like source control, version management, usage of Azure DevOps and CI/CD practices.
- Should have the skill to fully understand the context and use-case of a project and have a personal vision for it; play the role of interfacing with the customer directly on a daily basis.
- Should be able to converse with functional users and convert requirements into tangible processes/models and documentation in available templates.
- Should be able to provide consultative options to the customer on the best way to execute projects.
- Should have a good understanding of project dynamics: scoping, setting estimates, setting timelines, working around timelines in case of exceptions, etc.
- Should be able to demonstrate the ability to technically lead a team of developers and testers and perform design reviews, code reviews, etc.
- Should have good presentation and communication skills, written and verbal, especially to express technical ideas.

Opportunities:
- Work as part of the Enterprise Data Management Centre of Excellence and contribute to the strategic growth of a fast-growing practice.
- Mentor young, fresh minds from across premiere institutions.
- Exposure to multiple clients and projects simultaneously through the CoE model.

Additional Skills:
- Knowledge of Python.
- Knowledge of Azure DevOps, Source Control/Repos.

(ref:hirist.tech)

Posted 1 week ago

Apply

12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Candidate should have overall 12+ years of experience, with 5+ years of experience managing data projects in the ETL, DWH and BI area.
- Must have knowledge of Agile Scrum methodology and experience of working as a Scrum Master.
- Good to have knowledge and working experience in the Azure cloud platform and Azure Data Factory (ADF).
- Must have experience with SQL-based technologies (e.g. Microsoft SQL Server) and Azure SQL DB.
- Experience of managing end-to-end project delivery from the Discovery phase to Production deployment; effective project tracking and risk management.
- Should have experience of handling multiple stakeholders from the customer side and managing teams across multiple locations.
- Excellent communication skills and experience of handling customer stakeholders.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Vishakhapatnam, Andhra Pradesh, India

On-site

Source: LinkedIn

Position: Azure Data Engineer
Experience: 5+ years
Location: Visakhapatnam
Primary Skills: Azure Data Factory, Azure Synapse Analytics, PySpark, Scala, CI/CD

Job Description:
- 5+ years of experience in data engineering or a related field.
- Strong hands-on experience with Azure Synapse Analytics and Azure Data Factory (ADF).
- Proven experience with Databricks, including development in PySpark or Scala.
- Proficiency in DBT for data modeling and transformation.
- Expertise in SQL and performance tuning techniques.
- Solid understanding of data warehousing concepts and ETL/ELT design patterns.
- Experience working in Agile environments and familiarity with Git-based version control.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with CI/CD tools and DevOps for data engineering.
- Familiarity with Delta Lake and Lakehouse architecture.
- Exposure to other Azure services such as Azure Data Lake Storage (ADLS), Azure Key Vault, and Azure DevOps.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About the Role
Candidate should have overall 12+ years of experience, with 5+ years of experience managing data projects in the ETL, DWH and BI area.

Responsibilities
- Must have knowledge of Agile Scrum methodology and experience of working as a Scrum Master.
- Good to have knowledge and working experience in the Azure cloud platform and Azure Data Factory (ADF).
- Must have experience with SQL-based technologies (e.g. Microsoft SQL Server) and Azure SQL DB.
- Experience of managing end-to-end project delivery from the Discovery phase to Production deployment.
- Effective project tracking and managing risks.
- Should have experience of handling multiple stakeholders from the customer side.
- Managing teams from multiple locations.
- Excellent communication skills and experience of handling customer stakeholders.

Qualifications
12+ years of experience.

Required Skills
- Agile Scrum methodology.
- SQL-based technologies.
- Azure cloud platform.

Preferred Skills
- Experience with Azure Data Factory (ADF).
- Experience in managing data projects in the ETL, DWH and BI area.

Pay range and compensation package: Not specified.

Equal Opportunity Statement: We are committed to diversity and inclusivity.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Fabric
Good-to-have skills: NA
Minimum 5 years of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with organizational goals, ensuring that the solutions provided are effective and efficient.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to discuss progress and address any roadblocks.

Professional & Technical Skills:
- Lead and manage a team of data engineers, providing guidance, mentorship, and support; foster a collaborative and innovative team culture.
- Work closely with stakeholders to understand data requirements and business objectives, and translate business requirements into technical specifications for the data warehouse.
- Lead the design of data models, ensuring they meet business needs and adhere to best practices; collaborate with the Technical Architect to design dimensional models for optimal performance.
- Design and implement data pipelines for ingestion, transformation, and loading (ETL/ELT) using Fabric Data Factory Pipelines and Dataflows Gen2.
- Develop scalable and reliable solutions for batch data integration across various structured and unstructured data sources.
- Oversee the development of data pipelines for smooth data flow into the Fabric Data Warehouse.
- Implement and maintain data solutions in Fabric Lakehouse and Fabric Warehouse.
- Monitor and optimize pipeline performance, ensuring minimal latency and resource efficiency; tune data processing workloads for large datasets in Fabric Warehouse and Lakehouse.
- Exposure to ADF and Databricks.

Additional Information:
- The candidate should have a minimum of 5 years of experience with Microsoft Fabric.
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Project description
You will be working in a global team that manages and performs a global technical control. You'll be joining the Asset Management team, which looks after the asset management data foundation and operates a set of in-house developed tooling. As an IT engineer you'll play an important role in ensuring the development methodology is followed, and lead technical design discussions with the architects. Our culture centers around partnership with our businesses, transparency, accountability and empowerment, and passion for the future.

Responsibilities
- Design, build, and manage data pipelines using Azure Data Integration Services (Azure Databricks, ADF, Azure Functions).
- Collaborate closely with the security team to develop robust data solutions that support our security initiatives.
- Implement, monitor, and optimize data processes, ensuring adherence to security and data governance best practices.
- Troubleshoot and resolve data-related issues, ensuring data quality and accessibility.
- Develop strategies for data acquisition and integration of new data into our existing architecture.
- Document procedures and workflows associated with data pipelines, contributing to best practices.
- Share knowledge about the latest Azure Data Integration Services trends and techniques.
- Implement and manage CI/CD pipelines to automate development as well as data and UI test cases, and integrate testing with development pipelines.
- Conduct regular reviews of the system, identify possible security risks, and implement preventive measures.

Skills
Must have:
- Excellent command of English.
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data integration and pipeline development using Azure Data Integration Services, including Azure Data Factory and Azure Databricks.
- Hands-on experience with Python and Spark.
- Strong understanding of security principles in the context of data integration.
- Proven experience with SQL and other data query languages.
- Ability to write, debug, and optimize data transformations and datasets.
- Extensive experience in designing and implementing ETL solutions using Azure Databricks, Azure Data Factory or similar technologies.
- Familiarity with automated testing frameworks using Squash.

Nice to have:
- Data Architecture & Engineering: Design and implement efficient and scalable data warehousing solutions using Azure Databricks and Microsoft Fabric.
- Business Intelligence & Data Visualization: Create insightful Power BI dashboards to help drive business decisions.

Posted 1 week ago

Apply

0.0 - 2.0 years

0 Lacs

Kochi, Kerala

On-site

Source: Indeed

Job Role: Senior Dot Net Developer
Experience: 8+ years
Notice period: Immediate
Location: Trivandrum / Kochi

Introduction
Candidates with 8+ years of experience in the IT industry and strong .NET/.NET Core/Azure Cloud Services/Azure DevOps skills. This is a client-facing role and hence requires strong communication skills. This is for a US client and the resource should be hands-on, with experience in coding and Azure Cloud. Working hours: 8 hours, with 4 hours of overlap during the EST time zone (12 PM - 9 PM). This overlap is mandatory as meetings happen during these hours.

Responsibilities include:
- Design, develop, enhance, document, and maintain robust applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery.
- Integrate and support third-party APIs and external services.
- Collaborate across cross-functional teams to deliver scalable solutions across the full technology stack.
- Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC).
- Participate in Agile/Scrum ceremonies and manage tasks using Jira.
- Understand technical priorities, architectural dependencies, risks, and implementation challenges.
- Troubleshoot, debug, and optimize existing solutions with a strong focus on performance and reliability.

Certifications:
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Other relevant certifications in Azure, .NET, or Cloud technologies

Primary Skills:
- 8+ years of hands-on development experience with C#, .NET Core 6/8+, Entity Framework / EF Core, JavaScript, jQuery, REST APIs.
- Expertise in MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types.
- Skilled in unit testing with xUnit, MSTest.
- Strong in software design patterns, system architecture, and scalable solution design.
- Ability to lead and inspire teams through clear communication, technical mentorship, and ownership.
- Strong problem-solving and debugging capabilities.
- Ability to write reusable, testable, and efficient code.
- Develop and maintain frameworks and shared libraries to support large-scale applications.
- Excellent technical documentation, communication, and leadership skills.
- Microservices and Service-Oriented Architecture (SOA).
- Experience in API integrations.
- 2+ years of hands-on experience with Azure Cloud Services, including Azure Functions, Azure Durable Functions, Azure Service Bus, Event Grid, Storage Queues, Blob Storage, Azure Key Vault, SQL Azure, Application Insights and Azure Monitoring.

Secondary Skills:
- Familiarity with AngularJS, ReactJS, and other front-end frameworks.
- Experience with Azure API Management (APIM).
- Knowledge of Azure containerization and orchestration (e.g., AKS/Kubernetes).
- Experience with Azure Data Factory (ADF) and Logic Apps.
- Exposure to application support and operational monitoring.
- Azure DevOps: CI/CD pipelines (Classic / YAML).

Job Types: Full-time, Permanent
Pay: From ₹2,500,000.00 per year
Benefits: Paid time off, Provident Fund
Location Type: In-person
Schedule: UK shift
Ability to commute/relocate: Kochi, Kerala: Reliably commute or plan to relocate before starting work (Preferred)
Experience: .NET: 8 years (Preferred); Azure: 2 years (Preferred)
Work Location: In person
Speak with the employer: +91 9932724170

Posted 1 week ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

Role: Azure Databricks Engineer
Location: Kolkata
Experience: 7+ years

Must-Have
- Build the solution for optimal extraction, transformation, and loading of data from a wide variety of data sources using Azure data ingestion and transformation components (a minimal upsert sketch follows this listing).
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience with ADF and Data Flow.
- Experience with big data tools like Delta Lake and Azure Databricks.
- Experience with Synapse.
- Skills in designing an Azure data solution.
- Assemble large, complex data sets that meet functional / non-functional business requirements.

Good-to-Have
- Working knowledge of Azure DevOps.

Responsibility of / Expectations from the Role
- Customer centric: Work closely with client teams to understand project requirements and translate them into technical design. Experience working in Scrum or with Scrum teams.
- Internal collaboration: Work with project teams and guide the end-to-end project lifecycle; resolve technical queries. Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data needs.
- Soft skills: Good communication skills; ability to interact with various internal groups and CoEs.
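The must-have list above centres on loading data into Delta Lake on Azure Databricks from sources landed by ADF. A minimal, illustrative upsert into an existing Delta table using the Delta Lake Python API is sketched below; the lake path, key column and table name are assumptions, not details from the posting.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already available as `spark` on Databricks

# Hypothetical source: an incremental extract landed in the lake by an ADF copy activity.
updates = spark.read.parquet(
    "abfss://landing@examplelake.dfs.core.windows.net/customers/incremental/"
)

# Upsert the batch into an existing Delta table keyed on customer_id.
target = DeltaTable.forName(spark, "silver.customers")

(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```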

Posted 1 week ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Role: Data Modeler
Experience: 6+ years
Location: Bangalore
Notice: Immediate joiners only

Job Description
We are seeking a Data Modeler to design, develop, and maintain conceptual, logical, and physical data models that support business needs. The ideal candidate will work closely with data engineers, architects, business analysts and business stakeholders to ensure data consistency, integrity, and performance across various systems.

Key Responsibilities:
- Design and develop conceptual, logical, and physical data models based on business requirements.
- Collaborate with business analysts, data engineers, and architects to ensure data models align with business goals.
- Optimize database design to enhance performance, scalability, and maintainability.
- Define and enforce data governance standards, including naming conventions, metadata management, and data lineage.
- Work with ETL and BI teams to ensure seamless data integration and reporting capabilities.
- Analyze and document data relationships, dependencies, and transformations across various platforms.
- Maintain data dictionaries and ensure compliance with industry best practices.
- On the Azure data engineering stack, the data modeler needs to be hands-on with ADF, Azure Databricks, SCD, Unity Catalog, PySpark, PowerDesigner and Biz Designer.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

SQL Developer with SSIS (ETL Developer)
Location: Hyderabad (Hybrid Model)
Experience Required: 5+ Years
Joining Timeline: Immediate to 20 Days
Role Type: Individual Contributor (IC)

Position Summary
We are seeking a skilled SQL Developer with strong SSIS expertise to join a dynamic team supporting a leading US-based banking client. This is a hybrid role based in Hyderabad, suited for professionals experienced in building scalable, auditable ETL pipelines and collaborating within Agile teams.

Must-Have Skills
- SQL Development: Expert in writing complex T-SQL queries, stored procedures, joins, and transactions. Proficient in handling error logging and audit logic for production-grade environments.
- ETL using SSIS: Strong experience in designing, implementing, and debugging SSIS packages using components like script tasks, event handlers, and nested packages.
- Batch Integration: Hands-on experience in managing high-volume batch data ingestion from various sources using SSIS, with performance and SLA considerations.
- Agile Delivery: Actively contributed to Agile/Scrum teams, participated in sprint planning, code reviews, demos, and met sprint commitments.
- Stakeholder Collaboration: Proficient in engaging with business/product owners for requirement gathering, transformation validation, and output review. Excellent communication skills required.

Key Responsibilities
- Design and develop robust, auditable SSIS workflows based on business and data requirements.
- Ensure efficient deployment and maintenance using CI/CD tools like Jenkins or UCD.
- Collaborate with stakeholders to align solutions with business needs and data governance standards.
- Maintain and optimize SQL/SSIS packages for production environments, ensuring traceability, performance, and error handling.

Nice-to-Have Skills
- Cloud ETL (ADF): Exposure to Azure Data Factory or equivalent ETL tools.
- CI/CD (Jenkins/UCD): Familiarity with DevOps deployment tools and pipelines.
- Big Data (Spark/Hadoop): Understanding of or integration experience with big data systems.
- Other RDBMS (Oracle/Teradata): Experience in querying and integrating data from additional platforms.

Apply here: sapna@helixbeat.com

Posted 1 week ago

Apply

7.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Greetings from TCS! TCS is hiring for a Databricks Architect.

Interview Mode: Virtual
Required Experience: 7-15 years
Work Locations: Chennai, Kolkata, Hyderabad

Must have:
- Hands-on experience in ADF, Azure Databricks, PySpark, Azure Data Factory, Unity Catalog, data migrations, and data security.

Good to have:
- Spark SQL, Spark Streaming, Kafka (see the streaming sketch after this listing).
- Hands-on experience with Databricks on AWS, Apache Spark, AWS S3 (data lake), AWS Glue, AWS Redshift/Athena, AWS Data Catalog, Amazon Redshift, Amazon Athena, AWS RDS, AWS EMR (Spark/Hadoop), and CI/CD (CodePipeline, CodeBuild).
- AWS Lambda, Python, AWS CI/CD, Kafka, MLflow, TensorFlow or PyTorch, Airflow, CloudWatch.

If interested, kindly send your updated CV and the details below via DM/e-mail to srishti.g2@tcs.com:
- Name
- E-mail ID
- Contact number
- Highest qualification (full-time)
- Preferred location
- Highest qualification university
- Current organization
- Total years of experience
- Relevant years of experience
- Any gap: mention the number of months/years (career/education)
- If any, reason for the gap
- Is it a rejoining (rebegin)
- Previous organization name
- Current CTC
- Expected CTC
- Notice period
- Have you worked with TCS before (permanent/contract)?
- If shortlisted, will you be available for a virtual interview on 13-Jun-25 (Friday)?
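
For the Spark Streaming and Kafka items in the good-to-have list, a minimal PySpark Structured Streaming sketch could look like the one below. The broker address, topic, checkpoint path, and target table are placeholders, and the spark-sql-kafka connector package is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-to-delta-sketch").getOrCreate()

# Read a hypothetical Kafka topic as a stream; broker and topic names are placeholders.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .option("startingOffsets", "latest")
    .load()
    .select(F.col("key").cast("string"), F.col("value").cast("string"), "timestamp")
)

# Land the raw events in a Delta table, with checkpointing for fault-tolerant progress tracking.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/transactions_raw")
    .outputMode("append")
    .toTable("bronze.transactions_raw")
)
query.awaitTermination()
```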

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

Remote


Job Title: Data Engineer with API Development (Remote)
Experience: 7+ years
Location: Remote/Hybrid
Shift Timing: 11 am to 8:30 pm
Contract: 6 months, extendable

Job Description:
- 7+ years of overall IT experience.
- 3+ years of experience in Azure architecture/engineering (one or more of: Azure Functions, App Services, API development).
- 3+ years of development experience (e.g., Python, C#, Go, or other).
- 1+ years of CI/CD experience (GitHub preferred).
- 2+ years of API development experience, both creating and consuming APIs (a minimal Azure Functions sketch follows this listing).

Nice to have:
- Service Bus
- Terraform
- ADF
- Databricks
- Spark
- Scala
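
As a hedged illustration of the API-development side of this role, here is a minimal HTTP-triggered Azure Function using the Python v2 programming model. The route, the order_id parameter, and the response payload are invented for the example; a real service would back this with a data store and proper validation.

```python
import json
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="orders/{order_id}", methods=["GET"])
def get_order(req: func.HttpRequest) -> func.HttpResponse:
    """Return a hypothetical order record; placeholder logic only."""
    order_id = req.route_params.get("order_id")
    if not order_id:
        return func.HttpResponse("order_id is required", status_code=400)
    payload = {"order_id": order_id, "status": "PENDING"}  # stand-in response body
    return func.HttpResponse(
        json.dumps(payload), mimetype="application/json", status_code=200
    )
```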

Posted 1 week ago

Apply

0 years

0 Lacs

Jalandhar, Punjab, India

On-site


AlgoTutor is looking for a confident and engaging HFT trainer to deliver in-person training sessions at one of our partner colleges in Jalandhar. If you are passionate about preparing students to crack roles at HFT firms by delivering the curriculum below, we want to hear from you!

Location: On-site/online, college campus in Bengaluru
Duration: 28 days
Daily Hours: 6 hours/day
Start Date: 16th June

Key Responsibilities: Conduct sessions on the curriculum below.

Week 1: Foundations of HFT
- Day 1: Introduction to HFT, market microstructure, environment setup, data parsing.
- Day 2: Low-latency C++ programming, multithreading, latency profiling.
- Day 3: Data structures (arrays, hash maps), limit order book implementation.
- Day 4: Networking (TCP/UDP, FIX protocol), low-latency message handling.
- Day 5: Tick data processing, moving averages, real-time metrics.
- Day 6: HFT system architecture, event-driven systems, the trading loop.
- Day 7: Weekly review; build a market data simulator project.

Week 2: Trading Strategies & Optimization
- Day 8: Statistical arbitrage, mean reversion, basic strategy coding.
- Day 9: Market-making strategies, order logic, inventory management.
- Day 10: Time series analysis (MA, ARIMA), predictive modeling.
- Day 11: Advanced low-latency techniques, multithreaded optimization.
- Day 12: Execution algorithms (VWAP, TWAP), large order handling.
- Day 13: Real-time risk management, stop-loss, risk detection.
- Day 14: Weekly review; build a market-making bot.

Week 3: Advanced Strategies & Infrastructure
- Day 15: Pairs trading, cointegration, ADF test, backtesting (a short sketch of the ADF test follows this listing).
- Day 16: Latency arbitrage, strategy simulation, ethics.
- Day 17: ML for HFT, feature engineering, order flow prediction.
- Day 18: Infrastructure: co-location, FPGAs, network optimization.
- Day 19: Order book dynamics, spoofing detection, high-volume trading.
- Day 20: Portfolio optimization (Sharpe, Kelly), multi-asset strategies.
- Day 21: Weekly review; build a latency arbitrage system.

Week 4: Real-World Deployment
- Day 22: Regulations (MiFID II, SEC), compliance checks.
- Day 23: Backtesting framework, avoiding bias, strategy validation.
- Day 24: Live trading simulation using real-time feeds.
- Day 25: System monitoring, logging, failure detection.
- Day 26: Capstone project (part 1): design and implement core logic.
- Day 27: Capstone project (part 2): risk, compliance, testing, presentation prep.
- Day 28: Final presentations, wrap-up, certification, career advice.

Requirements:
- Strong command of HFT concepts
- Prior experience in classroom/online training
- Passion for teaching

Why Work With Us?
- Impact hundreds of students by enhancing their communication and confidence
- Be part of a mission-driven EdTech company shaping future professionals
- Opportunity for long-term collaboration on future training programs

Apply now and grow with AlgoTutor!
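
Note that the ADF test on Day 15 is the Augmented Dickey-Fuller stationarity test, not Azure Data Factory. A short Python sketch of the pairs-trading workflow it supports, using synthetic prices and a hedge ratio from an OLS fit (all data and thresholds here are illustrative), could look like this:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

# Synthetic price series for two instruments; in the course these would be real tick/bar data.
rng = np.random.default_rng(42)
x = np.cumsum(rng.normal(size=1_000)) + 100.0
y = 1.5 * x + rng.normal(scale=2.0, size=1_000)

# Hedge ratio from an OLS fit, then test the spread for stationarity with the ADF test (Day 15).
beta = sm.OLS(y, sm.add_constant(x)).fit().params[1]
spread = y - beta * x

adf_stat, p_value, *_ = adfuller(spread)
print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the spread is mean-reverting, i.e. the pair may be tradable.
```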

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote


We are seeking a proactive and business-oriented Data Functional Consultant with strong experience in Azure Data Factory and Azure Databricks. This role bridges the gap between business stakeholders and technical teams: translating business needs into scalable data solutions, ensuring effective data management, and enabling insight-driven decision-making. The ideal candidate is not a pure developer or data engineer, but someone who understands business processes, data flows, and stakeholder priorities, and can help drive value from data platforms using cloud-native Azure services.

What You'll Do:
- Collaborate closely with business stakeholders to gather, understand, and document functional data requirements.
- Translate business needs into high-level data designs, data workflows, and process improvements.
- Work with data engineering teams to define and validate ETL/ELT logic and data pipeline workflows using Azure Data Factory and Databricks.
- Facilitate functional workshops and stakeholder meetings to align on data needs and business KPIs.
- Act as a bridge between business teams and data engineers to ensure accurate implementation and delivery of data solutions.
- Conduct data validation and UAT, and support users in adopting data platforms and self-service analytics (a simple validation sketch follows this listing).
- Maintain functional documentation, data dictionaries, and mapping specifications.
- Assist in defining data governance, data quality, and master data management practices from a business perspective.
- Monitor data pipeline health and help triage issues from a functional/business-impact standpoint.

What You'll Bring:
- Proven exposure to Azure Data Factory (ADF) for orchestrating data workflows.
- Practical experience with Azure Databricks for data processing (functional understanding, not necessarily coding).
- Strong understanding of data warehousing, data modeling, and business KPIs.
- Experience working in agile or hybrid project environments.
- Excellent communication and stakeholder management skills.
- Ability to translate complex technical details into business-friendly language.
- Familiarity with tools like Power BI, Excel, or other reporting solutions is a plus.
- Background in the banking or finance industries is a bonus.

What We Offer:
At Delphi, we are dedicated to creating an environment where you can thrive, both professionally and personally. Our competitive compensation package, performance-based incentives, and health benefits are designed to ensure you're well supported. We believe in your continuous growth and offer company-sponsored certifications, training programs, and skill-building opportunities to help you succeed. We foster a culture of inclusivity and support, with remote work options and a fully supported work-from-home setup to ensure your comfort and productivity. Our positive and inclusive culture includes team activities and wellness and mental health programs to ensure you feel supported.
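
For the data-validation and UAT responsibilities, a functional consultant often only needs lightweight, readable checks rather than full pipelines. A minimal PySpark sketch, where the curated.customer_transactions table and its columns are assumptions for illustration, might be:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical curated table produced by an ADF + Databricks pipeline.
df = spark.table("curated.customer_transactions")

# Simple business-facing checks: completeness, plausibility, and uniqueness.
checks = {
    "null_customer_ids": df.filter(F.col("customer_id").isNull()).count(),
    "negative_amounts": df.filter(F.col("amount") < 0).count(),
    "duplicate_transaction_ids": df.count() - df.dropDuplicates(["transaction_id"]).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data validation failed: {failed}")
print("All functional data-quality checks passed.")
```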

Posted 1 week ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Role: Azure Data Engineer
Location: Gurugram

We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do:
- Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Microsoft Fabric, and Azure Data Factory.
- Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable.
- Work on modern data lakehouse architectures and contribute to data governance and quality frameworks (a bronze-to-silver sketch follows this listing).

Tech Stack: Azure | Databricks | PySpark | SQL

What We're Looking For:
- 3+ years of experience in data engineering or analytics engineering.
- Hands-on experience with cloud data platforms and large-scale data processing.
- Strong problem-solving mindset and a passion for clean, efficient data design.

Job Description:
- Minimum 3 years of experience with modern data engineering/data warehousing/data lake technologies on cloud platforms like Azure, AWS, GCP, or Databricks; Azure experience is preferred over other cloud platforms.
- 5 years of proven experience with SQL, schema design, and dimensional data modelling.
- Solid knowledge of data warehouse best practices, development standards, and methodologies.
- Experience with ETL/ELT tools like ADF, Informatica, or Talend, and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, or Google BigQuery.
- Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
- An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced, dynamic environment.
- Excellent communication and teamwork abilities.

Nice-to-Have Skills:
- Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge.
- SAP ECC/S4 and HANA knowledge.
- Intermediate knowledge of Power BI.
- Azure DevOps and CI/CD deployments; cloud migration methodologies and processes.

Best regards,
Santosh Cherukuri
Email: scherukuri@bayonesolutions.com
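
As a rough sketch of the lakehouse work described above, here is a minimal PySpark bronze-to-silver step that reads raw JSON from ADLS and writes a Delta table. The storage path, table name, and columns are placeholders; in practice the path would typically arrive as a pipeline parameter, for example from ADF.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical landing path in ADLS; in a real pipeline this comes from orchestration parameters.
raw_path = "abfss://raw@yourlake.dfs.core.windows.net/sales/2025/06/"

bronze = spark.read.json(raw_path)

# Basic cleansing: deduplicate, enforce types, drop records without a business key.
silver = (
    bronze.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("silver.sales_orders")
)
```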

Posted 1 week ago

Apply

Exploring ADF Jobs in India

The job market for ADF professionals in India is growing steadily, with numerous opportunities available for job seekers in this field. The abbreviation covers both Azure Data Factory, the cloud data-integration service that dominates the listings above, and Oracle's Application Development Framework for building enterprise applications, and companies across various industries are actively looking for skilled professionals in both areas.

Top Hiring Locations in India

Here are five major cities in India with high demand for ADF professionals:
- Bangalore
- Hyderabad
- Pune
- Chennai
- Mumbai

Average Salary Range

The estimated salary range for ADF professionals in India varies by experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum

Career Path

In the ADF job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Technical Lead, and Architect. As professionals gain more experience and expertise in ADF, they can progress to higher-level positions with greater responsibilities.

Related Skills

In addition to ADF expertise, professionals in this field are often expected to have knowledge of related technologies such as Java, Oracle Database, SQL, JavaScript, and web development frameworks like Angular or React.

Interview Questions

Here are sample interview questions for ADF roles, grouped by difficulty level:

Basic:
- What is ADF and what are its key features?
- What is the difference between ADF Faces and ADF Task Flows?

Medium:
- Explain the lifecycle of an ADF application.
- How do you handle exceptions in ADF applications?

Advanced:
- Discuss the advantages of using ADF Business Components.
- How would you optimize performance in an ADF application?

Closing Remark

As you explore job opportunities in the ADF market in India, make sure to enhance your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right preparation and mindset, you can excel in your ADF career and secure rewarding opportunities in the industry. Good luck!
