Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hi {fullName},

There is an opportunity for an Azure Data Engineer (Databricks, PySpark, Python, SQL) in Hyderabad, with a walk-in interview on 24th May 25 between 9:30 AM and 12:30 PM. If you are interested, please share the details below to mamidi.p@tcs.com with the subject line "Azure Data Engineer 24th May 25":
Email ID:
Contact No:
Total Experience:
Preferred Location:
Current CTC:
Expected CTC:
Notice Period:
Current Organization:
Highest Qualification (Full Time):
Highest Qualification University:
Any Gap in Education or Employment:
If Yes, How Many Years and Reason for Gap:
Are You Available for the Walk-in Interview on 24th May 25 (Yes/No):
We will send you a mail by tomorrow night if you are shortlisted.

Role: Azure Data Engineer

Desired Competencies (Technical/Behavioral Competency)
Must-Have:
Minimum 4+ years of development experience in Azure.
Must have Data Warehouse / Data Lake development experience.
Must have Azure Data Factory (ADF) and Azure SQL DB experience.
Must have Azure Databricks experience using Python, Spark, or Scala.
Nice to have Data Modelling and Azure Synapse experience.
Passion for data quality, with the ability to integrate these capabilities into the deliverables.
Prior use of Big Data components and the ability to rationalize and align their fit for a business case.
Experience working with different data sources: flat files, XML, JSON, Avro files, and databases.
Experience in developing implementation plans and schedules, and preparing documentation for the jobs according to the business requirements.
Proven experience and ability to work with people across the organization, skilled at managing cross-functional relationships and communicating with leadership across multiple organizations.
Proven capability for strong written and oral communication, with the ability to synthesize, simplify, and explain complex problems to different audiences.

Good-to-Have:
Azure Data Engineer certifications.

Roles & Responsibilities:
Ability to integrate into a project team environment and contribute to project planning activities.
Lead ambiguous and complex situations to clear, measurable plans.
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Senior Associate

Job Description & Summary
We are seeking a talented and experienced Azure Data Engineer to join our growing team. In this role, you will be responsible for designing, building, and maintaining data pipelines and solutions on the Microsoft Azure platform. You will play a pivotal role in ensuring the smooth operation and efficiency of our data infrastructure, enabling data-driven decision making across the organization.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities
Design, develop, and implement robust and scalable data pipelines using Azure services such as Azure Data Factory, Azure Databricks, Azure Cosmos DB, Azure SQL Database, and other relevant services.
Develop and maintain data models, schemas, and data quality processes.
Collaborate with stakeholders to define and understand data requirements.
Develop and implement data governance policies and procedures.
Troubleshoot and resolve data-related issues and performance bottlenecks.
Stay up to date on the latest Azure technologies and best practices.
Contribute to the development and maintenance of documentation and technical standards.
Participate in code reviews and provide technical guidance to team members.

Mandatory Skill Sets: Spark framework, PySpark, Azure Databricks, Azure SQL Database, ADF, Storage Account, Azure Data Explorer
Preferred Skill Sets: Trino, Apache Airflow
Years of Experience Required: 4 to 8 years
Education Qualification: Bachelor's degree in computer science, engineering, or a related field.
Degrees/Field of Study Required: Bachelor of Engineering
Required Skills: Microsoft Azure Databricks, Microsoft Azure Data Explorer, PySpark
Optional Skills: Apache Web Server
Posted 4 weeks ago
6 - 11 years
6 - 16 Lacs
Bhopal, Hyderabad, Pune
Hybrid
Urgent opening for the Sr./Lead Azure Data Engineer position! Greetings from NewVision Software!

Experience: Min 6 yrs
CTC: As per company norms
Notice Period: Max 15 days
Skills required: ADF, Databricks, SQL, Python

Job Description
Position Summary: We are seeking a talented Sr./Lead Data Engineer with a strong background in data engineering to join our team. You will play a key role in designing, building, and maintaining data pipelines using a variety of technologies, with a focus on the Microsoft Azure cloud platform.

Responsibilities:
Design, develop, and implement data pipelines using Azure Data Factory (ADF) or other orchestration tools.
Write efficient SQL queries to extract, transform, and load (ETL) data from various sources into Azure Synapse Analytics.
Utilize PySpark and Python for complex data processing tasks on large datasets within Azure Databricks.
Collaborate with data analysts to understand data requirements and ensure data quality.
Gain hands-on experience in designing and developing data lakes and warehouses.
Implement data governance practices to ensure data security and compliance.
Monitor and maintain data pipelines for optimal performance and troubleshoot any issues.
Develop and maintain unit tests for data pipeline code.
Work collaboratively with other engineers and data professionals in an Agile development environment.

Preferred Skills & Experience:
Good knowledge of PySpark and working knowledge of Python
Full-stack Azure data engineering skills (Azure Data Factory, Databricks, and Synapse Analytics)
Experience handling large datasets
Hands-on experience in designing and developing data lakes and warehouses
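For flavor, here is a minimal sketch of the PySpark-on-Databricks ETL pattern this role describes: extract raw files from the lake, apply transformations, and load curated Delta output. The storage paths and column names are hypothetical placeholders, not details from the posting.

```python
# A minimal PySpark ETL sketch; paths and columns are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in the data lake
raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform: basic cleansing and a derived column
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("order_date").isNotNull())
         .withColumn("order_total", F.col("quantity") * F.col("unit_price")))

# Load: write as Delta, partitioned for downstream Synapse/BI consumption
(clean.write
 .format("delta")
 .mode("overwrite")
 .partitionBy("order_date")
 .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```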
Posted 4 weeks ago
2 - 3 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Data Engineer Sr. Analyst, ACS SONG
Management Level: Level 10 Sr. Analyst
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Python/Scala, PySpark/PyTorch
Good to have skills: Redshift

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals
Solving complex data problems to deliver insights that help the business achieve its goals
Sourcing data (structured and unstructured) from various touchpoints, and formatting and organizing it into an analyzable format
Creating data products for analytics team members to improve productivity
Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline
Fostering a culture of sharing, re-use, design, and operational efficiency of data and analytical solutions
Preparing data to create a unified database and building tracking solutions that ensure data quality
Creating production-grade analytical assets deployed using the guiding principles of CI/CD

Professional and Technical Skills
Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript
Extensive experience in data analysis (big data and Apache Spark environments), data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience working on these technologies
Experience in one of the many BI tools such as Tableau, Power BI, or Looker
Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs
Worked extensively in Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and Snowflake Cloud Data Warehouse

Additional Information
Experience working in cloud data warehouses like Redshift or Synapse
Certification in any one of the following or equivalent:
AWS: AWS Certified Data Analytics - Specialty
Azure: Microsoft Certified Azure Data Scientist Associate
Snowflake: SnowPro Core - Data Engineer
Databricks: Data Engineering

About Our Company | Accenture

Experience: 3.5-5 years of experience is required
Educational Qualification: Graduation
Posted 4 weeks ago
6 - 10 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
TCS has been a great pioneer in feeding the fire of young techies like you. We are a global leader in the technology arena, and there's nothing that can stop us from growing together. We are hiring for Azure Data Engineer/ADF Developer. We are delighted to invite you for a discussion to get to know more about you and your professional experience. The interview will be in person.

Venue details
Date: 24-May-2025
Registration Time: 9.30 AM to 12.30 PM
Location: TCS, Assotech Business Cresterra, Yamuna Tower, VI, Plot No.22, Noida-Greater Noida Expy, Sector 135, Noida-201301.

Job Description
Role: Azure Data Engineer/ADF Developer
Experience: 6 to 10 years
Location: Noida
Required Technical Skill Set: Azure Data Factory

Mandatory Technical Skill Set:
· Azure Data Factory
· SQL (advanced)
· SSIS
· Databricks
· SDFS
· Azure Data Factory (ADF) pipelines and PolyBase
· Good knowledge of Azure Storage, including Blob Storage, Data Lake, Azure SQL, Data Factory V2, and Databricks
· Able to work on streaming analytics and various data services in Azure, such as Data Flow
· Ability to develop extensible, testable, and maintainable code
· Good understanding of the challenges of enterprise software development
· Track record of delivering high-volume, low-latency distributed software solutions
· Experience of working in Agile teams
· Experience of the full software development lifecycle, including analysis, design, implementation, testing, and support
· Experience of mentoring more junior developers and directing/organizing the work of a team

Good to have skills:
· Experience of data warehouse applications
· Experience in TTH domain projects
· Knowledge of Azure DevOps is desirable
· Knowledge of CI/CD and DevOps practices is an advantage

Desired Competencies (Technical/Behavioral Competency)
Must-Have:
· Good know-how of SDFS, Azure Data Factory (ADF) pipelines, and PolyBase
· Good knowledge of Azure Storage, including Blob Storage, Data Lake, Azure SQL, Data Factory V2, and Databricks
· Able to work on streaming analytics and various data services in Azure, such as Data Flow
· Client-facing, technical role; assertive; team-member skills
· Good communication skills, both written and spoken

Good-to-Have:
· Experience of data warehouse applications
· Experience in TTH domain projects
· Knowledge of Azure DevOps is desirable
· Knowledge of CI/CD and DevOps practices is an advantage

Responsibility of / Expectations from the Role:
Ensure the accuracy of deliverables through quality assurance practice.
6+ years of experience in related fields, with a strong development background using Azure Data Engineering and Azure PaaS services.
Posted 4 weeks ago
8 - 10 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 8-10 years
Location: Pune, Mumbai, Bangalore, Noida, Chennai, Coimbatore, Hyderabad

JD: Databricks with Data Scientist experience
· 4 years of relevant work experience as a data scientist
· Minimum 2 years of experience in Azure Cloud using Databricks services, PySpark, Natural Language API, and MLflow
· Experience designing and building statistical forecasting models
· Experience ingesting data from APIs and databases
· Experience in data transformation using PySpark and SQL
· Experience designing and building machine learning models
· Experience designing and building optimization models, including expertise in statistical data analysis
· Experience articulating and translating business questions and using statistical techniques to arrive at an answer using available data
· Demonstrated skills in selecting the right statistical tools for a given data analysis problem
· Effective written and verbal communication skills
· Skillset: Python, PySpark, Databricks, MLflow, ADF
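As a hedged illustration of the MLflow experience this JD asks for, the sketch below trains a toy forecasting-style regressor and logs its parameters, metric, and model to an experiment. The experiment name, synthetic data, and model choice are assumptions for demonstration, not from the posting.

```python
# A minimal MLflow tracking sketch; experiment name and data are invented.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, noise=0.2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-forecasting-demo")
with mlflow.start_run():
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    # Log parameters, the evaluation metric, and the fitted model
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mae", mae)
    mlflow.sklearn.log_model(model, "model")
```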
Posted 4 weeks ago
10 - 15 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the company: With over 2.5 crore customers, over 5,000 distribution points and nearly 2,000 branches, IndusInd Bank is a universal bank with a widespread banking footprint across the country. IndusInd offers a wide array of products and services for individuals and corporates, including microfinance, personal loans, personal and commercial vehicle loans, credit cards, and SME loans. Over the years, IndusInd has grown ceaselessly and dynamically, driven by the zeal to offer our customers banking services at par with the highest quality standards in the industry. IndusInd is a pioneer in digital-first solutions that bring together the power of a next-gen digital product stack, customer excellence, and the trust of an established bank.

Job Purpose:
To work on implementing data modeling solutions
To design data flow and structure to reduce data redundancy and improve data movement among systems, defining a data lineage
To work in the Azure Data Warehouse
To work with large volumes of data integration

Experience
With overall experience between 10 and 15 years, the applicant must have a minimum of 8 to 11 years of hard-core professional experience in data modeling for large data warehouses with multiple sources.

Technical Skills
Expertise in the core skill of data modeling principles/methods, including conceptual, logical, and physical data models
Ability to utilize BI tools like Power BI, Tableau, etc. to represent insights
Experience in translating/mapping relational data models into XML and schemas
Expert knowledge of metadata management and relational & data modeling tools like ER/Studio, Erwin, or others
Hands-on relational, dimensional, and/or analytical experience (using RDBMS, dimensional, NoSQL, ETL, and data ingestion protocols)
Very strong in SQL queries
Expertise in performance tuning of SQL queries
Ability to analyze source systems and create source-to-target mappings
Ability to understand the business use case and create data models or joined data in the data warehouse
Preferred: experience in the banking domain and in building data models/marts for various banking functions
Good to have knowledge of:
- Azure PowerShell scripting or Python scripting for data transformation in ADF
- SSIS, SSAS, and BI tools like Power BI
- Azure PaaS components like Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse (DWH), PolyBase, ExpressRoute tunneling, etc.
- API integration

Responsibilities
Understand the existing data model, existing data warehouse design, and functional domain subject areas of data, documenting the as-is architecture and the proposed one.
Understand the existing ETL process and various sources, analyzing and documenting the best approach to design the logical data model where required.
Work with the development team to implement the proposed data model into a physical data model and build data flows.
Work with the development team to optimize the database structure with best practices, applying optimization methods.
Analyze, document, and implement re-use of the data model for new initiatives.
Interact with stakeholders, users, and other IT teams to understand the ecosystem and analyze for solutions.
Work on user requirements and create queries for creating consumption views for users from the existing DW data.
Train and lead a small team of data engineers.

Qualifications
Bachelor's degree in Computer Science or equivalent
Should have certifications in Data Modeling and Data Analysis
Good to have Azure Fundamentals and Azure Engineer certifications (AZ-900 or DP-200/201)

Behavioral Competencies
Should have excellent problem-solving and time management skills
Strong analytical thinking skills
Excellent communication skills; process-oriented with a flexible execution mindset
Strategic thinking with a research and development mindset
Clear and demonstrative communication
Efficiently identifies and solves issues
Identifies, tracks, and escalates risks in a timely manner

Selection Process:
Interested candidates are mandatorily required to apply through the listing on Jigya. Only applications received through Jigya will be evaluated further.
Shortlisted candidates may need to appear for an online assessment and/or a technical screening interview administered by Jigya on behalf of IndusInd Bank.
Candidates selected after the screening rounds will be processed further by IndusInd Bank.
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Overview
We are seeking a highly skilled Sr. ADF Data Engineer with strong expertise in Azure Data Factory (ADF), data warehousing, and SQL. The ideal candidate will bring robust ETL experience, a deep understanding of data warehousing concepts, and a knack for database performance optimization. A proactive, problem-solving mindset and excellent client communication skills are critical to succeed in this role.

Role Responsibilities
Design, develop, and maintain ETL processes using Azure Data Factory
Collaborate with cross-functional teams to gather requirements and deliver efficient data solutions
Optimize and tune data workflows for performance and scalability
Create and maintain comprehensive documentation for all data engineering processes
Assist in troubleshooting and resolving data-related issues
Ensure data quality and integrity across various data sources

Required Technical Skill Set
ADF (Azure Data Factory): Strong hands-on experience with pipelines, data flows, and integrations
SQL: Expertise in writing and optimizing queries, stored procedures, and packages
ETL: Comprehensive experience in Extract, Transform, Load processes across multiple sources (SQL Server, Synapse, etc.)

Skills: Azure Data Factory (ADF), documentation, ETL, data warehousing, performance optimization, SQL, data quality, troubleshooting
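To make the ADF orchestration concrete, here is a sketch of triggering and polling a pipeline run with the Azure SDK for Python (azure-mgmt-datafactory). The subscription, resource group, factory, pipeline, and parameter names are placeholders, not details from the posting.

```python
# A sketch of triggering and polling an ADF pipeline run from Python.
# Subscription, resource group, factory, and pipeline names are placeholders.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, "<subscription-id>")

run = client.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-prod",
    pipeline_name="pl_load_sales",
    parameters={"load_date": "2025-05-24"},
)

# Poll until the run reaches a terminal state
while True:
    status = client.pipeline_runs.get("rg-data", "adf-prod", run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status.status}")
```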
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth: bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring Data Warehouse professionals in the following areas:

Job Description: Senior Data Engineer
As a Senior Data Engineer, you will support the European World Area using the Windows and Azure suite of analytics and data platforms. The focus of the role is on the technical aspects and implementation of data gathering, integration, and database design. We look forward to seeing your application!

In This Role, Your Responsibilities Will Be
Data Ingestion and Integration: Collaborate with Product Owners and analysts to understand data requirements, and design, develop, and maintain data pipelines for ingesting, transforming, and integrating data from various sources into Azure Data Services.
Migration of existing ETL packages: Migrate existing SSIS packages to Synapse pipelines.
Data Modelling: Assist in designing and implementing data models, data warehouses, and databases in Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services.
Data Transformation: Develop ETL (Extract, Transform, Load) processes using SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or other relevant tools to prepare data for analysis and reporting.
Data Quality and Governance: Implement data quality checks and data governance practices to ensure the accuracy, consistency, and security of data assets.
Monitoring and Optimization: Monitor and optimize data pipelines and workflows for performance, scalability, and cost efficiency.
Documentation: Maintain comprehensive documentation of processes, including data lineage, data dictionaries, and pipeline schedules.
Collaboration: Work closely with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand their data needs and deliver solutions accordingly.
Azure Services: Stay updated on Azure data services and best practices to recommend and implement improvements in our data architecture and processes.

For This Role, You Will Need
3-5 years of experience in data warehousing with on-premises or cloud technologies
Strong practical experience with Synapse pipelines / ADF
Strong practical experience developing ETL packages using SSIS
Strong practical experience with T-SQL or any variant from other RDBMSs
A graduate degree in computer science or a relevant subject
Strong analytical and problem-solving skills
Strong communication skills in dealing with internal customers from a range of functional areas
Willingness to work flexible working hours according to project requirements
Technical documentation skills
Fluency in English

Preferred Qualifications That Set You Apart
Oracle PL/SQL
Experience working with Azure services like Azure Synapse Analytics and Azure Data Lake
Working experience with Azure DevOps, paired with knowledge of Agile and/or Scrum methods of delivery
Languages: French, Italian, or Spanish would be an advantage
Agile certification

Who You Are
You understand the importance and interdependence of internal customer relationships. You seek out experts and innovators to learn about the impact emerging technologies might have on your business. You focus on priorities and set stretch goals.

Our Offer to You
We understand the importance of work-life balance and are dedicated to supporting our employees' personal and professional needs. From competitive benefits plans and comprehensive medical care to equitable opportunities for growth and development, we strive to create a workplace that is supportive and rewarding. Depending on location, our flexible work-from-home policy allows you to make the best of your time, by combining quiet home office days with collaborative experiences in the office so that you can personalize your work-life mix. Moreover, our global volunteer employee resource groups will empower you to connect with peers that share the same interests, promote diversity and inclusion, and positively contribute to communities around us.

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided by technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
Flexible work arrangements, free spirit, and emotional positivity
Agile self-determination, trust, transparency, and open collaboration
All support needed for the realization of business goals
Stable employment with a great atmosphere and ethical corporate culture
Posted 4 weeks ago
0 years
0 Lacs
Greater Hyderabad Area
On-site
Centroid is looking for a new member to join our team. As a member, you will be coordinating with clients. The ideal candidate should possess knowledge of various technologies related to Oracle Applications, with practical hands-on experience on the technical track. You will be responsible for working along with the internal team members in the India office.

Job Description:
Min 10 years of experience in Oracle Apps Technical.
Create test plans, test cases, and test scripts, and perform functional testing.
Work closely with various business partners to deliver high-quality application solutions.
Write detailed technical design documents.
Must have upgrade experience.
Interact with Oracle via their formal Metalink SR process in order to secure assistance and solutions for problems.
Conduct and/or participate in requirement/analysis sessions.
Must be able to work with third-party systems and perform modifications, including E-Business Suite changes and the maintenance of various application interfaces.
Should have EBS upgrade knowledge as well as CEMLI object development experience, including integrations and conversions in Finance, Manufacturing, Supply Chain, and HR.
Ability to work independently.

Required Skillset:
Must have strong technical experience in SQL, PL/SQL, OTBI/BI and XML Publisher reports, Workflow, Unix, and Oracle Applications Framework (OAF and ADF).
SOAP & REST APIs; data conversion using FBDI.
Development using Sandbox, FRS reporting, integrations with third parties, and Security Console management/SSO.
Effective team player with excellent organizational and communication skills (written and verbal).
Must be able to deal independently with business users and external users, and be responsible for design, development, testing support, production deployment, and production support.

Preferred Skillset:
A Java skillset is an advantage.
Candidates with a 1-month notice period are preferred.
Candidates willing to join immediately are preferred.

Education Requirement: Bachelor's degree in Computer Science, Business Information Systems, or Computer Information Systems, or equivalent work experience
Posted 4 weeks ago
0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
What you'll do...
Manual and automation testing.
Handle calls with onsite counterparts.
Attend scrum ceremonies.
Support production during the season.
Coordinate with business on quality.
Collaborate with integrated system teams.

What you'll bring to the team...
Strong knowledge in database testing along with Playwright automation.
Good knowledge of the agile framework.
Good communication skills.
Must-have skills: Playwright, SQL, ADF, Azure DevOps
Good-to-have skills: Selenium, Agile Framework

Why work for us
At H&R Block, we understand that passion and creativity are the keys to true innovation. We provide an environment that is both fun and challenging, allowing our employees to grow and reach their full potential. We are a Great Place to Work certified company and are among the top 10 best workplaces for women in India. We are the No. 1 best workplace in India among mid-size companies (certified by GPTW, July 2024). H&R Block has always remained committed to being guided by a mission that is understood, embraced, and pursued by the entire organization: to help our clients achieve their financial objectives by serving as their tax and financial partner. By combining the knowledge of highly trained professionals with cognitive computing technology, we're offering our clients the most personalized tax experience ever. H&R Block offers both retail tax and online tax services. The Global Technology Center of H&R Block India came into existence in October 2017 in Trivandrum, Kerala. We started as a small company with 5 employees, and four years down the line we are a proud 1,000+ member workforce innovating and reinventing the software industry culture. Follow our LinkedIn page for the latest updates and news: LinkedIn
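Since the must-have list centers on Playwright automation, here is a minimal sketch of a Playwright check using the Python sync API; the URL, selectors, and credentials are hypothetical, not from the posting.

```python
# A minimal Playwright UI check; URL and selectors are illustrative only.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/login")
    page.fill("#username", "qa_user")
    page.fill("#password", "secret")
    page.click("button[type=submit]")
    # Assert the dashboard rendered after login
    assert page.locator("h1").inner_text() == "Dashboard"
    browser.close()
```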
Posted 4 weeks ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Role
Gartner is looking for a well-rounded and motivated developer to join its Conferences Technology & Insight Analytics team, which is responsible for developing the reporting and analytics that support its conference reporting operations.

What You Will Do
Collaborate with business stakeholders to design and build advanced analytics solutions for Gartner's Conference Technology business.
Execute our data strategy through the design and development of data platforms that deliver reporting, BI, and advanced analytics solutions.
Design and develop key analytics capabilities using MS SQL Server, Azure SQL Managed Instance, T-SQL, and ADF on the Azure platform.
Consistently improve and optimize T-SQL performance across the entire analytics platform.
Create, build, and implement comprehensive data integration solutions utilizing Azure Data Factory.
Analyse and solve complex business problems, breaking down the work into actionable tasks.
Develop, maintain, and document data dictionaries and data flow diagrams.
Build and enhance the regression test suite to monitor nightly ETL jobs and identify data issues (see the sketch after this posting).
Work alongside project managers and cross-functional teams to support a fast-paced Agile/Scrum environment.
Build optimized solutions and designs to handle big data.
Follow coding standards; build appropriate unit tests, integration tests, and deployment scripts; and review project artifacts created by peers.
Contribute to overall growth by suggesting improvements to the existing software architecture or introducing new technologies.

What You Will Need
5-6 years of experience as a Data Engineer. The candidate should have strong qualitative and quantitative problem-solving abilities and is expected to take ownership and accountability.
Must have:
Strong experience with SQL, including diagnosing and resolving load failures, constructing hierarchical queries, and efficiently analysing existing SQL code to identify and resolve issues, using Microsoft Azure SQL Database, SQL Server, and Azure SQL Managed Instance.
Strong technical experience with database performance and tuning, troubleshooting, and query optimization.
Technical experience with Azure Data Factory on the Azure platform: create and manage complex ETL pipelines to extract, transform, and load data from various sources, and enhance data workflows to improve performance, scalability, and cost-effectiveness.
Experience in cloud platforms and Azure technologies such as Azure Analysis Services, Azure Blob Storage, Azure Data Lake, and Azure Delta Lake.
Experience with data modelling, database design, data warehousing concepts, and Data Lake.
Ensure thorough documentation of data processes, configurations, and operational procedures.

Who You Are
Graduate/postgraduate in B.E./B.Tech, M.E./M.Tech, or MCA is preferred
Excellent communication and prioritization skills
Able to work independently or within a team, proactively, in a fast-paced Agile-Scrum environment
Strong desire to improve upon their skills in software development, frameworks, and technologies

Who are we?
At Gartner, Inc. (NYSE: IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we've grown to more than 20,000 associates globally who support ~15,000 client enterprises in ~90 countries and territories.
We do important, interesting and substantive work that matters. That's why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.

What makes Gartner a great place to work?
Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.

What do we offer?
Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive, working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.

The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity. Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company's career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.

Job Requisition ID: 98336

By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy

For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.
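The ETL regression-testing responsibility above might look something like the following pytest-style check, a sketch assuming pyodbc against Azure SQL; the connection string, table names, and the 1% tolerance are all illustrative assumptions.

```python
# A sketch of a nightly-load regression check; connection details and
# schema/table names are hypothetical placeholders.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 18 for SQL Server};"
            "SERVER=example.database.windows.net;DATABASE=conf_dw;"
            "UID=etl_test;PWD=***")

def row_count(cursor, table: str) -> int:
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]

def test_nightly_load_volumes():
    with pyodbc.connect(CONN_STR) as conn:
        cur = conn.cursor()
        staged = row_count(cur, "stg.conference_registrations")
        loaded = row_count(cur, "dw.fact_registrations")
        # Fail the suite if the warehouse dropped more than 1% of staged rows
        assert loaded >= staged * 0.99, f"loaded {loaded} of {staged} rows"
```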
Posted 4 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Summary: The Metrics Insights and Analytics team is responsible for building dashboards and analytical solutions using AI/ML, based on requirements from the business; providing predictive and prescriptive analytics using various delivery execution parameters and giving actionable insights to users; and automating processes using new-age machine learning algorithms.

Key Roles and Responsibilities:
Conceptualize, maintain, and automate dashboards as per the requirements
Automate existing processes to improve productivity and time to market
Enable decision making and action plan identification through metrics analytics
Conduct training and presentations
Connect with various stakeholders to understand business problems and provide solutions
Bring new-age solutions and techniques into the way of working

Skills:
Minimum 3-7 years of work experience on Power BI dashboards/Tableau and Python
Minimum 3-7 years of work experience on AI/ML development
Strong analytical skills; adept in solutioning and problem solving; inclination towards numbers
Experience of working on text analytics and NLP
Experienced in data cleansing, pre-processing data, and exploratory data analysis
Knowledge of Azure ADF, Excel macros, and RPA will be an advantage
Able to perform feature engineering, normalize data, and build correlation maps
Proficient in SQL
Hands-on experience in model operationalization and pipeline management
Capable of working with global teams
Good presentation and training skills

LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across more than 30 countries, LTIMindtree, a Larsen & Toubro Group company, combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more information, please visit https://www.ltimindtree.com/.

DEI Statement: LTIMindtree is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of race, ethnicity, nationality, gender, gender identity, gender expression, language, age, sexual orientation, religion, marital status, veteran status, socio-economic status, disability, or any other characteristic protected by applicable law.
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: PySpark Developer
Location: Hyderabad/Chennai
Duration: Full time

Job Description:
Key skills: PySpark, Spark, Python, Hive, SQL
- 6+ years of experience as a Data Engineer, with experience in building ETL/data pipelines with cloud technologies
- 4+ years of extensive experience in designing, developing, and implementing scalable data pipelines using Databricks to ingest, transform, and store structured and unstructured data
- 4+ years of experience with programming languages such as Python/PySpark and query languages like SQL
- 4+ years of experience in building metadata-driven data ingestion pipelines using ADF
- 4+ years of experience in analysing, optimizing, and tuning existing data pipelines for performance, reliability, and efficiency
- 2+ years of experience in implementing MLOps practices to streamline the deployment and management of machine learning models
- 2+ years of experience in utilizing Apache Airflow for job orchestration and workflow management
- Familiarity with CI/CD tools and practices for automating the deployment of data engineering solutions
- Experience in collaborating with data scientists, analysts, and other stakeholders to understand business requirements and translate them into technical solutions
- Knowledge of and experience in implementing security measures and standard processes to ensure data privacy and compliance with regulatory standards
- In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles

Thanks,
Siva
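As a rough illustration of the metadata-driven ingestion pattern mentioned above, the sketch below reads a control list of sources and lands each one in a bronze Delta table. The control-table contents, lake paths, and table names are invented for the example; in practice ADF would typically supply this metadata, for instance via a Lookup activity feeding a ForEach.

```python
# A metadata-driven ingestion sketch; control rows and paths are illustrative,
# and the bronze schema is assumed to already exist.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata_driven_ingest").getOrCreate()

# Control table: one row per source (normally maintained in SQL/Delta)
control_rows = [
    {"source_path": "abfss://raw@lake.dfs.core.windows.net/crm/customers/",
     "format": "parquet", "target_table": "bronze.customers"},
    {"source_path": "abfss://raw@lake.dfs.core.windows.net/erp/invoices/",
     "format": "json", "target_table": "bronze.invoices"},
]

for row in control_rows:
    df = spark.read.format(row["format"]).load(row["source_path"])
    # Append each source into its bronze Delta table
    (df.write.format("delta")
       .mode("append")
       .saveAsTable(row["target_table"]))
```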
Posted 4 weeks ago
2 - 8 years
0 Lacs
Bengaluru, Karnataka
Work from Office
Experience: 8-10 years
Location: Pune, Mumbai, Bangalore, Noida, Chennai, Coimbatore, Hyderabad

JD:
· 4 years of relevant work experience as a data scientist
· Minimum 2 years of experience in Azure Cloud using Databricks services, PySpark, Natural Language API, and MLflow
· Experience designing and building statistical forecasting models
· Experience ingesting data from APIs and databases
· Experience in data transformation using PySpark and SQL
· Experience designing and building machine learning models
· Experience designing and building optimization models, including expertise in statistical data analysis
· Experience articulating and translating business questions and using statistical techniques to arrive at an answer using available data
· Demonstrated skills in selecting the right statistical tools for a given data analysis problem
· Effective written and verbal communication skills
· Skillset: Python, PySpark, Databricks, MLflow, ADF

Job Types: Full-time, Permanent
Pay: From ₹1,500,000.00 per year
Schedule: Fixed shift

Application Question(s):
How soon can you join?
What is your current CTC?
What is your expected CTC?
What is your current location?
How many years of experience do you have in Databricks?
How many years of experience do you have in Python and PySpark?
How many years of experience do you have as a Data Scientist?

Experience: 8 years total (required)
Work Location: In person
Posted 4 weeks ago
3 years
0 Lacs
Pune, Maharashtra, India
On-site
Exp: 6-14 yrs
Work Mode: Hybrid
Location: Bangalore, Chennai, Kolkata, Pune and Gurgaon
Primary Skills: Python, PySpark, Azure Data Factory, Snowflake, Snowpipe, SnowSQL, Snowsight, Snowpark, ETL, and SQL. SnowPro certification is a plus.
Architect experience mandatory.

Primary Roles and Responsibilities
Develop modern data warehouse solutions using Snowflake, Databricks, and ADF.
Provide solutions that are forward-thinking in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix them.
Work with the business to understand reporting-layer needs and develop a data model to fulfill them.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with client architects and team members.
Orchestrate the data pipelines in the scheduler via Airflow.

Skills and Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have 6+ yrs of total IT experience and 3+ years' experience in data warehouse/ETL projects.
Expertise in Snowflake security, Snowflake SQL, and designing/implementing other Snowflake objects.
Hands-on experience with Snowflake utilities: SnowSQL, Snowpipe, Snowsight, and Snowflake connectors.
Deep understanding of star and snowflake dimensional modeling.
Strong knowledge of data management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
Should have hands-on experience in SQL and Spark (PySpark).
Experience in building ETL/data warehouse transformation processes.
Experience with open-source non-relational/NoSQL data repositories (incl. MongoDB, Cassandra, Neo4j).
Experience working with structured and unstructured data, including imaging and geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, troubleshooting, and query optimization.
Databricks Certified Data Engineer Associate/Professional certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Should have experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with high attention to detail.

Skills: Python, PySpark, SQL, PL/SQL, Azure Data Factory (ADF), Azure, Snowflake, SnowSQL, Snowpipe, Snowsight, Snowpark, SnowPro, Databricks, ETL, data engineering, data warehousing, data management, pipelines, NoSQL (MongoDB, Cassandra, Neo4j), RDBMS, Terraform, CircleCI, Git, Unix shell scripting
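A brief sketch of the Snowpipe-adjacent loading this role involves: running a COPY INTO from a named stage via the Snowflake Python connector. The account, credentials, stage, and table names are placeholders; in production, Snowpipe would run the same COPY automatically on file arrival.

```python
# A sketch of a stage-to-table load in Snowflake; all identifiers and
# credentials are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Load any new files from the named stage into the target table;
    # Snowpipe automates the same COPY when files land in the stage.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results
finally:
    conn.close()
```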
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Reporting Data Engineer
Join EY as a MARS Data Engineer and be at the forefront of providing and implementing innovative data insights, data products, and data services. MARS is a data platform providing custom data insights, DaaS, and DaaP for a variety of EY departments and staff. We leverage software development practices to develop intricate data insights and data products.

Your Key Responsibilities
As a member of the MARS team, you will play a critical role in our mission of providing innovative data insights and in the operations and support of the MARS data platform. This includes supporting customers, internal team members, and management. Operations and support include the estimating, design, development, and delivery of data products and services. You will contribute your creative solutions and knowledge to our data platform, which features 2 TB of mobile device data daily (300K+ devices). Our platform empowers our product managers and helps enable our teams to build a better working world.

As a reporting engineer on the MARS team, the following activities are expected:
Collaborate closely with the product manager to align activities to timelines and deadlines
Proactively suggest new ideas and solutions, driving them to implementation with minimal guidance on technical delivery
Provide input to the MARS roadmap and actively participate to bring it to life
Collaborate with the Intune engineering team to get a clear understanding of the mobile device lifecycle and its relationship to Intune data and reporting
Serve as the last level of support for all MARS data reporting questions and issues

Participate and contribute in the activities below:
Customer discussions and requirement-gathering sessions
Application reports (daily, weekly, monthly, quarterly, annually)
Custom reporting for manual reports, dashboards, exports, APIs, and semantic models
Customer service engagements
Daily team meetings
Work estimates and daily status
Data and dashboard monitoring and troubleshooting
Automation
Data management and classification
Maintaining design documentation for data schemas, data models, the data catalogue, and related products/services
Monitoring and integrating a variety of data sources
Maintaining and developing custom data quality tools

Skills and attributes for success
Analytical Ability: Strong analytical skills in supporting core technologies, particularly in managing large user bases, to effectively troubleshoot and optimize data solutions.
Communication Skills: Excellent written and verbal communication skills, with the ability to articulate complex technical concepts clearly to both technical and non-technical stakeholders. Proficiency in English is required, with additional languages being a plus.
Interpersonal Skills: Strong interpersonal skills, sound judgment, and tact to foster collaboration with colleagues and customers across diverse cultural backgrounds.
Creative Problem-Solving: Ability to conceptualize innovative solutions that add value to end users, particularly in the context of mobile applications and services.
Self-Starter Mentality: A proactive and self-motivated approach to work, with the ability to take initiative and drive projects forward independently.
Documentation Skills: Clear and concise documentation skills, ensuring that all processes, solutions, and communications are well documented for future reference.
Organizational Skills: The ability to define project plans, execute them, and manage ongoing risks and communications throughout the project lifecycle.
Cross-Cultural Awareness: Awareness of and sensitivity to cross-cultural dynamics, enabling effective collaboration with global teams and clients.
User Experience Focus: Passionate about improving user experience, with an understanding of how to measure, monitor, and enhance user satisfaction through feedback and analytics.

To qualify for the role, you must have at least three years of experience in the following technologies and methodologies:
Hands-on experience with Microsoft Intune data and mobile device and application management data (MSFT APIs, Graph, and IDW)
Proven experience in mobile platform engineering or a related field
Strong understanding of mobile technologies and security protocols, particularly within an Intune-based environment
Experience with Microsoft Intune, including mobile device and application management
Proficiency in supporting Modern Workplace tools and resources
Experience with iOS and Android operating systems
Proficiency in PowerShell scripting for automation and management tasks
Ability to operate proactively and independently in a fast-paced environment
A solution-oriented mindset with the capability to design and implement creative mobile solutions, and the ability to suggest and implement solutions that meet EY's requirements
Ability to work UK working hours

Specific technology skills include the following:

Technical Skills
Power BI: semantic models, advanced dashboards
Power BI templates
Intune reporting and Intune data
Intune compliance
Intune device
Intune policy management
Intune metrics
Intune monitoring
Splunk data and reporting
Sentinel data and reporting
HR data and reporting
Mobile Defender data and reporting
AAD (Azure Active Directory)
Data quality and data assurance
Databricks
Web analytics
Mobile analytics
Azure Data Factory
Azure pipelines/Synapse
Azure SQL DB/Server
ADF automation
Azure Kubernetes Service (AKS)
Key Vault management
Azure Monitoring
App Proxy and Azure Front Door data exports
API development
Python, SQL, KQL, Power Apps
MSFT Intune APIs (Export, App Install)
Virtual machines
SharePoint: general operations
Data modeling
ETL and related technologies

Ideally, you'll also have the following:
Strong communication skills to effectively liaise with various stakeholders
A proactive approach to suggesting and implementing new ideas
Familiarity with the latest trends in mobile technology
Ability to explain very technical topics to non-technical stakeholders
Experience in managing and supporting large mobile environments
Testing and quality assurance: ensure our mobile platform meets quality, performance, and security standards
Implementation of new products and/or service offerings
Experience working in a large, global environment
XML data formats
Agile delivery
Object-oriented design and programming
Software development
Mobile

What we look for: A person who demonstrates a commitment to integrity, initiative, collaboration, and efficiency, with three or more years in the field of data analytics and Intune data reporting.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations (Argentina, China, India, the Philippines, Poland and the UK) and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Data Engineer
Department: Engineering
Type of Position: Full Time

About us: Arrise Solutions India Pvt. Ltd. is a leading content provider to the iGaming and betting industry, offering a multi-product portfolio that is innovative, regulated, and mobile-focused. We strive to create the most engaging and evocative experience for customers globally across a range of products, including slots, live casino, sports betting, virtual sports, and bingo.

We are seeking a talented and experienced Data Engineer who will work in a global team of data scientists, delivering their pipelines into production in the most efficient way possible. You will also implement monitoring and alert systems. The ideal candidate should have in-depth knowledge of and experience with Python, Azure/AWS, data storage, and data pipelines.

Key Responsibilities
Create and manage ETL workflows using Python and relevant libraries (e.g., Pandas, NumPy) for high-volume data processing (a minimal sketch follows this posting).
Monitor and optimize data workflows to reduce latency, maximize throughput, and ensure high-quality data availability.
Develop REST API integrations and Python scripts to automate data exchanges with internal systems and BI dashboards.
Implement validation processes and address anomalies or performance bottlenecks in real time.
Design, develop, and maintain data engineering pipelines for machine learning projects, ensuring high reliability and scalability.
Collaborate with cross-functional teams across data science and engineering to come up with solutions to complex problem statements.
Automate existing workflows within the Data Science team.

Required Skills and Qualifications
Bachelor's or master's degree in computer science, engineering, or a related field.
Advanced Python proficiency with data libraries (Pandas, NumPy, etc.).
Deep understanding of ETL, reporting, cloud (Azure), and DS technologies.
3-5 years of professional experience in data engineering, ETL development, or similar roles.
Experience in Azure Data Factory, Databricks, Azure Data Lake, and Azure SQL Server.
Configuration and deployment of ADF packages.
Experience working with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL solutions (e.g., MongoDB).
Experience with version control (Git) and continuous integration practices.
Prior experience in handling very large datasets across different business functions.
Excellent problem-solving, analytical, and communication skills.

Preferred Qualifications:
Extensive experience with the Azure ecosystem, particularly Azure data engineering and machine learning.
Experience developing computer vision, text, audio, and/or tabular data models.
Strong proficiency in GitLab CI, Jenkins, Grafana, and Docker.
Excellent software engineering skills in API design and development, and concurrency design skills.

If you are a skilled Data Engineer with a passion for working in a fast-paced environment, an eye for detail, and a readiness to experiment with new things, we encourage you to apply and be part of our dynamic and innovative team and organization.

What We Offer
Competitive compensation depending on experience
Opportunities for professional and personal development
Opportunities to progress within a dynamic team
The chance to work with close and collaborative colleagues
Comprehensive health coverage

Our Values
PERSISTENCE: We never give up and are determined to be the best at what we do.
RESPECT: We value and respect our clients, their players, and our team members, promoting professionalism, integrity, and fairness without compromise.
OWNERSHIP: We take ownership of our work and consistently deliver in a reliable manner, always providing the highest level of quality.
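Here is a minimal sketch of the Pandas-based ETL-with-REST-integration workflow named in the posting above; the API endpoint, JSON fields, and output path are hypothetical placeholders.

```python
# A minimal Pandas ETL step fed by a REST API; all names are illustrative.
import pandas as pd
import requests

# Extract: pull raw records from an internal REST endpoint
resp = requests.get("https://api.example.internal/v1/bets", timeout=30)
resp.raise_for_status()
df = pd.DataFrame(resp.json()["results"])

# Transform: type coercion, validation, and a derived metric
df["placed_at"] = pd.to_datetime(df["placed_at"], utc=True)
df = df.dropna(subset=["bet_id", "stake"])
df["net_result"] = df["payout"].fillna(0) - df["stake"]

# Load: write a partition-friendly Parquet file for the BI layer
out = f"bets_{pd.Timestamp.now(tz='UTC'):%Y%m%d}.parquet"
df.to_parquet(out, index=False)
print(f"wrote {len(df)} rows to {out}")
```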
Posted 4 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Position
The Business Analyst - Operations Performance is part of the Technical Services organization that sits within the Chevron ENGINE Center and is responsible for delivering data solutions that meet the needs of Chevron's asset management, production accounting, and operational workflows. This role will oversee the development of data products from start to finish to ensure they meet customer expectations. The Business Analyst - Operations Performance will be the face of the Data team and the voice of the customer for the development teams.

Key Responsibilities
Product Vision and Strategy:
Develop and communicate a clear product vision and roadmap.
Define product goals and key performance indicators (KPIs).
Translate business objectives into actionable product requirements.

Product Backlog Management
Create and maintain a prioritized product backlog.
Define user stories, epics, and acceptance criteria for backlog items.
Prioritize features based on business value and customer needs.

Data Product Development
Ensure alignment with Chevron's architectural guidelines by leveraging architecture guidance.
Ensure the team adheres to the best-practice data product development lifecycle.

Collaboration and Communication
Facilitate communication between the development team, stakeholders, and customers.
Conduct user research and gather customer feedback.
Present product updates and the roadmap to stakeholders.

Agile Development Process
Participate in sprint planning, reviews, and retrospectives.
Collaborate with the development team to ensure sprint goals are met.
Make timely decisions to address issues and adapt to changing priorities.

Quality Assurance
Ensure product quality by reviewing deliverables against acceptance criteria.
Identify and address potential risks and issues.

Learning & Development Opportunities
Exposure to functional workflows in operations, production engineering, and facilities engineering.

Required Qualifications
Bachelor's degree in a related engineering discipline (mechanical, chemical, etc.) (B.E./B.Tech.) or computer science from a deemed/recognized (AICTE) university
Experience being a liaison between technical teams and business stakeholders
Critical thinking and practical problem solving
Understanding of data management, data storage, and data infrastructure
Demonstrable experience in SQL querying and modern data warehousing

Preferred Qualifications
5+ years of experience as a Technical Product Owner or Project Manager in the data management space
3+ years of experience in the development of data products in a cloud environment
Good understanding of the O&G business and business workflows
Azure cloud and Databricks experience
An outcome-focused attitude
A high degree of technical acumen in SQL, Spark, ADF, Databricks, and Power BI

Chevron ENGINE supports global operations, supporting business requirements across the world. Accordingly, the work hours for employees will be aligned to support business requirements. The standard work week will be Monday to Friday. Working hours are 8:00 AM to 5:00 PM or 1:30 PM to 10:30 PM.

Chevron participates in E-Verify in certain locations as required by law.
Posted 4 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Introduction

A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat. Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.

Your Role And Responsibilities
Take full ownership of, and deliver, a component or functionality.
Support the team in delivering project features with high quality, and provide technical guidance.
Work effectively both individually and with team members toward customer satisfaction and success.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
SQL
ADF
Azure Databricks

Preferred Technical And Professional Experience
PostgreSQL, MSSQL
Eureka, Hystrix, Zuul/API Gateway
In-memory storage
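As one way to picture the SQL + ADF + Azure Databricks stack this role lists, the hedged sketch below shows a PySpark job of the kind an ADF pipeline might trigger on Databricks: read raw data from a lake path, apply the business rule in SQL, and write a Delta table. The storage paths and table names are invented for the example.

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession is already provided as `spark`; building one
# here keeps the sketch self-contained when run elsewhere.
spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Hypothetical ADLS location; an ADF pipeline would typically land raw
# files here before triggering this job.
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")
raw.createOrReplaceTempView("orders_raw")

# Express the business rule in SQL, matching the role's skill list.
curated = spark.sql("""
    SELECT customer_id,
           CAST(order_ts AS DATE) AS order_date,
           SUM(amount)            AS total_amount
    FROM orders_raw
    WHERE status = 'COMPLETED'
    GROUP BY customer_id, CAST(order_ts AS DATE)
""")

# Delta is the default table format on Databricks.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.daily_orders")
```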
Posted 1 month ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Reporting Data Engineer

Join EY as a MARS Data Engineer and be at the forefront of providing and implementing innovative data insights, data products, and data services. MARS is a data platform providing custom data insights, DaaS and DaaP for a variety of EY departments and staff. We leverage software development practices to develop intricate data insights and data products.

Your Key Responsibilities

As a member of the MARS team, you will play a critical role in our mission of providing innovative data insights and in the operation and support of the MARS data platform. This includes supporting customers, internal team members, and management. Operations and support include estimating, designing, developing and delivering data products and services. You will contribute your creative solutions and knowledge to a platform that ingests 2TB of mobile device data daily (300K+ devices). Our platform empowers our product managers and helps enable our teams to build a better working world.

As a reporting engineer on the MARS team, you will be expected to:
Collaborate closely with the product manager to align activities to timelines and deadlines.
Proactively suggest new ideas and solutions, driving them to implementation with minimal guidance on technical delivery.
Provide input to the MARS roadmap and actively participate in bringing it to life.
Collaborate with the Intune engineering team to gain a clear understanding of the mobile device lifecycle and its relationship to Intune data and reporting.
Serve as the last level of support for all MARS data reporting questions and issues.

Participate and contribute in the following activities:
Customer discussions and requirement-gathering sessions
Application reports (daily, weekly, monthly, quarterly, annually)
Custom reporting for manual reports, dashboards, exports, APIs, and semantic models
Customer service engagements
Daily team meetings
Work estimates and daily status
Data and dashboard monitoring and troubleshooting
Automation
Data management and classification
Maintaining design documentation for data schemas, data models, the data catalogue, and related products/services
Monitoring and integrating a variety of data sources
Maintaining and developing custom data quality tools

Skills and attributes for success

Analytical ability: Strong analytical skills in supporting core technologies, particularly in managing large user bases, to effectively troubleshoot and optimize data solutions.
Communication skills: Excellent written and verbal communication skills, with the ability to articulate complex technical concepts clearly to both technical and non-technical stakeholders. Proficiency in English is required, with additional languages being a plus.
Interpersonal skills: Strong interpersonal skills, sound judgment, and tact to foster collaboration with colleagues and customers across diverse cultural backgrounds.
Creative problem-solving: Ability to conceptualize innovative solutions that add value to end users, particularly in the context of mobile applications and services.
Self-starter mentality: A proactive and self-motivated approach to work, with the ability to take initiative and drive projects forward independently.
Documentation skills: Clear and concise documentation skills, ensuring that all processes, solutions, and communications are well documented for future reference.
Organizational skills: The ability to define project plans, execute them, and manage ongoing risks and communications throughout the project lifecycle.
Cross-cultural awareness: Awareness of and sensitivity to cross-cultural dynamics, enabling effective collaboration with global teams and clients.
User experience focus: Passion for improving user experience, with an understanding of how to measure, monitor, and enhance user satisfaction through feedback and analytics.

To qualify for the role, you must have at least three years of experience with the following technologies and methodologies:
Hands-on experience with Microsoft Intune data and Mobile Device and Application Management data (Microsoft APIs, Graph and IDW)
Proven experience in mobile platform engineering or a related field
Strong understanding of mobile technologies and security protocols, particularly within an Intune-based environment
Experience with Microsoft Intune, including mobile device and application management
Proficiency in supporting Modern Workplace tools and resources
Experience with iOS and Android operating systems
Proficiency in PowerShell scripting for automation and management tasks
Ability to operate proactively and independently in a fast-paced environment
A solution-oriented mindset, with the ability to design, suggest, and implement creative mobile solutions that meet EY's requirements
Ability to work UK working hours

Specific technology skills include the following:

Technical Skills
Power BI – semantic models, advanced dashboards
Power BI templates
Intune reporting and Intune data
Intune compliance
Intune device
Intune policy management
Intune metrics
Intune monitoring
Splunk data and reporting
Sentinel data and reporting
HR data and reporting
Mobile Defender data and reporting
AAD – Azure Active Directory
Data quality and data assurance
Databricks
Web analytics
Mobile analytics
Azure Data Factory
Azure Pipelines/Synapse
Azure SQL DB/Server
ADF automation
Azure Kubernetes Service (AKS)
Key Vault management
Azure Monitoring
App Proxy and Azure Front Door data exports
API development
Python, SQL, KQL, Power Apps
Microsoft Intune APIs (Export, App Install)
Virtual machines
SharePoint – general operations
Data modeling
ETL and related technologies

Ideally, you'll also have the following:
Strong communication skills to effectively liaise with various stakeholders
A proactive approach to suggesting and implementing new ideas
Familiarity with the latest trends in mobile technology
Ability to explain very technical topics to non-technical stakeholders
Experience in managing and supporting large mobile environments
Testing and quality assurance – ensuring our mobile platform meets quality, performance and security standards
Implementation of new products and/or service offerings
Experience working in a large global environment
XML data formats
Agile delivery
Object-oriented design and programming
Software development
Mobile

What we look for: A person who demonstrates a commitment to integrity, initiative, collaboration and efficiency, with three or more years in the field of data analytics and Intune data reporting.

What We Offer

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
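Because the role centers on Intune reporting via Microsoft APIs, here is a small, hedged Python sketch of that pattern: acquire an app-only Graph token and page through the managed-devices endpoint. The tenant and app credentials are placeholders, and the selected fields are an assumption; `deviceManagement/managedDevices` is the standard Graph resource for Intune device inventory.

```python
import requests

# Placeholder credentials -- supply your own app registration values.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

def get_token() -> str:
    """Client-credentials flow against Azure AD for an app-only Graph token."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://graph.microsoft.com/.default",
            "grant_type": "client_credentials",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_managed_devices(token: str):
    """Page through Intune managed devices, following @odata.nextLink."""
    url = ("https://graph.microsoft.com/v1.0/deviceManagement/managedDevices"
           "?$select=deviceName,operatingSystem,complianceState")
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        page = requests.get(url, headers=headers, timeout=30)
        page.raise_for_status()
        body = page.json()
        yield from body.get("value", [])
        url = body.get("@odata.nextLink")  # None when there are no more pages

if __name__ == "__main__":
    for device in list_managed_devices(get_token()):
        print(device["deviceName"], device["complianceState"])
```

A feed like this is what typically lands behind the Power BI semantic models and compliance dashboards the skills list describes.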
Posted 1 month ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description

About the role: As a Senior Data Engineer, you will be responsible for building and supporting large-scale data architectures that provide information to downstream systems and business users. We are seeking an innovative and experienced individual who can aggregate and organize data from multiple sources to streamline business decision-making. In your role, you will collaborate closely with Data Engineer Leads and partners to establish and maintain data platforms that support front-end analytics. Your contributions will inform Takeda's dashboards and reporting, providing insights to stakeholders throughout the business.

In this role, you will be part of the Digital Insights and Analytics team. This team drives business insights through IT data analytics techniques such as pattern recognition, AI/ML, and data modelling, to analyse and interpret the organization's data with the purpose of drawing conclusions about information and trends. This role will work closely with the Tech Delivery Lead and Data Engineer Junior, both located in India. The role aligns to the Data & Analytics chapter of the ICC, will be part of the PDT Business Intelligence pod, and will report to the Data Engineering Lead.

How you will contribute:
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS/Azure native technologies, to support continuing increases in data source, volume, and complexity.
Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data environment.
Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

Minimum Requirements/Qualifications:
Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Experience building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines
Experience designing and developing ETL pipelines using tools such as IICS, DataStage, Ab Initio, Talend, etc.
Proven track record of designing and implementing complex data solutions
Demonstrated understanding and experience using:
  Data engineering programming languages (e.g., Python, SQL)
  Distributed data frameworks (e.g., Spark)
  Cloud platform services (AWS/Azure preferred)
  Relational databases
  DevOps and continuous integration
  AWS services such as Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
  Azure services such as ADF, ADLS, etc.
  Data lakes and data warehouses
  Databricks/Delta Lakehouse architecture
  Code management platforms such as GitHub, GitLab, etc.
Understanding of database architecture, data modelling concepts, and administration
Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines
Applies the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Proficiency in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals
Strong organizational skills with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities

Preferred requirements:
Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field
Knowledge of CDK
Experience with the IICS data integration tool
Job orchestration tools such as Tidal, Airflow, or similar
Knowledge of NoSQL
Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous
Databricks Certified Data Engineer Associate
AWS/Azure Certified Data Engineer

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
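The hands-on Spark Structured Streaming requirement is the most concrete technical item in this posting, so here is a minimal, hedged PySpark sketch of a streaming ETL pipeline: read newly arriving JSON files as a stream, aggregate with a watermark, and write continuously to a Delta location with checkpointing. The schema, paths, and trigger interval are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("streaming-etl").getOrCreate()

# Streaming file sources require an explicit schema; this one is illustrative.
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Treat newly arriving JSON files in the landing folder as a stream.
orders = (spark.readStream
          .schema(schema)
          .json("/landing/orders/"))

# Windowed aggregation with a watermark to bound state for late data.
revenue = (orders
           .withWatermark("event_time", "10 minutes")
           .groupBy(F.window("event_time", "5 minutes"))
           .agg(F.sum("amount").alias("revenue")))

# Continuously write results to Delta; the checkpoint location tracks
# progress so the job can restart safely after failure.
query = (revenue.writeStream
         .format("delta")
         .outputMode("append")  # emit each window once the watermark finalizes it
         .option("checkpointLocation", "/checkpoints/revenue/")
         .trigger(processingTime="1 minute")
         .start("/curated/revenue/"))

query.awaitTermination()
```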
Posted 1 month ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

The Sr. Developer / Tech Lead will play a key role in collaborating with stakeholders to design, develop, and deploy robust cloud-based, microservices-driven applications. The role demands strong technical leadership, hands-on development, and a proactive approach to problem-solving and team collaboration.

Duties And Responsibilities
Collaborate with product owners and architects to analyze business and technical problems and architect scalable solutions.
Design and develop cloud-native applications with clearly defined DevOps processes and release strategies.
Develop microservices-based applications using modern frameworks and technologies.
Implement solutions using a Test-Driven Development (TDD) approach.
Provide hands-on support to engineers by reviewing and troubleshooting code.
Continuously improve code quality through regular code reviews.
Identify and manage technical challenges and constraints proactively.
Solve complex performance and architectural challenges.
Create and maintain high-quality technical documentation.
Work effectively in Agile/Scrum teams and participate in all relevant ceremonies.
Lead Proof of Concept (PoC) development to validate architectural decisions and mitigate technical risks.

Skills And Competencies

Required Technical Skills:
Extensive hands-on experience with:
  C#, .NET Core, Microservices, Web API
  Azure services: Service Bus, AKS, Function Apps, Azure Data Factory (ADF – pipelines, data flows, triggers, linked services)
Strong understanding of:
  TDD methodology
  CI/CD pipelines and deployment processes
  Object-oriented programming and enterprise-level entity relationships
Proficient in working with:
  SQL Server, Azure SQL, Cosmos DB
  ETL processes, Data Lake, Blob Storage
  RESTful APIs, JSON, XML
Familiarity with:
  Docker and Kubernetes
  Cloud environments (especially Azure)
  Responsive web development and cross-platform architectures
Solid grasp of effort estimation, functional/technical specs, and milestone planning

Soft Skills
Strong problem-solving and analytical skills
Excellent verbal and written communication
Proactive and collaborative team player

Preferred Qualifications (Nice To Have)
Experience in Oracle Fusion Cloud migration (data extraction, transformation, integration)
Basic knowledge of Finance & Accounting, especially around enterprise system migrations (e.g., chart of accounts, sub-ledgers, financial reporting structures)
Experience in the Retail domain

Skills: C#, .NET Core, Microservices, Web API
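The ADF items listed here (pipelines, triggers, linked services) are often exercised programmatically; as one hedged illustration, the sketch below uses the Azure management SDK for Python to start a pipeline run and poll it to completion. All resource names are placeholders, and a C#-first team would more likely call the equivalent .NET SDK; the management-plane operations are the same.

```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers -- substitute your own subscription and resources.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "rg-data-platform"
FACTORY_NAME = "adf-example"
PIPELINE_NAME = "pl_ingest_orders"

credential = DefaultAzureCredential()
client = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

# Kick off the pipeline, passing runtime parameters if the pipeline defines them.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-01"},
)

# Poll the run until ADF reports a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(30)

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
```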
Posted 1 month ago
The job market for ADF (Oracle Application Development Framework) professionals in India is growing significantly, with numerous opportunities available for job seekers in this field. ADF is a popular framework for building enterprise Java applications, and companies across various industries are actively looking for skilled professionals to join their teams.
Here are five major cities in India with high demand for ADF professionals:
- Bangalore
- Hyderabad
- Pune
- Chennai
- Mumbai
The estimated salary range for ADF professionals in India varies by experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
In the ADF job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Technical Lead, and Architect. As professionals gain more experience and expertise in ADF, they can progress to higher-level positions with greater responsibilities.
In addition to ADF expertise, professionals in this field are often expected to have knowledge of related technologies such as Java, Oracle Database, SQL, JavaScript, and web development frameworks like Angular or React.
Here are sample interview questions for ADF roles, categorized by difficulty level:

Basic:
- What is ADF, and what are its key features?
- What is the difference between ADF Faces and ADF Task Flows?

Medium:
- Explain the lifecycle of an ADF application.
- How do you handle exceptions in ADF applications?

Advanced:
- Discuss the advantages of using ADF Business Components.
- How would you optimize performance in an ADF application?
As you explore job opportunities in the ADF market in India, make sure to enhance your skills, prepare thoroughly for interviews, and showcase your expertise confidently. With the right preparation and mindset, you can excel in your ADF career and secure rewarding opportunities in the industry. Good luck!