16.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Principal Cloud Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail and Auto, Supply Chain, and Finance.

The opportunity
We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as of a growing Data and Analytics team.

Your Key Responsibilities
- Drive Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
- Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [16-20 years]
- Understand current and future state enterprise architecture.
- Contribute to various technical streams during project implementation.
- Provide product- and design-level technical best practices.
- Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions.
- Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
- Recommend design alternatives for the data ingestion, processing and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (a minimal sketch follows this listing).

Skills And Attributes For Success
- Experience architecting highly scalable solutions on Azure, AWS, and GCP.
- Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components.
- Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
- Hands-on programming experience in Apache Spark using Python/Scala, and in Spark Streaming.
- Hands-on experience with major components like cloud ETL services, Spark, and Databricks.
- Experience working with NoSQL in at least one of these data stores: HBase, Cassandra, MongoDB.
- Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
- Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
- Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
- Good knowledge of Apache Kafka and Apache Flume.
- Experience in enterprise-grade solution implementations.
- Experience in performance benchmarking enterprise applications.
- Experience in data security (in motion and at rest).
- Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
- A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent communication skills, written and verbal, formal and informal.
- Ability to multi-task under pressure and work independently with minimal supervision.
- A team-player attitude; you enjoy working in a cooperative and collaborative team environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- Responsibility for evaluating technical risks and mapping out mitigation strategies.
- Working knowledge of at least one cloud platform: AWS, Azure, or GCP.
- Excellent business communication, consulting, and quality process skills.
- Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domain.
- Minimum 7 years of hands-on experience in one or more of the above areas.
- Minimum 10 years of industry experience.

Ideally, you’ll also have
- Strong project management skills.
- Client management skills.
- Solutioning skills.
- The innate quality to become the go-to person for any marketing, pre-sales, and solution accelerator within the practice.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
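The streaming-ingestion responsibility above usually reduces to a Spark Structured Streaming job reading from Kafka. Below is a minimal, hedged sketch of such a program, not EY's actual implementation: the broker address, topic name, and sink paths are hypothetical placeholders, and it assumes the spark-sql-kafka connector is on the classpath (bundled on Databricks, otherwise added via --packages).

```python
# Minimal sketch: real-time ingestion with Spark Structured Streaming + Kafka.
# Broker, topic, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingestion-sketch").getOrCreate()

# Subscribe to a Kafka topic as an unbounded streaming DataFrame.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # hypothetical broker
    .option("subscribe", "events")                        # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings for downstream parsing.
messages = raw.select(
    col("key").cast("string"),
    col("value").cast("string"),
    col("timestamp"),
)

# Land the stream as Parquet files; the checkpoint directory is what lets the
# query restart without duplicating or losing output.
query = (
    messages.writeStream
    .format("parquet")
    .option("path", "/tmp/ingest/events")              # hypothetical sink
    .option("checkpointLocation", "/tmp/ingest/_chk")  # hypothetical checkpoint
    .start()
)
query.awaitTermination()
```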
Posted 1 week ago
15.0 years
0 Lacs
Bhubaneshwar
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture strategy. You will be actively involved in problem-solving and providing insights that contribute to the successful implementation of data solutions, ensuring that the data platform meets the evolving needs of the organization and its stakeholders.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities and foster a culture of continuous improvement.
- Monitor and evaluate the performance of data platform components, making recommendations for enhancements and optimizations.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Ability to work with large datasets and perform data analysis.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
Posted 1 week ago
15.0 years
0 Lacs
Bhubaneshwar
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems (a minimal PySpark sketch follows this listing). You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages. Your role will be pivotal in ensuring that data is accessible, reliable, and ready for analysis, contributing to informed decision-making within the organization.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture and data models.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform.
- Good to have: experience with Apache Spark and data lake architectures.
- Strong understanding of ETL processes and data integration techniques.
- Familiarity with data quality frameworks and data governance practices.
- Experience with cloud platforms such as AWS or Azure.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Bhubaneswar office.
- 15 years of full-time education is required.
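As a hedged illustration of the pipeline work this listing describes, here is a minimal batch ETL sketch in PySpark, not the actual project code: the paths, column names, and the single quality rule are hypothetical, and the Delta sink assumes a Databricks or other Delta-enabled runtime.

```python
# Minimal batch ETL sketch: extract CSV, apply light cleansing, load to Delta.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV files with a header row.
orders = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/raw/orders/")  # hypothetical source path
)

# Transform: deduplicate on the business key, drop invalid rows, derive a
# partition column. The amount > 0 filter stands in for a real quality rule.
clean = (
    orders
    .dropDuplicates(["order_id"])                     # hypothetical key column
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))  # hypothetical timestamp
)

# Load: write a partitioned Delta table for downstream consumers.
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("/mnt/curated/orders")  # hypothetical target path
)
```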
Posted 1 week ago
15.0 years
0 Lacs
Chennai
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture strategy. You will be involved in various stages of the data platform lifecycle, ensuring that all components work harmoniously to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Ability to troubleshoot and optimize data workflows for efficiency.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based in Chennai.
- 15 years of full-time education is required.
Posted 1 week ago
10.0 years
0 Lacs
India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
- Total experience: 10+ years.
- Strong experience in delivering data engineering projects with Python.
- Strong proficiency in Python for data analysis and scripting.
- Hands-on experience with Azure data technologies (Azure ADF, Synapse, etc.).
- Strong knowledge of ETL, data warehousing, and business intelligence.
- Proficient in designing and developing data integration workflows.
- Strong experience with Azure Synapse Analytics for data warehousing.
- Solid experience with Databricks for big data processing.
- Experience in managing complex and technical development projects in the areas of ETL, data warehousing, and BI.
- Excellent problem-solving skills, strong communication abilities, and a collaborative mindset.
- Relevant certifications in Azure or data engineering are a plus.

RESPONSIBILITIES:
- Understand the client's business use cases and technical requirements and convert them into a technical design that elegantly meets the requirements.
- Map decisions to requirements and translate them for developers.
- Identify different solutions and narrow down the best option that meets the client's requirements.
- Define guidelines and benchmarks for NFR considerations during project implementation.
- Write and review design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
- Review architecture and design for extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensure that all relevant best practices are followed.
- Develop and design the overall solution for defined functional and non-functional requirements, and define the technologies, patterns, and frameworks to materialize it.
- Understand and relate technology integration scenarios and apply these learnings in projects.
- Resolve issues raised during code review through exhaustive, systematic analysis of the root cause, and justify the decisions taken.
- Carry out POCs to make sure that the suggested design/technologies meet the requirements.

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
Posted 1 week ago
0 years
9 - 10 Lacs
Chennai
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

About Us: We are looking to hire a Java Full Stack Developer with cloud experience for our Hyderabad location.
Work Mode: Hybrid (10 days a month)
Shifts: General shifts

Primary Responsibilities:
- Develop business solutions by creating new and modifying existing software applications.
- Act as a primary contributor in designing, coding, testing, debugging, documenting and supporting all types of applications, consistent with established specifications and business requirements, to deliver business value. Software engineering is the application of engineering to the design, development, implementation, testing and maintenance of software in a systematic method.
- Cover all primary development activity across all technology functions to ensure we deliver high-quality code for our applications, products and services, understand customer needs, and develop product roadmaps.
- Analysis, design, coding, engineering, testing, debugging, standards, methods, tools analysis, documentation, research and development, maintenance, new development, operations and delivery.
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
- Build quality into every output. This includes evaluating new tools, techniques and strategies; automating common tasks; and building common utilities to drive organizational efficiency, with a passion for technology and solutions and thought leadership on future capabilities and opportunities to apply technology in new and innovative ways.

Required Qualifications:
- Graduate degree or equivalent experience: B.E./B.Tech, MCA, M.Sc or M.Tech
- Technical experience in Java, J2EE, and ReactJS/Angular
- Solid experience in Core Java, Spring and Hibernate/Spring Data JPA
- Working experience on AWS/GCP/Azure cloud
- Working experience in data analytics: Azure Databricks, Power BI, Synapse, etc.
- Hands-on experience with RDBMS like SQL Server, Oracle, MySQL, PostgreSQL
- Hands-on with Core Java/J2EE (Spring, Hibernate, MVC)
- Hands-on with SQL queries and MySQL experience
- Testing experience in JUnit/Spock/Groovy
- Experience in SOA-based architecture and web services (Apache CXF/JAX-WS/JAX-RS/SOAP/REST)
- Experience with multiple application and web servers (JBoss/Tomcat/WebSphere)
- Experience in continuous integration (Jenkins/Sonar/Nexus/PMD)
- Experience using profiler tools (JProfiler/JMeter)
- Good working knowledge of the Spring Framework
- Good understanding of UML and design patterns
- Good understanding of performance tuning
- Good development experience with Spring Core, Spring JDBC, REST web services and MySQL
- Thorough understanding of Object-Oriented Analysis and Design (OOAD) concepts

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 week ago
0 years
7 - 9 Lacs
Noida
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources (a hedged sketch of one such pipeline step follows this listing).
- Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements:
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory skill sets: Spark, PySpark, Azure
Preferred skill sets: Spark, PySpark, Azure
Years of experience required: 4-8
Education qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred: (not specified)
Certifications: (not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date: (not specified)
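One common optimization in pipelines like those described above is replacing full rewrites with incremental upserts. The following is a hedged sketch of a Delta Lake MERGE step on Azure Databricks, under stated assumptions: the ADLS container paths, storage account, and customer_id join key are hypothetical, and the delta package ships with the Databricks runtime.

```python
# Hedged sketch: incremental upsert of staged records into a curated Delta
# table via MERGE. Paths, storage account, and key column are hypothetical.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("upsert-sketch").getOrCreate()

# New and changed records staged by an upstream ingestion step.
updates = spark.read.format("delta").load(
    "abfss://staging@account.dfs.core.windows.net/customers"  # hypothetical
)

# The curated table that downstream reports query.
target = DeltaTable.forPath(
    spark,
    "abfss://curated@account.dfs.core.windows.net/customers"  # hypothetical
)

# Update rows whose key already exists, insert the rest; one atomic commit.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```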
Posted 1 week ago
0 years
0 Lacs
Noida
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources.
- Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics.
- Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements.
- Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness.
- Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements:
- Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
- Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database.
- Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices.
- Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL).
- Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory skill sets: Spark, PySpark, Azure
Preferred skill sets: Spark, PySpark, Azure
Years of experience required: 8-12
Education qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration
Degrees/Field of Study preferred: (not specified)
Certifications: (not specified)
Required Skills: Data Science
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Desired Languages: (not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date: (not specified)
Posted 1 week ago
15.0 years
0 Lacs
Indore
On-site
Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, encompassing the relevant data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, while also engaging in discussions to refine and enhance the overall data architecture. You will be involved in various stages of the data platform lifecycle, ensuring that all components work seamlessly together to support the organization's data needs and objectives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Be responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and evaluate team performance to ensure alignment with project goals.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and best practices.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance frameworks and compliance standards.
- Ability to troubleshoot and optimize data workflows for performance.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- 15 years of full-time education is required.
Posted 1 week ago
15.0 years
0 Lacs
Indore
On-site
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must Have Skills: Databricks Unified Data Analytics Platform
Good to Have Skills: NA
Minimum Experience Required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy, ensuring that the data architecture aligns with business objectives and supports analytical needs.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture to support data initiatives.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must have: proficiency in Databricks Unified Data Analytics Platform.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and processes.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Indore office.
- 15 years of full-time education is required.
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Andhra Pradesh
On-site
Business Analytics Associate Manager - HIH - Evernorth

ABOUT EVERNORTH:
Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don’t, won’t or can’t. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there.

Data & Analytics Associate Manager

Position Summary:
The Data & Analytics Associate Manager is responsible for helping support the Enterprise Data Strategy team in the identification of key data elements across the business, identification of sources, enabling the connection across sources, and ultimately supporting the consumption model of this data. This individual will work with users, technology, accounting and finance to develop requirements and support delivery of consumable data for business insights.

Job Description & Responsibilities:
The Data & Analytics Associate Manager works with the other team members to support the development and maintenance of the Enterprise Data foundational structure and end-use consumption model. Key stakeholder partners will be Finance, Accounting, Technical Teams, and Automation and AI teams.
- Support the requirements gathering of data elements used by business areas and the mapping of elements to sources.
- Support the identification of connection points across disparate data sets to support joins for an integrated data solution.
- Data collection and preparation, inclusive of scrubbing, development of quality rules, and processes for addressing gaps in data (a small sketch follows this listing).
- Analysis and interpretation of data to support identification of trends and quality issues.
- Reporting and visualization: development of dashboards, reports, and other mechanisms to support consumption.
- Leverage technologies, inclusive of a data virtualizer and other tools, to obtain, map, transform, store and package data to make it useful for end consumers.
- Support the development of prototype solutions to explore the application of technology leveraging data, e.g. application of AI, automation initiatives, etc.
- Continuous improvement: identify opportunities for process improvements or enhancements in day-to-day execution.

Competencies / Skills:
- Ability to review deliverables for completeness, quality, and compliance with established project standards.
- Ability to resolve conflict (striving for win-win outcomes) and to execute with limited information and ambiguity.
- Ability to deal with organizational politics, including navigating a highly matrixed organization effectively.
- Strong influencing skills (sound business and technical acumen, as well as skill at achieving buy-in for delivery strategies).
- Stakeholder management (setting and managing expectations).
- Strong business acumen, including the ability to effectively articulate business objectives.
- Analytical skills; highly focused; team player; versatile; resourceful.
- Ability to learn and apply quickly, including the ability to effectively impart knowledge to others.
- Effective under pressure.
- Precise communication skills, including an ability to project clarity and precision in verbal and written communication, and strong presentation skills.
- Strong problem-solving and critical thinking skills.
- Project management.
- Requirements gathering.
- User interaction / customer service.
- Reporting and dashboards.
- Ability to be flexible with job responsibilities and workflow changes.
- Ability to identify process improvements and implement changes; an outside-the-box thinker.
- Problem-solving, consulting skills, teamwork, leadership, and creativity skills a must.
- Analytical mind with outstanding ability to collect and analyze data.

Experience Required:
Qualified candidates will typically have 8-11 years of financial data and analytics work experience, inclusive of disciplined project delivery with a focus on quality output within project timelines. Successful candidates will be high-energy self-starters with a focus on quality output and project delivery success. Candidates must be excellent problem solvers and creative thinkers.

Experience Desired:
- Tool experience and project practices: Microsoft Excel, Agile, Jira, SharePoint, Confluence, Tableau, Alteryx, virtualizer tools.
- Experience with big data platforms (Databricks, Hadoop, AWS).
- Demonstrated experience establishing and delivering complex projects/initiatives within agreed-upon parameters while achieving the benefits and/or value-added results.
- Experience with Agile delivery methodology.

Location & Hours of Work: Hyderabad, Hybrid, 11.30 AM IST to 8.30 PM IST

Equal Opportunity Statement
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
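As a hedged sketch of the "quality rules" idea in the responsibilities above, the snippet below declares rules as named boolean expressions and counts violations of each in a single pass; the dataset path and column names are hypothetical, and rows where a rule evaluates to null count as passing unless a rule tests for nulls explicitly.

```python
# Hedged sketch: declarative data-quality rules evaluated in one aggregation.
# Dataset path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()

claims = spark.read.parquet("/mnt/finance/claims")  # hypothetical dataset

# Each rule maps a name to a boolean "row is valid" expression.
rules = {
    "amount_non_negative": F.col("amount") >= 0,
    "member_id_present": F.col("member_id").isNotNull(),
    "paid_after_service": F.col("paid_date") >= F.col("service_date"),
}

# Count violations per rule: 1 where the predicate is false, else 0.
report = claims.agg(*[
    F.sum(F.when(~expr, 1).otherwise(0)).alias(name)
    for name, expr in rules.items()
])
report.show()
```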
Posted 1 week ago
3.0 - 5.0 years
15 - 17 Lacs
Pune
Work from Office
Performance Testing Specialist - Databricks Pipelines

Key Responsibilities:
- Design and execute performance testing strategies specifically for Databricks-based data pipelines.
- Identify performance bottlenecks and provide optimization recommendations across Spark/Databricks workloads.
- Collaborate with development and DevOps teams to integrate performance testing into CI/CD pipelines.
- Analyze job execution metrics, cluster utilization, memory/storage usage, and latency across various stages of data pipeline processing.
- Create and maintain performance test scripts, frameworks, and dashboards using tools like JMeter, Locust, or custom Python utilities (a minimal example follows this listing).
- Generate detailed performance reports and suggest tuning at the code, configuration, and platform levels.
- Conduct root cause analysis for slow-running ETL/ELT jobs and recommend remediation steps.
- Participate in production issue resolution related to performance and contribute to RCA documentation.

Technical Skills:

Mandatory
- Strong understanding of Databricks, Apache Spark, and performance tuning techniques for distributed data processing systems.
- Hands-on experience in Spark (PySpark/Scala) performance profiling, partitioning strategies, and job parallelization.
- 2+ years of experience in performance testing and load simulation of data pipelines.
- Solid skills in SQL and Snowflake, and in analyzing performance via query plans and optimization hints.
- Familiarity with Azure Databricks, Azure Monitor, Log Analytics, or similar observability tools.
- Proficient in scripting (Python/Shell) for test automation and pipeline instrumentation.
- Experience with DevOps tools such as Azure DevOps, GitHub Actions, or Jenkins for automated testing.
- Comfortable working in Unix/Linux environments and writing shell scripts for monitoring and debugging.

Good to Have
- Experience with job schedulers like Control-M, Autosys, or Azure Data Factory trigger flows.
- Exposure to CI/CD integration for automated performance validation.
- Understanding of network/storage I/O tuning parameters in cloud-based environments.
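As a minimal example of the "custom Python utilities" the listing mentions, the sketch below times one shuffle-heavy stage under different spark.sql.shuffle.partitions settings to surface tuning candidates. The input path and grouping column are hypothetical, and on Databricks adaptive query execution may coalesce partitions and partly mask the fixed setting.

```python
# Hedged sketch: probe one pipeline stage under different shuffle widths.
# Input path and grouping column are hypothetical placeholders.
import time

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("perf-probe").getOrCreate()
events = spark.read.parquet("/mnt/raw/events")  # hypothetical dataset

def timed_run(partitions: int) -> float:
    """Run a representative group-by stage and return wall-clock seconds."""
    spark.conf.set("spark.sql.shuffle.partitions", str(partitions))
    start = time.perf_counter()
    # count() forces the aggregation to actually execute.
    events.groupBy("user_id").agg(F.count("*").alias("n")).count()
    return time.perf_counter() - start

for p in (50, 200, 800):
    print(f"shuffle.partitions={p}: {timed_run(p):.1f}s")
```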
Posted 1 week ago
16.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Principal Cloud Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail and Auto, Supply Chain, and Finance.

The opportunity
We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as of a growing Data and Analytics team.

Your Key Responsibilities
- Drive Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
- Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [16-20 years]
- Understand current and future state enterprise architecture.
- Contribute to various technical streams during project implementation.
- Provide product- and design-level technical best practices.
- Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions.
- Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
- Recommend design alternatives for the data ingestion, processing and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success
- Experience architecting highly scalable solutions on Azure, AWS, and GCP.
- Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components.
- Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
- Hands-on programming experience in Apache Spark using Python/Scala, and in Spark Streaming.
- Hands-on experience with major components like cloud ETL services, Spark, and Databricks.
- Experience working with NoSQL in at least one of these data stores: HBase, Cassandra, MongoDB.
- Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
- Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
- Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
- Good knowledge of Apache Kafka and Apache Flume.
- Experience in enterprise-grade solution implementations.
- Experience in performance benchmarking enterprise applications.
- Experience in data security (in motion and at rest).
- Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
- A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent communication skills, written and verbal, formal and informal.
- Ability to multi-task under pressure and work independently with minimal supervision.
- A team-player attitude; you enjoy working in a cooperative and collaborative team environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- Responsibility for evaluating technical risks and mapping out mitigation strategies.
- Working knowledge of at least one cloud platform: AWS, Azure, or GCP.
- Excellent business communication, consulting, and quality process skills.
- Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domain.
- Minimum 7 years of hands-on experience in one or more of the above areas.
- Minimum 10 years of industry experience.

Ideally, you’ll also have
- Strong project management skills.
- Client management skills.
- Solutioning skills.
- The innate quality to become the go-to person for any marketing, pre-sales, and solution accelerator within the practice.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
16.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Principal Cloud Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing, Healthcare, Retail and Auto, Supply Chain, and Finance.

The opportunity
We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as of a growing Data and Analytics team.

Your Key Responsibilities
- Drive Analytics GTM/pre-sales by collaborating with senior stakeholders in client and partner organizations in BCM, WAM, and Insurance. Activities include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data.
- Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [16-20 years]
- Understand current and future state enterprise architecture.
- Contribute to various technical streams during project implementation.
- Provide product- and design-level technical best practices.
- Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions.
- Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
- Recommend design alternatives for the data ingestion, processing and provisioning layers.
- Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark.
- Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success
- Experience architecting highly scalable solutions on Azure, AWS, and GCP.
- Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components.
- Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
- Hands-on programming experience in Apache Spark using Python/Scala, and in Spark Streaming.
- Hands-on experience with major components like cloud ETL services, Spark, and Databricks.
- Experience working with NoSQL in at least one of these data stores: HBase, Cassandra, MongoDB.
- Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
- Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
- Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
- Good knowledge of Apache Kafka and Apache Flume.
- Experience in enterprise-grade solution implementations.
- Experience in performance benchmarking enterprise applications.
- Experience in data security (in motion and at rest).
- Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
- A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent communication skills, written and verbal, formal and informal.
- Ability to multi-task under pressure and work independently with minimal supervision.
- A team-player attitude; you enjoy working in a cooperative and collaborative team environment.
- Adaptability to new technologies and standards.
- Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
- Responsibility for evaluating technical risks and mapping out mitigation strategies.
- Working knowledge of at least one cloud platform: AWS, Azure, or GCP.
- Excellent business communication, consulting, and quality process skills.
- Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domain.
- Minimum 7 years of hands-on experience in one or more of the above areas.
- Minimum 10 years of industry experience.

Ideally, you’ll also have
- Strong project management skills.
- Client management skills.
- Solutioning skills.
- The innate quality to become the go-to person for any marketing, pre-sales, and solution accelerator within the practice.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Overview 170+ Years Strong. Industry Leader. Global Impact. At Pinkerton, the mission is to protect our clients. To do this, we provide enterprise risk management services and programs specifically designed for each client. Pinkerton employees are among our most important assets and critical to the delivery of world-class solutions. Bonded together, we share a commitment to integrity, vigilance, and excellence. Pinkerton is an inclusive employer that seeks candidates with diverse backgrounds, experiences, and perspectives to join our family of industry subject matter experts. The Data Engineer will be part of a high-performing and international team whose goal is to expand Data & Analytics solutions for our CRM application, which is live in all Securitas countries. Together with the dedicated Frontend & BI Developer, you will be responsible for managing and maintaining the Databricks-based BI Platform; data model changes and the implementation and development of pipelines are part of the daily focus, but ETL will get most of your attention. Continuous improvement will require the ability to think bigger and to work closely with the whole team. The Data Engineer (ETL Specialist) will collaborate with the Frontend & BI Developer on ways to improve the BI Platform deliverables, specifically for the CEP organization. Cooperation with other departments, such as integrations or specific IT/IS projects and business specialists, is part of the job. Data privacy must always be taken into consideration when moving or sharing data; for that purpose, a security layer needs to be developed as agreed with the legal department (a sketch follows below). Responsibilities Represent Pinkerton’s core values of integrity, vigilance, and excellence. Maintain and develop the Databricks workspace used to host the BI CEP solution. Advise on changes needed in the data model to accommodate new BI requirements. Develop and implement new ETL scripts and improve the current ones. Take ownership of resolving incoming tickets for both incidents and requests. Plan activities to stay close to the Frontend & BI Developer and foresee upcoming changes to the backend. Improve teamwork by using the DevOps tool to track the status of each deliverable from start to end. Ensure understanding and visible implementation of the company’s core values of integrity, vigilance, and helpfulness. Maintain awareness of the skills and experience available and required in your area, today and tomorrow, and liaise with other departments as needed. All other duties, as assigned.
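Since the posting calls for a security layer on the Databricks BI platform, here is a minimal, hedged sketch of one common approach: row-level security via Unity Catalog row filters. The catalog, schema, table, group, and column names are all hypothetical, and whether row filters satisfy the actual requirements would need to be agreed with the legal department, as the posting notes.

```python
# As it might run in a Databricks notebook, where `spark` is predefined.
# 1. A SQL UDF that decides per row whether the caller may see it:
#    admins see everything; others see only rows for regions whose
#    account group (e.g. 'cep_nordics') they belong to.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.cep.row_guard(region STRING)
    RETURN is_account_group_member('cep_admins')
           OR is_account_group_member(concat('cep_', region))
""")

# 2. Attach the filter to a (hypothetical) CRM table; from now on every
#    query against it is silently restricted per caller.
spark.sql("""
    ALTER TABLE main.cep.crm_orders
    SET ROW FILTER main.cep.row_guard ON (region)
""")
```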
Qualifications At least 3 years of experience in data engineering. Understanding of designing and implementing data processing architectures in Azure environments. Experience with different SSAS modelling techniques (preferably Microsoft-related: Azure, Databricks). Understanding of data management and treatment to secure data governance and security (platform management and administration). An analytical mindset with clear communication and problem-solving skills. Experience working in a SCRUM setup. Fluent in English, both spoken and written. Bonus: knowledge of additional language(s). Ability to communicate, present and influence credibly at all levels, both internally and externally. Business acumen and commercial awareness. Working Conditions With or without reasonable accommodation, requires the physical and mental capacity to effectively perform all essential functions; regular computer usage. Occasional reaching and lifting of small objects and operating office equipment. Frequent sitting, standing, and/or walking. Travel, as required. Pinkerton is an equal opportunity employer to all applicants and positions without regard to race/ethnicity, color, national origin, ancestry, sex/gender, gender identity/expression, sexual orientation, marital/prenatal status, pregnancy/childbirth or related conditions, religion, creed, age, disability, genetic information, veteran status, or any protected status by local, state, federal or country-specific law.
Posted 1 week ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company Overview Viraaj HR Solutions is a dynamic HR consultancy dedicated to connecting talent with opportunity. Our mission is to enhance workforce efficiency and support organizations in achieving their goals through strategic recruitment and talent management. Our team values integrity, collaboration, and innovation, and we work diligently to align the right talent with the right role, ensuring a great fit both for clients and candidates. Role Responsibilities Develop and manage data pipelines on the Databricks platform. Optimize and maintain data processing workflows using Apache Spark. Implement ETL processes to integrate data from various sources. Collaborate with data engineers and analysts to design data models. Write optimized SQL queries for data retrieval and analysis. Utilize Python for scripting and automation tasks. Monitor and troubleshoot data processing jobs for performance issues. Work with cloud technologies (Azure/AWS) to enhance data solutions. Conduct data analytics to derive actionable insights. Implement version control mechanisms for code management. Participate in code reviews and contribute to documentation. Stay updated with the latest features and best practices of Databricks. Provide technical support and guidance to team members. Engage in collaborative projects to enhance data quality. Participate in strategy meetings to align data projects with business goals. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of experience in data engineering or development roles. Strong proficiency in the Databricks platform. Experience with Apache Spark and its components. Solid understanding of database management systems and SQL. Knowledge of Python for data manipulation and analytics. Familiarity with ETL tools and data integration techniques. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills and attention to detail. Strong communication skills, both verbal and written. Prior experience in data analysis and visualization is a plus. Understanding of data governance and security best practices. A proactive approach to learning new technologies. Experience in using version control software like Git. Skills: Python scripting, Databricks, version control (Git), SQL, cloud technologies (Azure/AWS), data governance and security, ETL, Apache Spark, data analytics, data integration
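As a hedged illustration of the first few responsibilities (building a pipeline on Databricks with Apache Spark and persisting it for analysis), here is a minimal PySpark ETL sketch. The paths, table, and column names are invented, and Delta is assumed as the storage format because it is the Databricks default, not because the posting specifies it.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession already exists as `spark`;
# getOrCreate() makes the sketch runnable elsewhere too.
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: hypothetical raw CSV landing zone.
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/orders/"))

# Transform: basic cleansing and a derived column.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
         .withColumn("order_date", F.to_date("order_ts")))

# Load: write as a Delta table for downstream SQL and BI use.
(clean.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("analytics.orders_clean"))
```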
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About the Role: ASEC Engineers – A Verdantas Company is seeking an experienced Deputy Division Manager, Platform Engineering to join our dynamic Pune office. In this pivotal managerial role, you will be responsible for guiding the development, implementation, and continuous enhancement of Verdantas’ platform engineering initiatives. You’ll oversee the delivery of secure, scalable, and high-performing platform infrastructure, ensuring alignment with business strategy while providing technical direction that enables innovation, operational efficiency, and sustainable growth.
Key Responsibilities:
• Support the Division Manager in overseeing and guiding the platform engineering team.
• Contribute to the planning and execution of strategic initiatives in platform engineering.
• Manage the end-to-end design, development, and deployment of platform solutions.
• Ensure platforms meet security, performance, and scalability requirements.
• Work collaboratively with other departments to identify platform needs and deliver tailored solutions.
• Maintain compliance with applicable standards and regulatory requirements.
• Provide expert technical support and direction to engineering team members.
• Track platform performance and recommend enhancements to drive continuous improvement.
Core Competencies:
• Technical Expertise: Knowledge of platform architecture (e.g., microservices, event-driven architecture, container orchestration), software development (e.g., Java, Python, CI/CD pipelines, REST APIs), and cloud computing (e.g., AWS, Azure, Kubernetes, serverless infrastructure).
• Modern Data Platforms: Experience with modern data platforms such as Microsoft Fabric, Databricks, Snowflake, and data lake/data warehouse technologies.
• Leadership: Ability to manage engineering teams and projects and provide guidance.
• Strategic Thinking: Developing strategies, solving complex problems, and staying updated with industry trends.
• Project Management: Managing budgets, resources, and overseeing platform development.
• Security and Compliance: Ensuring platform security and regulatory compliance.
• Collaboration: Working with other departments to meet their platform needs.
Required Qualifications:
• Bachelor’s or Master’s degree in computer science or equivalent
• 8+ years of relevant experience
• Strong verbal and written communication skills
Location and Work Set-up
• Pune, Maharashtra, India
• Work Mode: In Office
Why Join ASEC Engineers – A Verdantas Company? At our Pune office, you’ll be part of a vibrant, innovative environment that fuses local excellence with global impact. We foster a people-first culture and empower our employees with tools, support, and opportunities to thrive.
What We Offer:
• Be part of a global vision with the agility of a local team.
• Work on high-impact projects that shape industries and communities.
• Thrive in a collaborative and dynamic office culture.
• Access continuous learning and professional development programs.
• Grow with clear paths for career progression and recognition.
• An employee-centric approach that values your well-being and ideas.
Ready to Build the Future with Us? “Join ASEC Engineers – A Verdantas Company in Pune, where your technical expertise, leadership, and ideas will shape innovation and drive progress. Let’s engineer a better tomorrow—together.”
Posted 1 week ago
7.0 - 10.0 years
0 Lacs
Vadodara, Gujarat, India
On-site
We’re reinventing the market research industry. Let’s reinvent it together. At Numerator, we believe tomorrow’s success starts with today’s market intelligence. We empower the world’s leading brands and retailers with unmatched insights into consumer behavior and the influencers that drive it. We are seeking a highly skilled Technical Delivery Lead - Data Engineer with extensive experience in analysing existing data/databases and in designing, building, and optimizing high-volume data pipelines. The ideal candidate will have strong expertise in Python, databases, Databricks on Azure Cloud services, DevOps, and CI/CD tools, along with a solid understanding of AI/ML techniques and big data processing frameworks like Apache Spark and PySpark. Responsibilities Adhere to coding and Numerator technology standards. Build suitable automation test suites within Azure DevOps (see the sketch below). Maintain and update automation test suites as required. Carry out manual testing, load testing, and exploratory testing as required. Perform technical analysis and work closely with Business Analysts and Senior Data Developers to consistently deliver sprint goals. Assist in the estimation of sprint-by-sprint stories and tasks. Proactively take a responsible approach to product delivery. What You'll Bring to Numerator 7-10 years of experience in data engineering roles, handling large databases. Good C# and Python skills. Experience working with Microsoft Azure Cloud. Experience in Agile methodologies (Scrum/Kanban). Experience with Apache Spark, PySpark, and Databricks. Experience working with DevOps pipelines, preferably Azure DevOps. Preferred Qualifications Bachelor's or master's degree in computer science, Information Technology, Data Science, or a related field. Experience working in a technical development/support-focused role. Knowledge/experience in AI/ML techniques. Knowledge/experience in Visual Basic 6. Certification in a relevant Data Engineering discipline or related fields.
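Because the role combines data engineering with building automation test suites, a hedged sketch of a pipeline unit test may be useful. It assumes pytest with a local SparkSession; dedupe_orders is a hypothetical stand-in for a real pipeline transformation, not Numerator code.

```python
import pytest
from pyspark.sql import SparkSession


def dedupe_orders(df):
    """Hypothetical pipeline step: keep one row per order_id."""
    return df.dropDuplicates(["order_id"])


@pytest.fixture(scope="session")
def spark():
    # A small local session is enough to unit-test transformations.
    return (SparkSession.builder
            .master("local[2]")
            .appName("pipeline-tests")
            .getOrCreate())


def test_dedupe_orders_removes_duplicates(spark):
    df = spark.createDataFrame(
        [(1, "widget"), (1, "widget"), (2, "gadget")],
        ["order_id", "item"],
    )
    result = dedupe_orders(df)
    assert result.count() == 2
    assert {r.order_id for r in result.collect()} == {1, 2}
```

A suite like this slots naturally into an Azure DevOps pipeline as a plain pytest step, which is one common way such test automation is wired up.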
Posted 1 week ago
2.0 years
0 Lacs
Greater Kolkata Area
On-site
Company Overview Viraaj HR Solutions is a dynamic HR consultancy dedicated to connecting talent with opportunity. Our mission is to enhance workforce efficiency and support organizations in achieving their goals through strategic recruitment and talent management. Our team values integrity, collaboration, and innovation, and we work diligently to align the right talent with the right role, ensuring a great fit both for clients and candidates. Role Responsibilities Develop and manage data pipelines on the Databricks platform. Optimize and maintain data processing workflows using Apache Spark. Implement ETL processes to integrate data from various sources. Collaborate with data engineers and analysts to design data models. Write optimized SQL queries for data retrieval and analysis. Utilize Python for scripting and automation tasks. Monitor and troubleshoot data processing jobs for performance issues. Work with cloud technologies (Azure/AWS) to enhance data solutions. Conduct data analytics to derive actionable insights. Implement version control mechanisms for code management. Participate in code reviews and contribute to documentation. Stay updated with the latest features and best practices of Databricks. Provide technical support and guidance to team members. Engage in collaborative projects to enhance data quality. Participate in strategy meetings to align data projects with business goals. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of experience in data engineering or development roles. Strong proficiency in the Databricks platform. Experience with Apache Spark and its components. Solid understanding of database management systems and SQL. Knowledge of Python for data manipulation and analytics. Familiarity with ETL tools and data integration techniques. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills and attention to detail. Strong communication skills, both verbal and written. Prior experience in data analysis and visualization is a plus. Understanding of data governance and security best practices. A proactive approach to learning new technologies. Experience in using version control software like Git. Skills: Python scripting, Databricks, version control (Git), SQL, cloud technologies (Azure/AWS), data governance and security, ETL, Apache Spark, data analytics, data integration
Posted 1 week ago
5.0 - 10.0 years
10 - 16 Lacs
Chennai
Work from Office
What You Will Do: Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes. Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality. Build and optimize data architectures for operational and analytical purposes. Collaborate with cross-functional teams to gather and define data requirements. Implement data quality, data governance, and data security practices. Manage and optimize cloud-based data platforms (Azure/AWS). Develop and maintain Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources. Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks). Work with stakeholders, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs. Develop frameworks for data ingestion, transformation, and validation (see the sketch below). Mentor junior data engineers and guide best practices in data engineering. Evaluate and integrate new technologies and tools to improve data infrastructure. Ensure compliance with data privacy regulations (HIPAA, etc.). Monitor performance and troubleshoot issues across the data ecosystem. Automate deployment of data pipelines using GitHub Actions / Azure DevOps. What You Will Need: Bachelor's or master's degree in computer science, Information Systems, Statistics, Math, Engineering, or a related discipline. Minimum 5+ years of solid hands-on experience in data engineering and cloud services. Extensive working experience with advanced SQL and a deep understanding of SQL. Good experience in Azure Data Factory (ADF), Databricks, Python, and PySpark. Good experience with modern data storage concepts: data lake, lakehouse. Experience in other cloud services (AWS) and data processing technologies will be an added advantage. Ability to enhance and develop ETL processes and resolve defects in them using cloud services. Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files, and Access databases. Experience with software development methodologies (Agile, Waterfall) and version control tools. Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills. Good communication skills. What Would Be Nice To Have: AWS ETL platform: Glue, S3. One or more programming languages such as Java, .NET. Experience in the US healthcare domain and insurance claims processing.
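Given the emphasis on robust ingestion of multi-terabyte CSV feeds from clients, here is a minimal, hedged sketch of one defensive pattern in PySpark: an explicit schema plus PERMISSIVE mode with a corrupt-record column, so malformed rows are quarantined instead of failing the load. All paths, tables, and columns are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("claims-ingest").getOrCreate()

# An explicit schema avoids costly inference on huge feeds and makes
# type drift in client files visible immediately.
schema = StructType([
    StructField("claim_id", StringType()),
    StructField("member_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("_corrupt_record", StringType()),  # catch-all for bad rows
])

raw = (spark.read
       .option("header", True)
       .option("mode", "PERMISSIVE")
       .option("columnNameOfCorruptRecord", "_corrupt_record")
       .schema(schema)
       .csv("/mnt/landing/claims/*.csv"))

# Cache before filtering on the corrupt-record column: a documented
# caveat of the CSV/JSON readers when that column is referenced alone.
raw = raw.cache()

# Quarantine malformed rows for inspection; load the clean ones.
bad = raw.filter(F.col("_corrupt_record").isNotNull())
good = raw.filter(F.col("_corrupt_record").isNull()).drop("_corrupt_record")

bad.write.mode("append").json("/mnt/quarantine/claims/")
good.write.mode("append").format("delta").saveAsTable("staging.claims")
```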
Posted 1 week ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Overview Viraaj HR Solutions is a dynamic HR consultancy dedicated to connecting talent with opportunity. Our mission is to enhance workforce efficiency and support organizations in achieving their goals through strategic recruitment and talent management. Our team values integrity, collaboration, and innovation, and we work diligently to align the right talent with the right role, ensuring a great fit both for clients and candidates. Role Responsibilities Develop and manage data pipelines on the Databricks platform. Optimize and maintain data processing workflows using Apache Spark. Implement ETL processes to integrate data from various sources. Collaborate with data engineers and analysts to design data models. Write optimized SQL queries for data retrieval and analysis. Utilize Python for scripting and automation tasks. Monitor and troubleshoot data processing jobs for performance issues. Work with cloud technologies (Azure/AWS) to enhance data solutions. Conduct data analytics to derive actionable insights. Implement version control mechanisms for code management. Participate in code reviews and contribute to documentation. Stay updated with the latest features and best practices of Databricks. Provide technical support and guidance to team members. Engage in collaborative projects to enhance data quality. Participate in strategy meetings to align data projects with business goals. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of experience in data engineering or development roles. Strong proficiency in the Databricks platform. Experience with Apache Spark and its components. Solid understanding of database management systems and SQL. Knowledge of Python for data manipulation and analytics. Familiarity with ETL tools and data integration techniques. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills and attention to detail. Strong communication skills, both verbal and written. Prior experience in data analysis and visualization is a plus. Understanding of data governance and security best practices. A proactive approach to learning new technologies. Experience in using version control software like Git. Skills: Python scripting, Databricks, version control (Git), SQL, cloud technologies (Azure/AWS), data governance and security, ETL, Apache Spark, data analytics, data integration
Posted 1 week ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Company Overview Viraaj HR Solutions is a dynamic HR consultancy dedicated to connecting talent with opportunity. Our mission is to enhance workforce efficiency and support organizations in achieving their goals through strategic recruitment and talent management. Our team values integrity, collaboration, and innovation, and we work diligently to align the right talent with the right role, ensuring a great fit both for clients and candidates. Role Responsibilities Develop and manage data pipelines on the Databricks platform. Optimize and maintain data processing workflows using Apache Spark. Implement ETL processes to integrate data from various sources. Collaborate with data engineers and analysts to design data models. Write optimized SQL queries for data retrieval and analysis. Utilize Python for scripting and automation tasks. Monitor and troubleshoot data processing jobs for performance issues. Work with cloud technologies (Azure/AWS) to enhance data solutions. Conduct data analytics to derive actionable insights. Implement version control mechanisms for code management. Participate in code reviews and contribute to documentation. Stay updated with the latest features and best practices of Databricks. Provide technical support and guidance to team members. Engage in collaborative projects to enhance data quality. Participate in strategy meetings to align data projects with business goals. Qualifications Bachelor's degree in Computer Science, Information Technology, or a related field. 2+ years of experience in data engineering or development roles. Strong proficiency in the Databricks platform. Experience with Apache Spark and its components. Solid understanding of database management systems and SQL. Knowledge of Python for data manipulation and analytics. Familiarity with ETL tools and data integration techniques. Experience with cloud platforms such as AWS or Azure. Ability to work collaboratively in cross-functional teams. Excellent problem-solving skills and attention to detail. Strong communication skills, both verbal and written. Prior experience in data analysis and visualization is a plus. Understanding of data governance and security best practices. A proactive approach to learning new technologies. Experience in using version control software like Git. Skills: Python scripting, Databricks, version control (Git), SQL, cloud technologies (Azure/AWS), data governance and security, ETL, Apache Spark, data analytics, data integration
Posted 1 week ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities: Design and manage distributed system architectures using Azure services such as Event Hub, Data Factory, ADLS Gen2, Cosmos DB, Synapse, Databricks, APIM, Function App, Logic App, and App Services. Implement infrastructure as code (IaC) using ARM templates and Terraform for consistent, automated environment provisioning. Deploy and manage containerized applications using Docker and orchestrate them with Azure Kubernetes Service (AKS). Monitor and troubleshoot infrastructure and applications using Azure Monitor, Log Analytics, and Application Insights. Design and implement disaster recovery strategies, backups, and failover mechanisms to ensure business continuity. Automate provisioning, scaling, and infrastructure management to maintain system reliability and performance. Manage Azure environments across development, test, pre-production, and production stages. Monitor and define job flows, set up proactive alerts, and ensure smooth ETL operations in Azure Data Factory and Databricks (a sketch of one such automation follows below). Conduct root cause analysis and implement fixes for job failures. Work with Jenkins and Azure DevOps for automating CI/CD pipelines and deployment workflows. Write automation scripts using Python and shell scripting for various operational tasks. Monitor VM performance metrics (CPU, memory, OS, network, storage) and recommend optimizations. Collaborate with development teams to improve application reliability and performance. Work in Agile environments with a proactive and results-driven mindset. Required skills: Expertise in Azure services for data engineering and application deployment. Strong knowledge of Terraform, ARM templates, and CI/CD tools. Hands-on experience with Databricks, Data Factory, and Event Hub. Familiarity with Python, shell scripting, Jenkins, and Azure DevOps. Deep understanding of container orchestration using AKS. Experience in monitoring, alerting, and log analysis for cloud-native applications.
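As a sketch of the kind of Python automation the posting mentions for ETL job monitoring, the snippet below triggers an Azure Data Factory pipeline and polls it to a terminal state. It assumes the azure-identity and azure-mgmt-datafactory packages and their current client surface; every identifier is a placeholder, and a real script would hook the failure branch into proper alerting.

```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# All identifiers below are placeholders for a real environment.
SUBSCRIPTION = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-data-platform"
FACTORY = "adf-etl-prod"
PIPELINE = "pl_daily_load"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION)

# Trigger the pipeline, then poll until it reaches a terminal state.
run_id = client.pipelines.create_run(RESOURCE_GROUP, FACTORY, PIPELINE).run_id
while True:
    run = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY, run_id)
    if run.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Run {run_id} finished with status: {run.status}")
if run.status != "Succeeded":
    # Hook for proactive alerting (e.g., a webhook or ticket creation).
    raise RuntimeError(f"ADF pipeline failed: {run.message}")
```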
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About the Role We are looking for a skilled and motivated Data Analyst with 2–5 years of experience to join our team. In this role, you will work closely with the product team to support strategic decision-making by delivering data-driven insights, dashboards, and performance reports. Your ability to transform raw data into actionable insights will directly impact how we build and improve our products. Key Responsibilities Collaborate with the product team to understand data needs and define key performance indicators (KPIs). Develop and maintain insightful reports and dashboards using Power BI. Write efficient and optimized SQL queries to extract and manipulate data from multiple sources. Perform data analysis using Python and pandas for deeper trend analysis and data modeling (see the sketch below). Present findings clearly through visualizations and written summaries to stakeholders. Ensure data quality and integrity across reporting pipelines. Contribute to ongoing improvements in data processes and tooling. Required Skills & Experience 2–5 years of hands-on experience as a Data Analyst or in a similar role. Strong proficiency in SQL for querying and data manipulation. Experience in building interactive dashboards with Power BI. Good command of Python, especially pandas for data wrangling and analysis. Experience with Databricks or other big data tools. Understanding of Medallion Architecture and its application in analytics pipelines. Strong communication and collaboration skills, especially in cross-functional team settings. Good to Have Familiarity with data engineering practices, including data transformation using Databricks notebooks, Apache Spark SQL for distributed data processing, Azure Data Factory (ADF) for orchestration, and version control using Git. Exposure to product analytics, cohort analysis, or A/B testing methodologies. Interested candidates, please share your resume with balaji.kumar@flyerssoft.com
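To make the pandas side of the role concrete, here is a minimal, hedged sketch: load raw product events, compute a weekly-active-users KPI, and derive a week-over-week trend. The file and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical export of product events: one row per user action.
events = pd.read_csv("events.csv", parse_dates=["event_ts"])

# KPI: weekly active users (distinct users per ISO week).
weekly = (events
          .assign(week=events["event_ts"].dt.to_period("W"))
          .groupby("week")["user_id"]
          .nunique()
          .rename("weekly_active_users"))

# Simple trend signal: week-over-week percentage change,
# the sort of figure that feeds a Power BI performance report.
summary = weekly.to_frame()
summary["wow_change_pct"] = summary["weekly_active_users"].pct_change() * 100

print(summary.tail(8).round(1))
```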
Posted 1 week ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, other skills that are often expected or helpful alongside Databricks include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!