
464 NiFi Jobs - Page 18

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Persistent
We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor’s mindset, commitment to client success, and agility to thrive in a dynamic environment have enabled us to sustain our growth momentum, reporting $1,409.1M revenue in FY25 and delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we’ve maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details, please visit www.persistent.com.

About The Position
We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling, and storing data. In this role you will guide the company into the future, using the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management, and data warehouse domains. You will work with experts in a variety of fields, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do
- Define data retention policies
- Monitor performance and advise on any necessary infrastructure changes
- Mentor junior engineers and work with other architects to deliver best-in-class solutions
- Implement ETL/ELT processes and orchestration of data flows
- Recommend and drive adoption of newer tools and techniques from the big data ecosystem

Expertise You'll Bring
- 10+ years in industry, building and managing big data systems
- Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS (a must)
- Building stream-processing systems using solutions such as Storm or Spark Streaming (an illustrative sketch follows this listing)
- Integrating with data storage systems such as SQL and NoSQL databases, file systems, and object storage like S3
- Reporting solutions like Pentaho, Power BI, and Looker, including customizations
- Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients
- Working with SaaS-based data management products (an added advantage)
- Proficiency in Cloudera/Hortonworks, Spark, HDF, and NiFi
- RDBMS and NoSQL stores like Vertica and Redshift; data modelling with physical design and SQL performance optimization
- Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka
- Big data technologies like Hadoop, Spark, and NoSQL-based data-warehousing solutions
- Data warehousing and reporting, including customization; Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns
- Big data querying tools such as Pig, Hive, and Impala
- Open-source technologies and databases (SQL & NoSQL)
- Proficient understanding of distributed computing principles
- Ability to solve any ongoing issues with operating the cluster
- Scaling data pipelines using open-source components and AWS services
- Cloud (AWS): provisioning, capacity planning, and performance analysis at various levels
- Web-based SOA architecture implementation with design pattern experience (an added advantage)

Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.

Inclusive Environment
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with a disability and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry’s best

Let’s unleash your full potential at Persistent - persistent.com/careers
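For context on the stream-processing requirement in the listing above, here is a minimal, illustrative PySpark Structured Streaming sketch: one common way to land Kafka events as Parquet, not Persistent's actual stack. The broker address, topic, schema, and S3 paths are hypothetical placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Illustrative sketch only: read JSON events from Kafka and write Parquet.
# Broker, topic, schema, and paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StringType, LongType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

schema = StructType().add("event_id", StringType()).add("ts", LongType())

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                     # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://demo-bucket/events/")            # hypothetical path
    .option("checkpointLocation", "s3a://demo-bucket/chk/")  # required for recovery
    .start()
)
query.awaitTermination()
```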

Posted 2 months ago

Apply

3.0 years

0 Lacs

Delhi, Delhi

On-site

Job Description: Hadoop & ETL Developer

Job Summary
We are looking for a Hadoop & ETL Developer with strong expertise in big data processing, ETL pipelines, and workflow automation. The ideal candidate will have hands-on experience in the Hadoop ecosystem, including HDFS, MapReduce, Hive, Spark, HBase, and PySpark, as well as expertise in real-time data streaming and workflow orchestration. This role requires proficiency in designing and optimizing large-scale data pipelines to support enterprise data processing needs.

Key Responsibilities
- Design, develop, and optimize ETL pipelines leveraging Hadoop ecosystem technologies.
- Work extensively with HDFS, MapReduce, Hive, Sqoop, Spark, HBase, and PySpark for data processing and transformation.
- Implement real-time and batch data ingestion using Apache NiFi, Kafka, and Airbyte.
- Develop and manage workflow orchestration using Apache Airflow (an illustrative DAG sketch follows this listing).
- Perform data integration across structured and unstructured data sources, including MongoDB and Hadoop-based storage.
- Optimize MapReduce and Spark jobs for performance, scalability, and efficiency.
- Ensure data quality, governance, and consistency across the pipeline.
- Collaborate with data engineering teams to build scalable and high-performance data solutions.
- Monitor, debug, and enhance big data workflows to improve reliability and efficiency.

Required Skills & Experience
- 3+ years of experience in the Hadoop ecosystem (HDFS, MapReduce, Hive, Sqoop, Spark, HBase, PySpark).
- Strong expertise in ETL processes, data transformation, and data warehousing.
- Hands-on experience with Apache NiFi, Kafka, Airflow, and Airbyte.
- Proficiency in SQL and handling structured and unstructured data.
- Experience with NoSQL databases like MongoDB.
- Strong programming skills in Python or Scala for scripting and automation.
- Experience in optimizing Spark and MapReduce jobs for high-performance computing.
- Good understanding of data lake architectures and big data best practices.

Preferred Qualifications
- Experience in real-time data streaming and processing.
- Familiarity with Docker/Kubernetes for deployment and orchestration.
- Strong analytical and problem-solving skills with the ability to debug and optimize data workflows.

If you have a passion for big data, ETL, and large-scale data processing, we’d love to hear from you!

Job Types: Full-time, Contractual/Temporary
Pay: ₹400,000.00 - ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday; morning shift

Application Question(s):
- How many years of experience do you have in Big Data ETL?
- How many years of experience do you have in Hadoop?
- Are you willing to work on a contractual basis?
- Are you comfortable being on a third-party payroll?
- Are you from Delhi?
- What is your notice period in your current company?

Work Location: In person
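Since the role centers on Airflow-based orchestration, here is a minimal, illustrative Airflow DAG of the kind referenced above: a daily ingest step followed by a transform step. The DAG id and task callables are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4+ (older versions use `schedule_interval`).

```python
# Illustrative sketch only: a two-step daily ETL DAG. All names hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # e.g. trigger a NiFi flow or pull a Kafka batch here
    print("ingesting...")

def transform():
    # e.g. submit a Spark job that cleans and loads the data
    print("transforming...")

with DAG(
    dag_id="daily_etl_demo",           # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older: schedule_interval
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task      # transform runs only after ingest succeeds
```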

Posted 2 months ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Persistent
We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what’s next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor’s mindset, commitment to client success, and agility to thrive in a dynamic environment have enabled us to sustain our growth momentum, reporting $1,409.1M revenue in FY25 and delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we’ve maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details, please visit www.persistent.com.

About The Position
We are looking for a Data Architect with creativity and results-oriented critical thinking to meet complex challenges and develop new strategies for acquiring, analyzing, modeling, and storing data. In this role you will guide the company into the future, using the latest technology and information management methodologies to meet our requirements for effective logical data modeling, metadata management, and data warehouse domains. You will work with experts in a variety of fields, including computer science and software development, as well as department heads and senior executives, to integrate new technologies and refine system performance. We reward dedicated performance with exceptional pay and benefits, as well as tuition reimbursement and career growth opportunities.
What You'll Do
- Define data retention policies (an illustrative sketch follows this listing)
- Monitor performance and advise on any necessary infrastructure changes
- Mentor junior engineers and work with other architects to deliver best-in-class solutions
- Implement ETL/ELT processes and orchestration of data flows
- Recommend and drive adoption of newer tools and techniques from the big data ecosystem

Expertise You'll Bring
- 10+ years in industry, building and managing big data systems
- Building, monitoring, and optimizing reliable and cost-efficient pipelines for SaaS (a must)
- Building stream-processing systems using solutions such as Storm or Spark Streaming
- Integrating with data storage systems such as SQL and NoSQL databases, file systems, and object storage like S3
- Reporting solutions like Pentaho, Power BI, and Looker, including customizations
- Developing high-concurrency, high-performance applications that are database-intensive and have interactive, browser-based clients
- Working with SaaS-based data management products (an added advantage)
- Proficiency in Cloudera/Hortonworks, Spark, HDF, and NiFi
- RDBMS and NoSQL stores like Vertica and Redshift; data modelling with physical design and SQL performance optimization
- Messaging systems: JMS, ActiveMQ, RabbitMQ, Kafka
- Big data technologies like Hadoop, Spark, and NoSQL-based data-warehousing solutions
- Data warehousing and reporting, including customization; Hadoop, Spark, Kafka, Core Java, Spring/IoC, design patterns
- Big data querying tools such as Pig, Hive, and Impala
- Open-source technologies and databases (SQL & NoSQL)
- Proficient understanding of distributed computing principles
- Ability to solve any ongoing issues with operating the cluster
- Scaling data pipelines using open-source components and AWS services
- Cloud (AWS): provisioning, capacity planning, and performance analysis at various levels
- Web-based SOA architecture implementation with design pattern experience (an added advantage)

Benefits
- Competitive salary and benefits package
- Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.

Inclusive Environment
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with a disability and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry’s best

Let’s unleash your full potential at Persistent - persistent.com/careers
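This Data Architect role asks the hire to define data retention policies and to work with object storage like S3. As a purely illustrative way of expressing such a policy as code, here is a boto3 sketch that sets an S3 lifecycle rule; the bucket name, prefix, and retention windows are hypothetical placeholders.

```python
# Illustrative sketch only: a retention policy as an S3 lifecycle rule.
# Bucket, prefix, and day counts are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="demo-data-lake",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-raw-events-after-365-days",
                "Filter": {"Prefix": "raw/events/"},
                "Status": "Enabled",
                "Expiration": {"Days": 365},  # retain raw data for one year
                "Transitions": [
                    # move colder data to infrequent-access storage after 30 days
                    {"Days": 30, "StorageClass": "STANDARD_IA"}
                ],
            }
        ]
    },
)
```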

Posted 2 months ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Description

How you will contribute
- Integration Architecture Ownership: Take ownership of the end-to-end integration architecture across all planning tracks (Demand, Supply, etc.). Design and maintain the overall integration strategy, ensuring scalability, reliability, and security. Oversee inbound and outbound data transformations and orchestration processes.
- Decision Support & Guidance: Support decision-making related to integration disposition, data transformations, and performance assessments. Provide guidance and recommendations on integration approaches, technologies, and best practices. Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
- Inter-Tenant Data Transfer Design: Design and implement secure and efficient inter-tenant data transfer mechanisms. Ensure data integrity and consistency across different o9 environments.
- Team Guidance & Mentoring: Provide technical guidance and mentorship to the junior members of the team on building and maintaining interfaces. Share best practices for integration development, testing, and deployment. Conduct code reviews and ensure adherence to coding standards.
- CI/CD Implementation: Design and implement a robust CI/CD pipeline for integration deployments. Automate integration testing and deployment processes to ensure rapid and reliable releases.
- Batch Orchestration Design: Design and implement batch orchestration processes for all planning tracks. Optimize batch processing schedules to minimize processing time and resource utilization.
- Technical Leadership & Implementation: Serve as a technical leader and subject matter expert on o9 integration. Lead and participate in the implementation of end-to-end SCM solutions. Provide hands-on support for troubleshooting and resolving integration issues.

Qualifications

Experience:
- Delivered a minimum of 2-3 comprehensive end-to-end SCM product implementations as a Technical Architect.
- At least 8 years of experience in the SDLC, with a key emphasis on architecting, designing, and developing solutions using big data technologies.

Technical Skills:
- Proficiency in SSIS packages and the Python, PySpark, and SQL programming languages.
- Experience with workflow management tools like Airflow and SSIS.
- Experience with Amazon Web Services (AWS), Azure, or Google Cloud infrastructures preferred.
- Experience working with Parquet, JSON, RESTful APIs, HDFS, Delta Lake, and query frameworks like Hive and Presto.
- Deep understanding and hands-on experience with writing orchestration workflows and/or API coding (knowledge of Apache NiFi is a plus).
- Good hands-on technical expertise in building scalable interfaces, performance tuning, data cleansing, and validation strategies.
- Experience working with version control platforms (e.g., GitHub, Azure DevOps).
- Experience with Delta Lake and PySpark is a must (an illustrative upsert sketch follows this listing).

Other Skills:
- Good to have experience in cloud data quality, source systems analysis, business rules validation, source-target mapping design, performance tuning, and high-volume data loads.
- Familiarity with Agile methodology.
- Proficient in the use of Microsoft Excel/PowerPoint/Visio for analysis and presentation.
- Excellent communication and interpersonal skills.
- Strong problem-solving and analytical abilities.
- Ability to work independently and as part of a team.
- Proactive and results-oriented.
- Ability to thrive in a fast-paced environment.
Within-country relocation support is available, and for candidates voluntarily moving internationally, some minimal support is offered through our Volunteer International Transfer Policy.

Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region is comprised of six business units, has more than 21,000 employees and operates in more than 27 countries including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and in offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets. Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular
Software & Applications, Technology & Digital
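Because the qualifications call out Delta Lake and PySpark as a must, here is a minimal, illustrative sketch of an idempotent Delta MERGE (upsert), a common pattern for inbound planning-data loads; it is not o9's actual interface code. The paths and join keys are hypothetical, and the session is assumed to be configured with the delta-spark package.

```python
# Illustrative sketch only: idempotent Delta Lake upsert. Paths and column
# names are hypothetical; assumes delta-spark is configured on the session.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert-demo").getOrCreate()

updates = spark.read.parquet("/staging/demand_updates/")  # hypothetical path

target = DeltaTable.forPath(spark, "/lake/demand/")       # hypothetical path
(
    target.alias("t")
    .merge(updates.alias("u"), "t.item_id = u.item_id AND t.loc_id = u.loc_id")
    .whenMatchedUpdateAll()      # refresh rows already present
    .whenNotMatchedInsertAll()   # insert new rows
    .execute()
)
```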

Posted 2 months ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Title: Sr. DevOps Engineer
Location: Ahmedabad (Onsite)
Experience: 8+ Years

Job Description:
- 6+ years of experience in an SRE role: deploying and maintaining applications, performance tuning, conducting application upgrades and patches, and supporting continuous integration and deployment tooling
- 4+ years of experience deploying and maintaining applications in AWS
- Experience with Docker or similar, and experience with Kubernetes or similar (an illustrative health-check sketch follows this listing)
- Experience supporting Hadoop or any other big data platform (Spark, PySpark/Delta Lake, NiFi, Airflow, Hive, HDFS, Kafka, Impala, etc.)

Skills:
- Ability to debug issues and solve problems
- Working knowledge of Jenkins, Ansible, Terraform, ArgoCD
- Knowledge of at least one scripting language (Bash, shell, PowerShell, Python, etc.)
- Administration of databases (MS SQL, Mongo, SSIS)
- Working knowledge of the Linux operating system
- Strong in operating system concepts, Linux, and troubleshooting
- Automation and cloud
- Passion to learn and adapt to new technology

We really value team spirit: transparency and frequent communication are key. At o9, this is not limited by hierarchy, distance, or function.

Education:
- Bachelor's degree in Computer Science, Software Engineering, Information Technology, Industrial Engineering, or Engineering Management
- Cloud (at least one) and Kubernetes administration certification
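As a flavor of the scripting this SRE role mentions, here is a minimal, illustrative health check using the official kubernetes Python client; the namespace is a hypothetical placeholder and the script assumes kubectl-style credentials are available.

```python
# Illustrative sketch only: flag pods that are not Running or Succeeded.
# Namespace is a hypothetical placeholder.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("data-platform").items:  # hypothetical ns
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"unhealthy pod: {pod.metadata.name} is {phase}")
```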

Posted 2 months ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Architect – Data Integration & Engineering
Location: Hybrid
Experience: 8+ years

Job Summary: We are seeking an experienced Data Architect specializing in data integration, data engineering, and hands-on coding to design, implement, and manage scalable and high-performance data solutions. The ideal candidate should have expertise in ETL/ELT, cloud data platforms, big data technologies, and enterprise data architecture.

Key Responsibilities:
1. Data Architecture & Design: Develop enterprise-level data architecture solutions, ensuring scalability, performance, and reliability. Design data models (conceptual, logical, physical) for structured and unstructured data. Define and implement data integration frameworks using industry-standard tools. Ensure compliance with data governance, security, and regulatory policies (GDPR, HIPAA, etc.).
2. Data Integration & Engineering: Implement ETL/ELT pipelines using Informatica, Talend, Apache NiFi, or DBT. Work with batch and real-time data processing tools such as Apache Kafka, Kinesis, and Apache Flink (an illustrative Kafka producer sketch follows this listing). Integrate and optimize data lakes, data warehouses, and NoSQL databases.
3. Hands-on Coding & Development: Write efficient and scalable code in Python, Java, or Scala for data transformation and processing. Optimize SQL queries, stored procedures, and indexing strategies for performance tuning. Build and maintain Spark-based data processing solutions in Databricks and Cloudera ecosystems. Develop workflow automation using Apache Airflow, Prefect, or similar tools.
4. Cloud & Big Data Technologies: Work with cloud platforms such as AWS (Redshift, Glue), Azure (Data Factory, Synapse), and GCP (BigQuery, Dataflow). Manage big data processing using Cloudera, Hadoop, HBase, and Apache Spark. Deploy containerized data services using Kubernetes and Docker. Automate infrastructure using Terraform and CloudFormation.
5. Governance, Security & Compliance: Implement data security, masking, and encryption strategies. Define RBAC (Role-Based Access Control) and IAM policies for data access. Work on metadata management, data lineage, and cataloging.

Required Skills & Technologies:
- ETL/ELT tools: Informatica, Talend, Apache NiFi, DBT
- Big data ecosystem: Cloudera, HBase, Apache Hadoop, Spark
- Data streaming: Apache Kafka, AWS Kinesis, Apache Flink
- Data warehouses: Snowflake, AWS Redshift, Google BigQuery, Azure Synapse
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra
- Languages: Python, Java, Scala
- Scripting: Shell, PowerShell, Bash
- Frameworks: PySpark, Spark SQL
- Cloud platforms: AWS, Azure, GCP
- Containerization & orchestration: Kubernetes, Docker
- CI/CD pipelines: Jenkins, GitHub Actions, Terraform, CloudFormation
- Compliance standards: GDPR, HIPAA, SOC 2
- Data cataloging: Collibra, Alation
- Access controls: IAM, RBAC, ABAC

Preferred Certifications:
- AWS Certified Data Analytics – Specialty
- Microsoft Certified: Azure Data Engineer Associate
- Google Professional Data Engineer
- Databricks Certified Data Engineer Associate/Professional
- Cloudera Certified Data Engineer
- Informatica Certified Professional

Education & Experience:
- Bachelor's/Master's degree in Computer Science, MCA, Data Engineering, or a related field
- 8+ years of experience in data architecture, integration, and engineering
- Proven expertise in designing and implementing enterprise-scale data solutions
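For the real-time ingestion item above, here is a minimal, illustrative Kafka producer using the kafka-python package; the broker address, topic name, and message shape are hypothetical placeholders.

```python
# Illustrative sketch only: publish a JSON change event to a Kafka topic.
# Broker and topic names are hypothetical placeholders.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",  # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders.cdc", {"order_id": 42, "status": "SHIPPED"})
producer.flush()  # block until the message is actually sent
```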

Posted 2 months ago

Apply

0.0 - 4.0 years

0 Lacs

Hyderabad, Telangana

On-site

Job Information
Date Opened: 05/23/2025
Industry: Information Technology
Job Type: Full time
Work Experience: 4-5 years
City: Hyderabad
State/Province: Telangana
Country: India
Zip/Postal Code: 500059

Job Description
KMC is seeking a motivated and adaptable NiFi/Astro/ETL Engineer with 3-4 years of experience in ETL workflows, data integration, and data pipeline management. The ideal candidate will thrive in an operational setting, collaborate well with team members, and demonstrate a readiness to learn and embrace new technologies. This role will focus on the development, maintenance, and support of ETL processes to ensure efficient data workflows and high-quality deliverables.

Roles and Responsibilities:
- Design, implement, and maintain ETL workflows using Apache NiFi, Astro, and other relevant tools.
- Support data extraction, transformation, and loading (ETL) processes to ensure efficient data flow across systems.
- Collaborate with data teams to ensure seamless integration of data from various sources, supporting data consistency and availability.
- Configure and manage data ingestion processes from both structured and unstructured data sources.
- Monitor ETL processes and data pipelines, troubleshooting and resolving issues in real time to ensure data accuracy and availability (an illustrative monitoring sketch follows this listing).
- Provide on-call support as necessary to maintain smooth data operations.
- Work closely with cross-functional teams to gather requirements, refine workflows, and ensure optimal data solutions.
- Contribute actively to team discussions and solution planning, and provide input for continuous improvement.
- Stay updated with industry trends and emerging technologies in data integration and ETL practices.
- Show willingness to learn and adapt to new tools and methodologies as required by project or team needs.

Requirements
- 3-4 years of experience in ETL workflows, specifically with Apache NiFi and Astro (or similar platforms).
- Proficiency in SQL and experience with data warehousing concepts.
- Familiarity with scripting languages (e.g., Python, shell scripting) is a plus.
- Basic understanding of cloud platforms (AWS, Azure, or Google Cloud).

Soft Skills:
- Strong problem-solving abilities with an operational mindset.
- Team player with effective communication skills to collaborate well within and across teams.
- Quick learner, adaptable to new tools, and willing to take on challenges with a positive attitude.

Benefits
- Insurance - Family Term Insurance
- PF
- Paid Time Off - 20 days
- Holidays - 10 days
- Flexi timing
- Competitive Salary
- Diverse & Inclusive workspace
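For the monitoring responsibility above, one lightweight approach is to poll NiFi's REST API. The sketch below is illustrative only: the host is a placeholder, and the exact endpoint and response fields should be verified against your NiFi version's API documentation.

```python
# Illustrative sketch only: poll NiFi's controller status as a basic health
# check. Host is hypothetical; verify endpoint/fields for your NiFi version.
import requests

NIFI = "http://nifi-host:8080"  # hypothetical host

resp = requests.get(f"{NIFI}/nifi-api/flow/status", timeout=10)
resp.raise_for_status()
status = resp.json().get("controllerStatus", {})

print("queued flowfiles:", status.get("flowFilesQueued"))
print("active threads:  ", status.get("activeThreadCount"))
```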

Posted 2 months ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Skills: Data Engineer, Python, Spark, Cloudera, on-premise, Azure, Snowflake, Kafka

Overview Of The Company
Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview
The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About The Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities
- End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow (an illustrative batch-pipeline sketch follows this listing).
- Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution.
- Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
- Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, Event Hubs, and GCP Dataproc, Dataflow, BigQuery.
- CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes
- Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
- Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: A demonstrated passion for staying up-to-date with emerging data technologies and a willingness to adapt to new tools.
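As an illustration of the reusable pipeline components this role describes, here is a small PySpark batch step that deduplicates a Hive table to the latest record per key and writes partitioned Parquet; the table, column, and path names are hypothetical.

```python
# Illustrative sketch only: a reusable batch step. Names are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql.window import Window
from pyspark.sql.functions import row_number, col

spark = (
    SparkSession.builder.appName("batch-demo").enableHiveSupport().getOrCreate()
)

def dedupe_latest(df: DataFrame, key: str, ts: str) -> DataFrame:
    """Keep only the most recent record per key: a common reusable step."""
    w = Window.partitionBy(key).orderBy(col(ts).desc())
    return df.withColumn("rn", row_number().over(w)).filter("rn = 1").drop("rn")

raw = spark.table("raw_db.user_events")  # hypothetical Hive table
clean = dedupe_latest(raw, key="user_id", ts="event_ts")
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "/lake/user_events/"                 # hypothetical output path
)
```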

Posted 2 months ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Country: India
Location: Capital Cyberscape, 2nd Floor, Ullahwas, Sector 59, Gurugram, Haryana 122102
Role: Data Engineer
Location: Gurgaon
Full/Part-time: Full Time

Build a career with confidence.

Summary
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do. This is an established Data Science & Analytics role: creating data mining architectures/models/protocols, statistical reporting, and data analysis methodologies to identify trends in large data sets.

About The Role
- Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes
- Help streamline our data science workflows, adding value to our product offerings and building out the customer lifecycle and retention models
- Work closely with the data science and business intelligence teams to develop data models and pipelines for research, reporting, and machine learning
- Be an advocate for best practices and continued learning

Key Responsibilities
- Expert coding proficiency on Snowflake
- Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools like NiFi, SnapLogic, and DBT
- Familiarity with building data pipelines that leverage the full power and best practices of Snowflake, as well as how to integrate common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality); an illustrative loading sketch follows this listing
- Designing data ingestion and orchestration pipelines using NiFi, AWS, Kafka, Spark, and Control-M
- Establish strategies for data extraction, ingestion, transformation, automation, and consumption

Role Responsibilities
- Experience in data lake concepts with structured, semi-structured, and unstructured data
- Experience in strategies for data testing, data quality, code quality, and code coverage
- Hands-on expertise with Snowflake, preferably with SnowPro Core certification
- Develop a data model/architecture providing an integrated data architecture that enables business services with strict quality management and provides the basis for future knowledge management processes
- Act as the interface between business and development teams to guide the solution end-to-end
- Define tools used for design specifications, data modelling, and data management capabilities, with exploration into standard tools
- Good understanding of data technologies including RDBMS and NoSQL databases

Requirements
- A minimum of 6 years of prior relevant experience
- Strong exposure to data modelling, data access patterns, and SQL
- Knowledge of data storage fundamentals and networking

Good to Have
- Exposure to AWS tools/services
- Ability to conduct testing at different levels and stages of the project
- Knowledge of scripting languages like Java and Python

Education
Bachelor's degree in computer systems, Information Technology, Analytics, or a related business area.

Benefits
We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary.
- Have peace of mind and body with our health insurance
- Drive forward your career through professional development opportunities
- Achieve your personal goals with our Employee Assistance Programme

Our commitment to you
Our greatest assets are the expertise, creativity and passion of our employees.
We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way.

Join us and make a difference. Apply Now!

Carrier is An Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.

Job Applicant's Privacy Notice: Click on this link to read the Job Applicant's Privacy Notice
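For the Snowflake loading work described in this listing, here is a minimal, illustrative sketch using the snowflake-connector-python package to run a COPY INTO from an external stage; the account, credentials, warehouse, stage, and table names are hypothetical placeholders.

```python
# Illustrative sketch only: load staged CSV files into a Snowflake table.
# Account, credentials, stage, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="demo-account",   # hypothetical account identifier
    user="demo_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()
try:
    # COPY INTO pulls files already landed in an external stage
    cur.execute(
        "COPY INTO events FROM @raw_stage/events/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    print(cur.fetchall())  # per-file load results
finally:
    cur.close()
    conn.close()
```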

Posted 2 months ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Country: India
Location: Capital Cyberscape, 2nd Floor, Ullahwas, Sector 59, Gurugram, Haryana 122102
Role: Data Engineer
Location: Gurgaon
Full/Part-time: Full Time

Build a career with confidence.

Summary
Carrier Global Corporation, a global leader in intelligent climate and energy solutions, is committed to creating solutions that matter for people and our planet for generations to come. From the beginning, we've led in inventing new technologies and entirely new industries. Today, we continue to lead because we have a world-class, diverse workforce that puts the customer at the center of everything we do. This is an established Data Science & Analytics role: creating data mining architectures/models/protocols, statistical reporting, and data analysis methodologies to identify trends in large data sets.

About The Role
- Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes
- Help streamline our data science workflows, adding value to our product offerings and building out the customer lifecycle and retention models
- Work closely with the data science and business intelligence teams to develop data models and pipelines for research, reporting, and machine learning
- Be an advocate for best practices and continued learning

Key Responsibilities
- Expert coding proficiency on Snowflake
- Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools like NiFi, SnapLogic, and DBT
- Familiarity with building data pipelines that leverage the full power and best practices of Snowflake, as well as how to integrate common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality)
- Designing data ingestion and orchestration pipelines using NiFi, AWS, Kafka, Spark, and Control-M
- Establish strategies for data extraction, ingestion, transformation, automation, and consumption

Role Responsibilities
- Experience in data lake concepts with structured, semi-structured, and unstructured data
- Experience in strategies for data testing, data quality, code quality, and code coverage
- Hands-on expertise with Snowflake, preferably with SnowPro Core certification
- Develop a data model/architecture providing an integrated data architecture that enables business services with strict quality management and provides the basis for future knowledge management processes
- Act as the interface between business and development teams to guide the solution end-to-end
- Define tools used for design specifications, data modelling, and data management capabilities, with exploration into standard tools
- Good understanding of data technologies including RDBMS and NoSQL databases

Requirements
- A minimum of 3 years of prior relevant experience
- Strong exposure to data modelling, data access patterns, and SQL
- Knowledge of data storage fundamentals and networking

Good to Have
- Exposure to AWS tools/services
- Ability to conduct testing at different levels and stages of the project
- Knowledge of scripting languages like Java and Python

Education
Bachelor's degree in computer systems, Information Technology, Analytics, or a related business area.

Benefits
We are committed to offering competitive benefits programs for all of our employees and enhancing our programs when necessary.
- Have peace of mind and body with our health insurance
- Drive forward your career through professional development opportunities
- Achieve your personal goals with our Employee Assistance Programme

Our commitment to you
Our greatest assets are the expertise, creativity and passion of our employees.
We strive to provide a great place to work that attracts, develops and retains the best talent, promotes employee engagement, fosters teamwork and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine to growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback and always challenging ourselves to do better. This is The Carrier Way.

Join us and make a difference. Apply Now!

Carrier is An Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age or any other federally protected class.

Job Applicant's Privacy Notice: Click on this link to read the Job Applicant's Privacy Notice

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About GSPANN
GSPANN is a global IT services and consultancy provider headquartered in Milpitas, California (U.S.A.). With five global delivery centers across the globe, GSPANN provides digital solutions that support the customer buying journeys of B2B and B2C brands worldwide. With a strong focus on innovation and client satisfaction, GSPANN delivers cutting-edge solutions that drive business success and operational excellence. GSPANN helps retail, finance, manufacturing, and high-technology brands deliver competitive customer experiences and increased revenues through our solution delivery, technologies, practices, and operations for each client. For more information, visit www.gspann.com

We are looking for a passionate Data Modeler to build, optimize, and maintain conceptual, logical, and physical database models. The candidate will turn data into information, information into insight, and insight into business decisions.

Job Position: Data Modeler
Experience: 5+ years
Location: Hyderabad, Gurugram
Skills: Data Modeling, Data Analysis, Cloud, and SQL

Responsibilities:
- Design and develop conceptual, logical, and physical data models for databases, data warehouses, and data lakes (a small physical-model sketch follows this listing).
- Translate business requirements into data structures that fit both OLTP (Online Transaction Processing) and OLAP (Online Analytical Processing) environments.

Requirements:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
- 3+ years of experience as a Data Modeler or in a related role.
- Proficiency in data modeling tools (Erwin, ER/Studio, SQL Developer Data Modeler).
- Strong experience with SQL and database technologies (Oracle, SQL Server, MySQL, PostgreSQL).
- Familiarity with ETL tools (Informatica, Talend, Apache NiFi) and data integration techniques.
- Knowledge of data warehousing concepts and data lake architecture.
- Understanding of Big Data technologies (Hadoop, Spark) is a plus.
- Experience with cloud platforms like AWS, GCP, or Azure.

Why Choose GSPANN?
At GSPANN, we don’t just serve our clients—we co-create. The GSPANNians are passionate technologists who thrive on solving the toughest business challenges, delivering trailblazing innovations for marquee clients. This collaborative spirit fuels a culture where every individual is encouraged to sharpen their skills, feed their curiosity, and take ownership to learn, experiment, and succeed. We believe in celebrating each other’s successes—big or small—and giving back to the communities we call home. If you’re ready to push boundaries and be part of a close-knit team that’s shaping the future of tech, we invite you to carry forward the baton of innovation with us. Let’s Co-Create the Future—Together.

Discover Your Inner Technologist: Explore and expand the boundaries of tech innovation without the fear of failure.
Accelerate Your Learning: Shape your career while scripting the future of tech. Seize the ample learning opportunities to grow at a rapid pace.
Feel Included: At GSPANN, everyone is welcome. Age, gender, culture, and nationality do not matter here; what matters is YOU.
Inspire and Be Inspired: When you work with the experts, you raise your game. At GSPANN, you’re in the company of marquee clients and extremely talented colleagues.
Enjoy Life: We love to celebrate milestones and victories, big or small. Ever so often, we come together as one large GSPANN family.
Give Back: Together, we serve communities.
We take steps, small and large, so we can do good for the environment, weaving sustainability and social change into our endeavors. We invite you to carry forward the baton of innovation in technology with us. Let’s Co-Create.

GSPANN | Consulting Services, Technology Services, and IT Services Provider
GSPANN provides consulting services, technology services, and IT services to e-commerce businesses with high technology, manufacturing, and financial services.
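To make the modeling responsibility concrete, here is a tiny, illustrative physical star-schema model expressed with SQLAlchemy's declarative syntax; this is just one way to capture a physical design in code (the listing's own tools are Erwin, ER/Studio, and SQL Developer Data Modeler). All table and column names are hypothetical.

```python
# Illustrative sketch only: a minimal star schema. All names hypothetical.
from sqlalchemy import Column, Integer, String, Date, Numeric, ForeignKey
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class DimProduct(Base):
    """Dimension table: descriptive attributes used for slicing."""
    __tablename__ = "dim_product"
    product_key = Column(Integer, primary_key=True)  # surrogate key
    sku = Column(String(32), nullable=False)
    category = Column(String(64))

class FactSales(Base):
    """Fact table: additive measures keyed to the dimensions."""
    __tablename__ = "fact_sales"
    sales_key = Column(Integer, primary_key=True)
    product_key = Column(Integer, ForeignKey("dim_product.product_key"))
    sale_date = Column(Date, nullable=False)
    amount = Column(Numeric(12, 2), nullable=False)
```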

Posted 2 months ago

Apply

2.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Advisory - Data and Analytics – Staff – Data Engineer (Scala)

EY's Advisory Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated advisory services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Advisory Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for Big Data experts with expertise in the Financial Services domain and hands-on experience with the big data ecosystem.

Primary Skills And Key Responsibilities
- Strong knowledge of Spark: a good understanding of the Spark framework and performance tuning (an illustrative tuning sketch follows this listing).
- Proficiency in Scala & SQL.
- Good exposure to one of the cloud technologies: GCP, Azure, or AWS.
- Hands-on experience in designing, building, and maintaining scalable data pipelines and solutions to manage and process large datasets efficiently.
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent communicator (written and verbal, formal and informal).
- Ability to multi-task under pressure and work independently with minimal supervision.
- Strong verbal and written communication skills.
- Must be a team player and enjoy working in a cooperative and collaborative team environment.
- Adaptable to new technologies and standards.
- Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.

Nice To Have Skills
- Design, develop, and deploy robust and scalable data pipelines using GCP services such as BigQuery, Dataflow, Data Composer/Cloud Composer (Airflow), and related technologies.
- Good experience in GCP technology areas: Datastore, BigQuery, Cloud Storage, Persistent Disk, IAM, roles, projects, and organizations.
- Understanding of and familiarity with all Hadoop ecosystem components and Hadoop administrative fundamentals.
- Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB.
- Experience in HDFS, Hive, Impala.
- Experience in schedulers like Airflow, NiFi, etc.
- Experience in Hadoop clustering and auto-scaling.
- Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis.
- Define and develop client-specific best practices around data management within a Hadoop environment on Azure cloud.

To qualify for the role, you must have
- BE/BTech/MCA/MBA
- Minimum 2 years of hands-on experience in one or more relevant areas.
Total of 1-3 years of industry experience.

Ideally, you’ll also have
- Experience in the Banking and Capital Markets domains.

Skills And Attributes For Success
- Use an issue-based approach to deliver growth, market, and portfolio strategy engagements for corporates.
- Strong communication, presentation, and team-building skills, and experience in producing high-quality reports, papers, and presentations.
- Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

What We Look For
- A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
- An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
- Opportunities to work with EY Advisory practices globally, with leading businesses across a range of industries.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
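For the Spark performance-tuning emphasis in this listing, here is a minimal, illustrative PySpark sketch of two routine tuning moves: broadcasting a small dimension table in a join and repartitioning on the write key. The role asks for Scala, where the same API calls exist; all table and path names are hypothetical.

```python
# Illustrative sketch only: broadcast join + repartitioned write.
# Table and path names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

facts = spark.read.parquet("/lake/transactions/")   # large fact table
dims = spark.read.parquet("/lake/branch_lookup/")   # small lookup table

# Broadcasting the small side avoids shuffling the large table.
joined = facts.join(broadcast(dims), "branch_id")

# Repartitioning on the write key reduces skewed output files.
(
    joined.repartition("txn_date")
    .write.mode("overwrite")
    .partitionBy("txn_date")
    .parquet("/lake/enriched_txns/")
)
```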

Posted 2 months ago

Apply

2.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Advisory - Data and Analytics – Staff – Data Engineer (Scala)

EY's Advisory Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated advisory services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Advisory Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for Big Data experts with expertise in the Financial Services domain and hands-on experience with the big data ecosystem.

Primary Skills And Key Responsibilities
- Strong knowledge of Spark: a good understanding of the Spark framework and performance tuning.
- Proficiency in Scala & SQL.
- Good exposure to one of the cloud technologies: GCP, Azure, or AWS.
- Hands-on experience in designing, building, and maintaining scalable data pipelines and solutions to manage and process large datasets efficiently.
- Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
- Excellent communicator (written and verbal, formal and informal).
- Ability to multi-task under pressure and work independently with minimal supervision.
- Strong verbal and written communication skills.
- Must be a team player and enjoy working in a cooperative and collaborative team environment.
- Adaptable to new technologies and standards.
- Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.

Nice To Have Skills
- Design, develop, and deploy robust and scalable data pipelines using GCP services such as BigQuery, Dataflow, Data Composer/Cloud Composer (Airflow), and related technologies.
- Good experience in GCP technology areas: Datastore, BigQuery, Cloud Storage, Persistent Disk, IAM, roles, projects, and organizations.
- Understanding of and familiarity with all Hadoop ecosystem components and Hadoop administrative fundamentals.
- Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB.
- Experience in HDFS, Hive, Impala.
- Experience in schedulers like Airflow, NiFi, etc.
- Experience in Hadoop clustering and auto-scaling.
- Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis.
- Define and develop client-specific best practices around data management within a Hadoop environment on Azure cloud.

To qualify for the role, you must have
- BE/BTech/MCA/MBA
- Minimum 2 years of hands-on experience in one or more relevant areas.
Total of 1-3 years of industry experience.

Ideally, you’ll also have
- Experience in the Banking and Capital Markets domains.

Skills And Attributes For Success
- Use an issue-based approach to deliver growth, market, and portfolio strategy engagements for corporates.
- Strong communication, presentation, and team-building skills, and experience in producing high-quality reports, papers, and presentations.
- Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

What We Look For
- A team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
- An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide.
- Opportunities to work with EY Advisory practices globally, with leading businesses across a range of industries.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 months ago

Apply

2.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY-Advisory - Data and Analytics – Staff – Data Engineer (Scala)

EY’s Advisory Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated advisory services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Advisory Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity
We’re looking for Senior Big Data experts with expertise in the Financial Services domain and hands-on experience with the Big Data ecosystem.

Primary Skills And Key Responsibilities
Strong knowledge of Spark, a good understanding of the Spark framework, and performance tuning.
Proficiency in Scala & SQL.
Good exposure to one of the cloud technologies - GCP/Azure/AWS.
Hands-on experience in designing, building, and maintaining scalable data pipelines and solutions to manage and process large datasets efficiently.
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
Excellent communicator (written and verbal, formal and informal).
Ability to multi-task under pressure and work independently with minimal supervision.
Strong verbal and written communication skills.
Must be a team player and enjoy working in a cooperative and collaborative team environment.
Adaptable to new technologies and standards.
Participate in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.

Nice To Have Skills
Design, develop, and deploy robust and scalable data pipelines using GCP services such as BigQuery, Dataflow, Cloud Composer (Airflow) and related technologies.
Good experience in GCP technology areas: Datastore, BigQuery, Cloud Storage, Persistent Disk, IAM, Roles, Projects, and Organization.
Understanding of and familiarity with all Hadoop ecosystem components and Hadoop administration fundamentals.
Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB.
Experience in HDFS, Hive, Impala.
Experience with schedulers like Airflow, NiFi, etc.
Experienced in Hadoop clustering and auto-scaling.
Develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis.
Define and develop client-specific best practices around data management within a Hadoop environment on Azure cloud.

To qualify for the role, you must have
BE/BTech/MCA/MBA
A minimum of 2 years of hands-on experience in one or more relevant areas.
A total of 1-3 years of industry experience.

Ideally, you’ll also have
Experience in the Banking and Capital Markets domains.

Skills And Attributes For Success
Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates.
Strong communication, presentation and team-building skills, and experience in producing high-quality reports, papers, and presentations.
Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.
An opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide.
Opportunities to work with EY Advisory practices globally with leading businesses across a range of industries.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 2 months ago

Apply

4.0 - 9.0 years

5 - 8 Lacs

Gurugram

Work from Office

Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
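For illustration, a minimal sketch of the self-healing automation this role calls for, assuming a NiFi instance exposed on its default API port and managed as a systemd unit (the service name, URL, and escalation step are hypothetical):

import subprocess
import requests

SERVICE = "nifi"  # hypothetical systemd unit name
HEALTH_URL = "http://localhost:8080/nifi-api/system-diagnostics"  # assumed NiFi API endpoint

def healthy() -> bool:
    # Treat a non-200 response or any connection error as unhealthy.
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

if not healthy():
    # Attempt an automatic restart; surface the failure for on-call escalation.
    result = subprocess.run(["systemctl", "restart", SERVICE], capture_output=True)
    if result.returncode != 0:
        print(f"Restart of {SERVICE} failed: {result.stderr.decode()}")

In practice a check like this would run from cron or a Kubernetes CronJob and feed an alerting pipeline such as Prometheus Alertmanager rather than printing to stdout.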

Posted 2 months ago

Apply

7 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Lead Data Engineer (Hadoop, Hive, Python, SQL, Spark or PySpark)
Location: Hyderabad
Experience: 7+ Years

Role and responsibilities
Strong technical, analytical, and problem-solving skills
Strong organizational skills, with the ability to work autonomously as well as in a team-based environment
Data pipeline framework development

Technical skills requirements
CDH on-premise for data processing and extraction
Ability to own and deliver on large, multi-faceted projects
Fluency in complex SQL and experience with RDBMSs
Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive, and NoSQL DBs
Experience designing and building big data pipelines
Experience working on large-scale, distributed systems
Experience working with Databricks would be an added advantage
Strong hands-on experience with programming languages such as PySpark, Scala with Spark, and Python
Exposure to various ETL and Business Intelligence tools
Experience in shell scripting to automate pipeline execution
Solid grounding in Agile methodologies
Experience with git and other source control systems
Strong communication and presentation skills

Regards,
Manvendra Singh
manvendra.singh1@incedoinc.com
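For a flavour of the day-to-day work, a minimal PySpark sketch of a Hive-backed pipeline step (the database, table, and column names are illustrative, not from the posting):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets spark.table() resolve tables registered in the metastore.
spark = (SparkSession.builder
         .appName("daily_aggregation")
         .enableHiveSupport()
         .getOrCreate())

orders = spark.table("sales_db.orders")  # hypothetical Hive table
daily = (orders
         .filter(F.col("status") == "COMPLETE")
         .groupBy("order_date")
         .agg(F.sum("amount").alias("total_amount")))

# Write the aggregate back as Parquet for downstream consumers.
daily.write.mode("overwrite").parquet("/data/curated/daily_totals")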

Posted 2 months ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

The Branding Club, a dynamic and forward-thinking branding agency, is embarking on an ambitious project to redefine the boundaries of digital branding. In collaboration with Hard.Coded, a leader in creating software development teams, we are assembling a high-performance team of around 17-20 FTE to deliver a groundbreaking product from the ground up. This project is not just about meeting expectations but exceeding them, setting new benchmarks for quality and innovation in the industry.

Role Overview
We are seeking a highly skilled and experienced Data Engineer to join our dynamic team. In this role, you will be crucial in managing and optimizing the data infrastructure required to support our cutting-edge applications. You will work closely with our development and product teams to ensure our data architecture aligns with the company's goals and supports the needs of our applications.

Key Responsibilities
Design and implement data pipelines to collect, process, and store large datasets.
Develop and maintain scalable ETL (Extract, Transform, Load) processes.
Collaborate with software engineers and data scientists to understand data requirements and deliver solutions.
Optimize and maintain data architecture, ensuring data quality, integrity, and availability.
Implement data governance and security measures to protect sensitive information.
Monitor and troubleshoot data pipelines to ensure reliable operation.
Stay up-to-date with the latest industry trends and technologies in data engineering.

Qualifications
Proven experience with SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
Experience with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
Strong experience with ETL tools and frameworks (e.g., Apache NiFi, Airflow, Talend).
Proficiency in programming languages such as Python, Java, or Scala.
Familiarity with cloud platforms (e.g., AWS, Azure, GCP) and related data services.
Knowledge of big data technologies (e.g., Hadoop, Spark, Kafka).
Strong understanding of data modeling and database design.
Excellent problem-solving skills and attention to detail.
Strong communication skills and the ability to work effectively in a team environment.

Nice to Have (but not mandatory)
Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
Knowledge of machine learning pipelines and model deployment.
Familiarity with data visualization tools (e.g., Tableau, Power BI).

Additional Requirements
To be considered for this position, candidates must complete an assessment.
Preferred start date around 1st Sep.
This position is onsite around Kennedy Bridge in Mumbai.
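As a sketch of the orchestration side of this role, a minimal Airflow DAG with a single Python task (the DAG id, schedule, and callable are illustrative; the schedule argument assumes Airflow 2.4+, where older releases use schedule_interval):

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the actual extract/transform/load logic.
    print("running ETL step")

with DAG(
    dag_id="daily_etl",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)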

Posted 2 months ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities
Work closely with clients to understand their business requirements and design data solutions that meet their needs.
Develop and implement end-to-end data solutions that include data ingestion, data storage, data processing, and data visualization components.
Design and implement data architectures that are scalable, secure, and compliant with industry standards.
Work with data engineers, data analysts, and other stakeholders to ensure the successful delivery of data solutions.
Participate in presales activities, including solution design, proposal creation, and client presentations.
Act as a technical liaison between the client and our internal teams, providing technical guidance and expertise throughout the project lifecycle.
Stay up-to-date with industry trends and emerging technologies related to data architecture and engineering.
Develop and maintain relationships with clients to ensure their ongoing satisfaction and identify opportunities for additional business.
Understand the entire end-to-end AI life cycle, from ingestion to inferencing, along with operations.
Exposure to emerging Gen AI technologies.
Exposure to the Kubernetes platform, with hands-on experience deploying and containerizing applications.
Good knowledge of data governance, data warehousing and data modelling.

Requirements
Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
10+ years of experience as a Data Solution Architect, with a proven track record of designing and implementing end-to-end data solutions.
Strong technical background in data architecture, data engineering, and data management.
Extensive experience working with any of the Hadoop flavours, preferably Data Fabric.
Experience with presales activities such as solution design, proposal creation, and client presentations.
Familiarity with cloud-based data platforms (e.g., AWS, Azure, Google Cloud) and related technologies such as data warehousing, data lakes, and data streaming.
Experience with Kubernetes and the Gen AI tools and tech stack.
Excellent communication and interpersonal skills, with the ability to effectively communicate technical concepts to both technical and non-technical audiences.
Strong problem-solving skills, with the ability to analyze complex data systems and identify areas for improvement.
Strong project management skills, with the ability to manage multiple projects simultaneously and prioritize tasks effectively.

Tools and Tech Stack
Hadoop Ecosystem, Data Architecture and Engineering: Preferred: Cloudera Data Platform (CDP) or Data Fabric. Tools: HDFS, Hive, Spark, HBase, Oozie.
Data Warehousing: Cloud-based: Azure Synapse, Amazon Redshift, Google BigQuery, Snowflake, and Azure Databricks. On-premises: Teradata, Vertica.
Data Integration and ETL Tools: Apache NiFi, Talend, Informatica, Azure Data Factory, Glue.
Cloud Platforms: Azure (preferred for its Data Services and Synapse integration), AWS, or GCP.
Cloud-native Components: Data Lakes: Azure Data Lake Storage, AWS S3, or Google Cloud Storage. Data Streaming: Apache Kafka, Azure Event Hubs, AWS Kinesis.
HPE Platforms: Data Fabric, AI Essentials or Unified Analytics, HPE MLDM and HPE MLDE.
AI and Gen AI Technologies: AI Lifecycle Management and MLOps: MLflow, Kubeflow, Azure ML, SageMaker, or Ray. Inference tools: TensorFlow Serving, KServe, Seldon. Generative AI frameworks: Hugging Face Transformers, LangChain. Tools: OpenAI API (e.g., GPT-4).
Kubernetes Orchestration and Deployment: Platforms: Azure Kubernetes Service (AKS), Amazon EKS, Google Kubernetes Engine (GKE), or open-source Kubernetes. Tools: Helm.
CI/CD for Data Pipelines and Applications: Jenkins, GitHub Actions, GitLab CI, or Azure DevOps.
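By way of example, a minimal Kafka producer sketch using the kafka-python package (the broker address, topic, and payload are illustrative):

import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed local broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("sensor-readings", {"device": "pump-7", "temp_c": 81.4})
producer.flush()  # block until the buffered record is delivered

A managed equivalent (Azure Event Hubs or AWS Kinesis, as listed above) would swap the client library but keep the same produce-and-flush shape.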

Posted 2 months ago

Apply

7 - 9 years

0 Lacs

Pune, Maharashtra, India

On-site

About Improzo
At Improzo (Improve + Zoe; meaning Life in Greek), we believe in improving life by empowering our customers. Founded by seasoned industry leaders, we are laser-focused on delivering quality-led commercial analytical solutions to our clients. Our dedicated team of experts in commercial data, technology, and operations has been evolving and learning together since our inception. Here, you won't find yourself confined to a cubicle; instead, you'll be navigating open waters, collaborating with brilliant minds to shape the future. You will work with leading Life Sciences clients, seasoned leaders and carefully chosen peers like you! People are at the heart of our success, so we have defined our CARE values framework with a lot of effort, and we use it as our guiding light in everything we do. We CARE!
Customer-Centric: Client success is our success. Prioritize customer needs and outcomes in every action.
Adaptive: Agile and innovative, with a growth mindset. Pursue bold and disruptive avenues that push the boundaries of possibilities.
Respect: Deep respect for our clients & colleagues. Foster a culture of collaboration and act with honesty, transparency, and ethical responsibility.
Execution: Laser-focused on quality-led execution; we deliver! Strive for the highest quality in our services, solutions, and customer experiences.

About The Role
We are seeking an experienced and highly skilled Data Architect to lead a strategic project focused on Pharma Commercial Data Management Operations. This role demands a professional with 7-9 years of experience in data architecture, data management, ETL, data transformation, and governance, with an emphasis on providing scalable and secure data solutions for the pharmaceutical sector. The ideal candidate will bring a deep understanding of data architecture principles, experience with cloud platforms such as Snowflake, and a solid background in driving commercial data management projects. If you're passionate about leading impactful data initiatives, optimizing data workflows, and supporting the pharmaceutical industry's data needs, we invite you to apply.

Key Responsibilities
Lead Data Architecture and Strategy: Design, develop, and implement the overall data architecture for commercial data management operations within the pharmaceutical business. Lead the design and operations of scalable and secure data systems that meet the specific needs of the pharma commercial team, including marketing, sales, and operations. Define and implement best practices for data architecture, ensuring alignment with business goals and technical requirements. Develop a strategic data roadmap for efficient data management and integration across multiple platforms and systems.
Data Integration, ETL & Transformation: Oversee the ETL (Extract, Transform, Load) processes to ensure seamless integration and transformation of data from multiple sources, including commercial, sales, marketing, and regulatory databases. Collaborate with data engineers and developers to design efficient and automated data pipelines for processing large volumes of data. Lead efforts to optimize data workflows and improve data transformation processes to enhance reporting and analytics capabilities.
Data Governance & Quality Assurance: Implement and enforce data governance standards across the data management ecosystem, ensuring the consistency, accuracy, and integrity of commercial data. Develop and maintain policies for data stewardship, data security, and compliance with industry regulations, such as HIPAA, GDPR, and other pharma-specific compliance requirements. Work closely with business stakeholders to ensure the proper definition of master data and reference data standards.
Cloud Platform Expertise (Snowflake (critical to have), AWS, Azure): Lead the adoption and utilization of cloud-based data platforms, particularly Snowflake, to support data warehousing, analytics, and business intelligence needs. Collaborate with cloud infrastructure teams to ensure efficient management of data storage, compute resources, and performance optimization within cloud environments. Stay up-to-date with the latest cloud technologies, such as Snowflake, AWS, Azure, or Google Cloud (optional), and evaluate opportunities for incorporating them into data architectures.
Collaboration with Cross-functional Teams: Work closely with business leaders in commercial operations, analytics, and IT teams to understand their data needs and provide strategic data solutions that enhance business operations. Collaborate with data scientists, analysts, and business intelligence teams to ensure data is available for reporting, analysis, and decision-making. Facilitate communication between IT, business stakeholders, and external vendors to ensure data architecture solutions align with business requirements.
Continuous Improvement & Innovation: Drive continuous improvement efforts to optimize data pipelines, data storage, and analytics workflows. Identify opportunities to improve data quality, streamline processes, and enhance the efficiency of data management operations. Advocate for the adoption of new data management technologies, tools, and methodologies to improve data processing, security, and integration.
Leadership and Mentorship: Lead and mentor a team of data engineers, analysts, and other technical resources, fostering a collaborative and innovative work environment. Provide leadership in setting clear goals, performance metrics, and expectations for the team. Offer guidance on data architecture best practices, ensuring all team members are aligned with the organization’s data strategy.

Required Qualifications
Bachelor’s degree in Computer Science, Data Science, Information Systems, or a related field.
7-9 years of experience in data architecture, data management, and data governance, with a proven track record of leading commercial data management operations projects.
Extensive experience in data integration, ETL, and data transformation processes, including familiarity with tools like Informatica, Talend, or Apache NiFi.
Strong expertise with cloud platforms, particularly Snowflake, AWS, Azure, or Google Cloud.
Strong knowledge of data governance frameworks, including data security, privacy regulations, and compliance standards in the pharmaceutical industry (e.g., HIPAA, GDPR).
Hands-on experience in designing scalable and efficient data architecture solutions to support business intelligence, analytics, and reporting needs.
Proficient in SQL and other query languages, with a solid understanding of database management and optimization techniques.
Ability to communicate technical concepts effectively to non-technical stakeholders and align data strategies with business goals.

Preferred Qualifications
Experience in the pharmaceutical or life sciences sector, particularly in commercial data management, sales, marketing, or operations.
Certification or formal training in cloud platforms (e.g., Snowflake, AWS, Azure) or data management frameworks.
Familiarity with data science methodologies, machine learning, and advanced analytics tools.
Knowledge of Agile methodologies for managing data projects.

Key Skills
Data Architecture & Design
Cloud Platforms (Snowflake – critical to have)
Data Governance & Quality Assurance
ETL & Data Transformation
Data Integration & Pipelines
Pharmaceutical Data Management (Preferred)
SQL & Database Optimization
Leadership & Mentorship
Business & Technical Collaboration

Benefits
Competitive salary and benefits package.
Opportunity to work on cutting-edge tech projects, transforming the life sciences industry.
Collaborative and supportive work environment.
Opportunities for professional development and growth.
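For a flavour of the Snowflake work described above, a minimal connectivity sketch using the snowflake-connector-python package (the account, credentials, and table are placeholders):

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",  # placeholder
    user="etl_user",       # placeholder
    password="***",
    warehouse="ETL_WH",
    database="COMMERCIAL",
    schema="CURATED",
)
cur = conn.cursor()
try:
    cur.execute("SELECT COUNT(*) FROM sales_fact")  # hypothetical table
    print(cur.fetchone()[0])
finally:
    cur.close()
    conn.close()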

Posted 2 months ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

1. Require candidates with 5+ years of experience in data.
2. Candidates must have skills in Apache NiFi.
3. Desirable skills: Kafka, Airflow, Impala, Snowflake.

Posted 2 months ago

Apply

0 years

0 Lacs

Thane, Maharashtra, India

On-site

Job Requirements
Role/Job Title: Data Architect
Business: New Age
Function/Department: Data & Analytics
Place of Work: Mumbai

Roles & Responsibilities
Developing and implementing an overall organizational data strategy that is in line with business processes. The strategy includes data model designs, database development standards, and the implementation and management of data warehouses and data analytics systems.
Identifying data sources, both internal and external, and working out a plan for data management that is aligned with the organizational data strategy.
Coordinating and collaborating with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data system.
Managing end-to-end data architecture, from selecting the platform, designing the technical architecture, and developing the application to finally testing and implementing the proposed solution.
Planning and execution of big data solutions using technologies such as Spark, Hadoop, AWS, NiFi, Kafka and Airflow.
Proficiency in data modeling and design, including SQL development and database administration.

Secondary Responsibilities
Data mining, visualization, and Machine Learning skills.
Give training and mentorship to team members to make them better on the job.

Key Success Metrics
Successfully deliver projects on committed time.
Lead and design technical aspects of the projects.
Timely updates of Jira and documentation.
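As an illustration of the Spark-plus-Kafka stack named above, a minimal PySpark structured-streaming sketch that reads a topic and echoes it to the console (the broker and topic are illustrative, and it assumes the spark-sql-kafka connector package is available to the cluster):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")  # hypothetical topic
          .load())

# Kafka delivers values as bytes; cast to string before parsing downstream.
query = (events.selectExpr("CAST(value AS STRING) AS payload")
         .writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()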

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Oracle Customer Success Services
Building on the mindset that "Who knows Oracle better than Oracle?", Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions, utilizing the strengths of more than two decades of expertise in developing mission-critical solutions for enterprise customers and combining it with cutting-edge technology to provide our customers speed, flexibility, resiliency, and security, enabling them to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly varied workplace. We are free of obligations, so we'll need your help to turn it into a premier engineering hub that prioritizes quality.

Why?
Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business, including but not limited to OCI, Oracle SaaS, and Oracle Enterprise Applications. This position is for the CSS Engineering Team, and we are searching for the finest and brightest technologists as we embark on the road of cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, putting a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team that has a significant influence.

What?
As a senior member of the team, you will lead, and be hands-on in, designing and developing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we create, in line with the architecture patterns and standards. You will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles. As a leader, you will also be expected to act as a mentor and advisor to the team(s) within the software domain. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and to optimize and harmonize business processes.

Required Qualifications:
Master's or Bachelor's in Computer Science, or a closely related field.
10+ years of experience in software development, data science, and data engineering design.
Advanced proficiency in Python and frameworks such as FastAPI and Dapr.
Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang.
Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation).
Proven ability to design software systems using various design patterns (Creational, Structural, and Behavioral).
Strong interpersonal skills and the ability to effectively communicate with business stakeholders.
Demonstrated ability to drive technology adoption in AI/ML solutions and the CNCF software stack.
Experience with real-time distributed systems using streaming data with Kafka, NiFi, or Pulsar.
Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools to create CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar).
Experience designing API-first systems with application stacks like FARM and MERN, and technologies such as gRPC and REST.
Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and the end-to-end SDLC.
Experience in DevOps practices, including Kubernetes, CI/CD, and Blue-Green and Canary deployments.
Experience with microservice architecture patterns, including API Gateways, Event-Driven & Reactive Architecture, CQRS, and SAGA.
Hands-on experience working with various data types and storage formats, including NoSQL, SQL, and graph databases, and data serialization formats like Parquet and Arrow.
Experience building agentic systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, and Haystack, or equivalent.
Experience in data engineering using data lakehouse stacks such as ETL/ELT, and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
Experience with data warehousing and lakes such as Apache Iceberg, Hudi, Delta Lake, and cloud-managed solutions like OCI Data Lakehouse.
Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.

Responsibilities
Core responsibilities include:
Provide thought leadership, technology oversight and hands-on development direction to the development teams across the business.
Liaise with senior executives across multiple business lines to combine business requirements into technology work packages in alignment with the overall AI strategy and next-gen technology stack.
Lead the development of architecture patterns, integration with the full-stack software ecosystem, and data engineering, and contribute to the design strategy.
Collaborate with product managers and development teams to identify software requirements and define project scopes.
Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
Analyze and recommend new software technologies and platforms to ensure the company stays ahead of the curve.
Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
Provide guidance and mentorship to junior developers.
Stay up-to-date with industry trends and developments in software architecture and development practices.
Innovation and critical problem-solving skills, together with exceptional communication skills, are a must in this role, as the Senior Architect would effectively act as a conduit between business executives, functional teams and technology engineering teams. The role requires very strong technology thought-leadership skills with practical hands-on knowledge, along with the influencing skills to create a broader impact within the business and engineering functions.

Qualifications
Career Level - IC5

About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes.
We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
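For illustration, a minimal FastAPI sketch of the kind of API-first microservice endpoint the posting describes (the route, request model, and scoring logic are placeholders, not Oracle code):

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-service")  # hypothetical service name

class ScoreRequest(BaseModel):
    feature_a: float
    feature_b: float

@app.post("/score")
def score(req: ScoreRequest) -> dict:
    # Placeholder scoring logic; a real service would call a deployed model.
    return {"score": 0.5 * req.feature_a + 0.5 * req.feature_b}

Run locally with, for example, uvicorn main:app --reload, assuming the file is saved as main.py.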

Posted 2 months ago

Apply

0.0 years

0 Lacs

Mohali, Punjab

On-site

Senior Data Engineer (6-7 Years Experience Minimum)
Location: Mohali, Punjab (Full-Time, Onsite)
Company: Data Couch Pvt. Ltd.

About Data Couch Pvt. Ltd.
Data Couch Pvt. Ltd. is a premier consulting and enterprise training company specializing in Data Engineering, Big Data, Cloud Technologies, DevOps, and AI/ML. With a strong presence across India and global client partnerships, we deliver impactful solutions and upskill teams across industries. Our expert consultants and trainers work with the latest technologies to empower digital transformation and data-driven decision-making for businesses.

Technologies We Work With
At Data Couch, you’ll gain exposure to a wide range of modern tools and technologies, including:
Big Data: Apache Spark, Hadoop, Hive, HBase, Pig
Cloud Platforms: AWS, GCP, Microsoft Azure
Programming: Python, Scala, SQL, PySpark
DevOps & Orchestration: Kubernetes, Docker, Jenkins, Terraform
Data Engineering Tools: Apache Airflow, Kafka, Flink, NiFi
Data Warehousing: Snowflake, Amazon Redshift, Google BigQuery
Analytics & Visualization: Power BI, Tableau
Machine Learning & MLOps: MLflow, Databricks, TensorFlow, PyTorch
Version Control & CI/CD: Git, GitLab CI/CD, CircleCI

Key Responsibilities
Design, build, and maintain robust and scalable data pipelines using PySpark
Leverage the Hadoop ecosystem (HDFS, Hive, etc.) for big data processing
Develop and deploy data workflows in cloud environments (AWS, GCP, or Azure)
Use Kubernetes to manage and orchestrate containerized data services
Collaborate with cross-functional teams to develop integrated data solutions
Monitor and optimize data workflows for performance, reliability, and security
Follow best practices for data governance, compliance, and documentation

Must-Have Skills
Proficiency in PySpark for ETL and data transformation tasks
Hands-on experience with at least one cloud platform (AWS, GCP, or Azure)
Strong grasp of Hadoop ecosystem tools such as HDFS, Hive, etc.
Practical experience with Kubernetes for service orchestration
Proficiency in Python and SQL
Experience working with large-scale, distributed data systems
Familiarity with tools like Apache Airflow, Kafka, or Databricks
Experience working with data warehouses like Snowflake, Redshift, or BigQuery
Exposure to MLOps or integration of AI/ML pipelines
Understanding of CI/CD pipelines and DevOps practices for data workflows

What We Offer
Opportunity to work on cutting-edge data projects with global clients
A collaborative, innovation-driven work culture
Continuous learning via internal training, certifications, and mentorship
Competitive compensation and growth opportunities

Job Type: Full-time
Pay: ₹1,200,000.00 - ₹15,000,000.00 per year
Benefits: Health insurance, Leave encashment, Paid sick time, Paid time off
Schedule: Day shift
Work Location: In person
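A minimal PySpark data-quality gate of the kind such pipelines typically include (the path, key column, and threshold are illustrative):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_check").getOrCreate()
df = spark.read.parquet("s3a://my-bucket/curated/orders")  # hypothetical path

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()

# Fail the run if more than 1% of rows are missing the business key.
if total == 0 or null_ids / total > 0.01:
    raise ValueError(f"DQ failure: {null_ids}/{total} rows missing order_id")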

Posted 2 months ago

Apply

0 - 10 years

0 Lacs

Bengaluru, Karnataka

Work from Office

Solution Architect (AI / Gen AI)
At ABB, we are dedicated to addressing global challenges. Our core values: care, courage, curiosity, and collaboration - combined with a focus on diversity, inclusion, and equal opportunities - are key drivers in our aim to empower everyone to create sustainable solutions. Write the next chapter of your ABB story.
This position reports to: BL Technology Manager, Digital

Your role and responsibilities
In this role, you will have the opportunity to initiate and drive technology, software, product, and/or solution development using in-depth technical expertise in a specific area. Each day, you will act as the first point of contact in Research and Development (R&D) for in-depth product or technology-related issues. You will also showcase your expertise by supporting strategic corporate technology management and future product/software/solution architecture. The work model for the role is: #LI-Hybrid. This role is contributing to the Process Automation business of the Process Industries Division in Bangalore.

You will be mainly accountable for:
• Architect and implement AI, ML, and Gen AI solutions to address critical industrial challenges. This includes designing data pipelines, developing machine learning models, and deploying scalable solutions
• Partner with cross-functional teams, including data engineers, data scientists, software developers, and domain experts, to ensure seamless integration and deployment of AI solutions
• Continuously research and integrate the latest advancements in AI, ML, and Gen AI technologies into our solutions. Experiment with new algorithms, frameworks, and tools to enhance solution performance
• Provide technical guidance and mentorship to junior team members. Lead code reviews, design discussions, and technical workshops

Qualifications for the role
Bachelor's or Master's degree in Computer Science, Software Engineering or equivalent
Should have architected and designed AI / Gen AI / web-based software applications
8-10 years of experience designing software architecture, with 15+ years of overall software development experience
Experience implementing ML/AI/Gen AI industrial software in one of these industries – Cement / Mining / Pulp & Paper / similar
Proficiency in Python, R, Java/C++. Expertise in TensorFlow, PyTorch, Keras, Scikit-learn, and other relevant frameworks.
Strong knowledge of data preprocessing, feature engineering, and data pipeline development using tools like Apache Airflow, Apache NiFi, and ETL processes

More about us
The Process Industries Division serves the mining, minerals processing, metals, cement, pulp and paper, battery manufacturing, and food and beverage industries, as well as their associated service industries. The Division brings deep industry domain expertise coupled with the ability to integrate both automation and electrical systems, increase productivity and reduce overall capital and operating costs for customers. For mining, metals and cement customers, solutions include specialized products and services, as well as total production systems. The Division designs, plans, engineers, supplies, installs and commissions integrated electrical and motion systems, including electric equipment, drives, motors, high power rectifiers and equipment for automation and supervisory control within a variety of areas including mineral handling, mining operations, aluminum smelting, hot and cold steel applications and cement production.
The offering for the pulp and paper industries includes control systems, quality control systems, drive systems, on-line sensors, actuators and field instruments. Digitalization solutions, including collaborative operations and augmented reality, help improve plant and enterprise productivity, and reduce maintenance and energy costs. We value people from different backgrounds. Apply today for your next career step within ABB and visit www.abb.com to learn about the impact of our solutions across the globe. #MyABBStory
"It has come to our attention that the name of ABB is being used for asking candidates to make payments for job opportunities (interviews, offers). Please be advised that ABB makes no such requests. All our open positions are made available on our career portal for all fitting the criteria to apply. ABB does not charge any fee whatsoever for the recruitment process. Please do not make payments to any individuals / entities in connection with recruitment with ABB, even if it is claimed that the money is refundable. ABB is not liable for such transactions. For current open positions you can visit our career website https://global.abb/group/en/careers and apply. Please refer to the detailed recruitment fraud caution notice using the link https://global.abb/group/en/careers/how-to-apply/fraud-warning"
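For illustration, a minimal scikit-learn sketch of the preprocessing-plus-model pipeline pattern the qualifications above reference (the data here is synthetic):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic tabular data standing in for plant or process measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
model.fit(X_train, y_train)
print(f"holdout accuracy: {model.score(X_test, y_test):.2f}")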

Posted 2 months ago

Apply

10 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Tax
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Manager

Job Description & Summary
At PwC, our people in tax services focus on providing advice and guidance to clients on tax planning, compliance, and strategy. These individuals help businesses navigate complex tax regulations and optimise their tax positions. In quantitative tax solutions and technologies at PwC, you will focus on leveraging data analytics and technology to develop innovative tax solutions. In this field, you will use quantitative methods to optimise tax processes and enhance decision-making for clients.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: As a Sr. Data Engineer with up to 10 years of experience, you will be responsible for providing technical solutions to business problems in the data warehouse and data lake space.

Responsibilities:
You will work closely with the business team during requirement gathering and collaborate with other teams to develop solutions.
Hands-on designing, building, and maintaining data lakes and data warehouses on AWS or Azure to support data and AI/ML workloads.
Should have fair knowledge of data storage, data security, and data cataloging for a data lake.
Should have fair knowledge of programming languages like Python, PySpark and SQL.
Strong understanding of AWS cloud components like S3, Redshift, DMS, Glue, Athena, Airflow, EMR, NiFi and any ETL tool.
Should have 8-10 years of experience in data engineering and development.
Should have experience with various data architecture patterns for data lakes and data warehouses.
Should have experience with various data modelling techniques, data design patterns, Star Schema, etc.
Should have exposure to the internal workings of various reporting tools like Power BI, QuickSight, etc.
Ability to multitask across different tracks.
Proven experience in design, architecture review and impact analysis of technical changes.
Hands-on knowledge of various cloud platforms like AWS.
Proven experience in working with large teams.
Excellent presentation and communication skills.
Excellent interpersonal skills.
Highly self-motivated and eager to learn. Always watching out for new technologies and adopting appropriate ones for improving your productivity, as well as the quality and effectiveness of your deliverables. Should be able to do POCs on emerging technologies.
Well versed with emerging technology trends.
Working experience with Generative AI will be a big plus.
Any Data Engineer certification would be an added advantage.

Mandatory skill sets: Data Engineer
Preferred skill sets: Data Engineer
Years of experience required: 8 to 15 years
Education qualification: B.Tech / M.Tech
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor Degree, Master Degree
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Sales Taxes
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Coaching and Feedback, Communication, Corporate Tax Planning, Creativity, Data Analytics, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Professional Courage, Relationship Building, Scenario Planning, Self-Awareness, Service Excellence, Statistical Analysis, Statistical Theory, Strategic Questioning {+ 6 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
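As an illustration of the AWS components listed, a minimal boto3 sketch that submits an Athena query (the region, database, table, and results bucket are placeholders):

import boto3

athena = boto3.client("athena", region_name="ap-south-1")
resp = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM curated.orders",  # hypothetical table
    QueryExecutionContext={"Database": "curated"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("query id:", resp["QueryExecutionId"])

Athena runs asynchronously, so a real pipeline would poll get_query_execution until the state is SUCCEEDED before reading results.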

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies