
3311 Big Data Jobs - Page 32

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

8.0 - 12.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Job Title: TPM - Data Engineering / Backend Development. Experience: 5-10 Years. Location: Bangalore. Key Skills: TPM, Data Engineering, Backend Development.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Job Title: EMR_Spark SME. Experience: 5-10 Years. Location: Bangalore.

Technical Skills:
- 5+ years of experience in big data technologies with hands-on expertise in AWS EMR and Apache Spark.
- Proficiency in Spark Core, Spark SQL, and Spark Streaming for large-scale data processing.
- Strong experience with data formats (Parquet, Avro, JSON) and data storage solutions (Amazon S3, HDFS).
- Solid understanding of distributed systems architecture and cluster resource management (YARN).
- Familiarity with AWS services (S3, IAM, Lambda, Glue, Redshift, Athena).
- Experience in scripting and programming languages such as Python, Scala, and Java.
- Knowledge of containerization and orchestration (Docker, Kubernetes) is a plus.

Responsibilities:
- Architect and develop scalable data processing solutions using AWS EMR and Apache Spark.
- Optimize and tune Spark jobs for performance and cost efficiency on EMR clusters.
- Monitor, troubleshoot, and resolve issues related to EMR and Spark workloads.
- Implement best practices for cluster management, data partitioning, and job execution.
- Collaborate with data engineering and analytics teams to integrate Spark solutions with broader data ecosystems (S3, RDS, Redshift, Glue, etc.).
- Automate deployments and cluster management using infrastructure-as-code tools like CloudFormation, Terraform, and CI/CD pipelines.
- Ensure data security and governance in EMR and Spark environments in compliance with company policies.
- Provide technical leadership and mentorship to junior engineers and data analysts.
- Stay current with new AWS EMR features and Spark versions to recommend improvements and upgrades.

Requirements and Skills:
- Performance tuning and optimization of Spark jobs.
- Problem-solving skills with the ability to diagnose and resolve complex technical issues.
- Strong experience with version control systems (Git) and CI/CD pipelines.
- Excellent communication skills to explain technical concepts to both technical and non-technical audiences.

Qualification:
- Education: B.Tech, BE, BCA, MCA, M.Tech, or an equivalent technical degree from a reputed college.

Certifications:
- AWS Certified Solutions Architect - Associate/Professional
- AWS Certified Data Analytics - Specialty
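To give a flavor of the day-to-day work this listing describes, here is a minimal PySpark sketch of the kind of EMR job mentioned above: reading Parquet from S3, aggregating with DataFrame/Spark SQL operations, and writing the result back partitioned by date. The bucket names, paths, and columns are hypothetical placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# On EMR the SparkSession is usually provided via spark-submit/YARN;
# the builder call below also works for local testing.
spark = SparkSession.builder.appName("daily-orders-aggregation").getOrCreate()

# Hypothetical S3 location and schema, for illustration only.
orders = spark.read.parquet("s3://example-datalake/raw/orders/")

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_timestamp"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Partitioning the output by date keeps downstream Athena/Spark scans cheap.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-datalake/curated/daily_revenue/")
)

spark.stop()
```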

Posted 3 weeks ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Hyderabad, Pune

Hybrid

Warm greetings from SP Staffing! Role: GCP Data Engineer. Experience Required: 5 to 12 yrs. Work Location: Pune/Hyderabad. Required Skills: GCP + PySpark or GCP + BigQuery SQL. Interested candidates can send resumes to nandhini.spstaffing@gmail.com.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

12 - 19 Lacs

Noida, Hyderabad

Work from Office

Your Journey at Crowe Starts Here: At Crowe, you can build a meaningful and rewarding career. With real flexibility to balance work with life moments, you're trusted to deliver results and make an impact. We embrace you for who you are, care for your well-being, and nurture your career. Everyone has equitable access to opportunities for career growth and leadership. Over our 80-year history, delivering excellent service through innovation has been a core part of our DNA across our audit, tax, and consulting groups. That's why we continuously invest in innovative ideas, such as AI-enabled insights and technology-powered solutions, to enhance our services. Join us at Crowe and embark on a career where you can help shape the future of our industry.

Job Description: As a Cloud Engineer, your primary responsibility will be to design, deploy, and maintain cloud-based solutions, primarily on the Microsoft Azure platform. You will work closely with our clients, development teams, infrastructure teams, the data & analytics team, and other stakeholders to ensure the successful implementation and operation of cloud services. Your role will involve designing and implementing scalable, secure, and highly available cloud architectures, as well as troubleshooting and resolving any issues that arise.

Responsibilities:
- Azure Solution Design: Collaborate with development teams and architects to design cloud-based solutions on the Azure platform. Evaluate requirements, propose design options, and recommend best practices for scalability, performance, security, and cost optimization.
- Azure Networking: Design, implement, and maintain network infrastructure solutions based on Azure services, i.e., configure and manage virtual networks, subnets, routing tables, network security groups, and load balancers within the Azure environment; configure IPsec VPN tunnels; and troubleshoot network-related issues, providing timely resolution to minimize downtime.
- Azure Infrastructure Deployment: Deploy and configure Azure infrastructure components such as virtual machines, storage accounts, virtual networks, load balancers, and other Azure services. Implement automation and infrastructure-as-code (IaC) practices using tools like Terraform, Azure Resource Manager (ARM) templates, PowerShell, or Azure CLI.
- Azure Service Provisioning: Provision and manage Azure services, including but not limited to Azure Data Factory, Azure Storage, Azure Virtual Machines, Azure App Service, Azure Functions, Azure Logic Apps, Azure SQL Database, Azure Kubernetes Service (AKS), and Azure Active Directory (AAD).
- Monitoring and Troubleshooting: Set up monitoring and alerting mechanisms to ensure the health and performance of Azure resources. Investigate and resolve issues related to application availability, performance, and security. Perform root cause analysis and implement preventive measures.
- Security and Compliance: Implement and enforce security measures to protect Azure resources and data. Follow industry best practices and compliance standards. Conduct security assessments and vulnerability scans and implement necessary remediation actions.
- Backup and Disaster Recovery: Design and implement backup and disaster recovery strategies for Azure-based applications and data. Configure and manage Azure Backup, Azure Site Recovery, and other relevant services.
- Collaboration and Documentation: Collaborate with cross-functional teams to gather requirements, provide technical guidance, and support project delivery. Create and maintain technical documentation, including architecture diagrams, standard operating procedures, and deployment guides.
- Continuous Improvement: Stay up to date with the latest Azure features, services, and industry trends. Continuously explore and propose new solutions, tools, and methodologies to optimize cloud infrastructure, enhance security, and streamline operations.

Qualifications:
- The shift timings for this role are 5:00 PM - 2:00 AM.
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent work experience).
- 3+ years of experience in designing, implementing, and managing cloud solutions on the Azure platform (AWS and GCP are a plus).
- Proficiency in Azure services, including compute, storage, networking, and identity and access management (IAM).
- Proven experience in designing and implementing network infrastructure solutions in Microsoft Azure.
- Strong knowledge of Azure networking services, including virtual networks, VPN gateways, ExpressRoute, Azure Firewall, and Azure Load Balancer.
- Knowledge of security best practices and compliance standards related to cloud environments.
- Solid understanding of cloud architecture patterns, scalability, high availability, and disaster recovery.
- Experience with infrastructure-as-code (IaC) tools like ARM templates, PowerShell, or Azure CLI.
- Strong troubleshooting and problem-solving skills in Azure cloud environments (AWS and GCP are a plus).
- Excellent communication and collaboration skills to work effectively with teams and stakeholders.

Key Stakeholders This Role Interacts With:
- Internal: Senior BI Analyst, Data Product Manager, BI Architect / Senior Architect, Data Engineers, Analytics Developers (Power BI, Tableau)
- External: Clients, Operations Leads and Mid-Level Managers, Data Owners, Data Stewards, Enterprise Information Management and Data Governance Team

We expect the candidate to uphold Crowe's values of Care, Trust, Courage, and Stewardship. These values define who we are. We expect all of our people to act ethically and with integrity at all times.

Our Benefits: At Crowe, we know that great people are what makes a great firm. We value our people and offer employees a comprehensive benefits package. Learn more about what working at Crowe can mean for you!

How You Can Grow: We will nurture your talent in an inclusive culture that values diversity. You will have the chance to meet on a consistent basis with your Career Coach, who will guide you in your career goals and aspirations. Learn more about where talent can prosper!

More about Crowe: C3 India Delivery Centre LLP, formerly known as Crowe Howarth IT Services LLP, is a wholly owned subsidiary of Crowe LLP (U.S.A.), a public accounting, consulting, and technology firm with offices around the world. Crowe LLP is an independent member firm of Crowe Global, one of the largest global accounting networks in the world. The network consists of more than 200 independent accounting and advisory firms in more than 130 countries around the world. Crowe does not accept unsolicited candidates, referrals, or resumes from any staffing agency, recruiting service, sourcing entity, or any other third-party paid service at any time. Any referrals, resumes, or candidates submitted to Crowe, or to any employee or owner of Crowe, without a pre-existing agreement signed by both parties covering the submission will be considered the property of Crowe, and free of charge.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

Bengaluru

Work from Office

SLK Software is seeking a skilled and passionate Data Engineer to join our growing data team. The ideal candidate will have a strong understanding of data engineering principles, experience building and maintaining data pipelines, and a passion for working with data to solve business problems.

Job Summary: The Data Engineer is responsible for designing, building, and maintaining the infrastructure that enables us to collect, process, and store data. This includes developing data pipelines, building data warehouses, and ensuring data quality and availability. You will play a crucial role in empowering our data scientists and analysts to extract valuable insights from our data.

Responsibilities:
- Data Pipeline Development: Design, build, and maintain robust and scalable data pipelines to ingest, process, and transform data from various sources.
- Data Warehousing: Design and implement data warehouses and data lakes to store and manage large datasets.
- ETL Processes: Develop and optimize ETL processes.

Posted 3 weeks ago

Apply

2.0 - 7.0 years

2 - 4 Lacs

Chennai, Bengaluru

Work from Office

Required Skills:
- Hands-on experience in Big Data technologies.
- Proficient in Apache Hive: writing complex queries, partitioning, bucketing, and performance tuning.
- Strong programming experience with PySpark: RDDs, DataFrames, Spark SQL, UDFs.
- Experience working with the Hadoop ecosystem (HDFS, YARN, Oozie, etc.).
- Good understanding of distributed computing principles and data formats like Parquet, Avro, ORC.
- Strong SQL and debugging skills.
- Familiarity with version control tools like Git and workflow schedulers like Airflow or Oozie.

Preferred Skills:
- Exposure to cloud-based big data platforms such as AWS EMR, Azure Data Lake, or GCP Dataproc.
- Experience with performance tuning of Spark jobs and Hive queries.
- Knowledge of Scala or Java is a plus.
- Familiarity with data governance, data masking, and security best practices.
- Experience with CI/CD pipelines, Docker, or container-based deployments is an advantage.
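As a rough illustration of the Hive/PySpark skills listed above, the sketch below filters a partitioned table (so the partition is pruned rather than scanned), applies a small UDF, and runs a Spark SQL aggregation. The database, table, and column names are assumptions made for the example only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = (
    SparkSession.builder
    .appName("hive-pyspark-demo")
    .enableHiveSupport()
    .getOrCreate()
)

# Partition pruning: filtering on the partition column (dt) avoids a full table scan.
events = spark.table("analytics.web_events").where(F.col("dt") == "2024-01-01")

# A simple UDF; built-in functions are preferred for performance, but UDFs are common.
normalize_channel = F.udf(lambda c: (c or "unknown").strip().lower(), StringType())
events = events.withColumn("channel", normalize_channel(F.col("channel")))

events.createOrReplaceTempView("events_clean")
summary = spark.sql("""
    SELECT channel, COUNT(*) AS hits, COUNT(DISTINCT user_id) AS users
    FROM events_clean
    GROUP BY channel
""")
summary.show()
```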

Posted 3 weeks ago

Apply

3.0 - 6.0 years

13 - 18 Lacs

Bengaluru

Work from Office

We are looking to hire a Data Engineer for the Platform Engineering team. It is a collection of highly skilled individuals, ranging from development to operations, with a security-first mindset who strive to push the boundaries of technology. We champion a DevSecOps culture and raise the bar on how and when we deploy applications to production. Our core principles are centered around automation, testing, quality, and immutability, all via code. The role is responsible for building self-service capabilities that improve our security posture and productivity and reduce time to market, with automation at the core of these objectives. The individual collaborates with teams across the organization to ensure applications are designed for Continuous Delivery (CD) and are well-architected for their targeted platform, which can be on-premise or the cloud. If you are passionate about developer productivity, cloud-native applications, and container orchestration, this job is for you!

Principal Accountabilities: The incumbent is mentored by senior individuals on the team to capture the flow and bottlenecks in the holistic IT delivery process and define future tool sets.

Skills and Software Requirements:
- Experience with a language such as Python, Go, SQL, Java, or Scala
- GCP data services (BigQuery, Dataflow, Dataproc, Cloud Composer, Pub/Sub, Google Cloud Storage, IAM)
- Experience with Jenkins, Maven, Git, Ansible, or Chef
- Experience working with containers, orchestration tools (like Kubernetes, Mesos, Docker Swarm, etc.), and container registries (GCR, Docker Hub, etc.)
- Experience with [SPI]aaS: Software-as-a-Service, Platform-as-a-Service, or Infrastructure-as-a-Service
- Acquire, cleanse, and ingest structured and unstructured data on the cloud
- Combine data from disparate sources into a single, unified, authoritative view of data (e.g., a Data Lake)
- Enable and support data movement from one system or service to another
- Experience implementing or supporting automated solutions to technical problems
- Experience working in a team environment, proactively executing on tasks while meeting agreed delivery timelines
- Ability to contribute to effective and timely solutions
- Excellent oral and written communication skills
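For context on the GCP data services named above, here is a minimal sketch using the google-cloud-bigquery client to run an aggregation query; the project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

# Credentials are resolved from the environment
# (e.g. the GOOGLE_APPLICATION_CREDENTIALS variable or workload identity).
client = bigquery.Client()

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`   -- hypothetical table
    WHERE event_date >= '2024-01-01'
    GROUP BY event_date
    ORDER BY event_date
"""

# query() submits the job; result() blocks until the query completes.
for row in client.query(query).result():
    print(row.event_date, row.events)
```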

Posted 3 weeks ago

Apply

8.0 - 10.0 years

32 - 35 Lacs

Hyderabad

Work from Office

Position Summary: MetLife established a Global Capability Center (MGCC) in India to scale and mature Data & Analytics and technology capabilities in a cost-effective manner and make MetLife future ready. The center is integral to Global Technology and Operations, with a focus on protecting and building MetLife IP, promoting reusability, and driving experimentation and innovation. The Data & Analytics team in India mirrors the global D&A team, with an objective to drive business value through trusted data, scaled capabilities, and actionable insights.

Role Value Proposition: MetLife Global Capability Center (MGCC) is looking for a Senior Cloud Data Engineer responsible for building ETL/ELT, data warehousing, and reusable components using Azure, Databricks, and Spark. He/she will collaborate with business systems analysts, technical leads, project managers, and business/operations teams in building data enablement solutions across different LOBs and use cases.

Job Responsibilities:
- Collect, store, process, and analyze large datasets to build and implement extract, transform, load (ETL) processes.
- Develop metadata- and configuration-based reusable frameworks to reduce development effort.
- Develop quality code with integral performance optimizations in place right at the development stage.
- Collaborate with the global team in driving the delivery of projects and recommend development and performance improvements.
- Extensive experience with various database types and the knowledge to leverage the right one for the need.
- Strong understanding of data tools and the ability to leverage them to understand the data and generate insights.
- Hands-on experience in building/designing at-scale data lakes, data warehouses, and data stores for analytics consumption, on-premises and in the cloud (real-time as well as batch use cases).
- Ability to interact with business analysts and functional analysts in gathering requirements and implementing ETL solutions.

Education, Technical Skills & Other Critical Requirements:
- Education: Bachelor's degree in computer science, engineering, or a related discipline.
- Experience (in years): 8 to 10 years of working experience on Azure Cloud using Databricks or Synapse.

Technical Skills:
- Experience in transforming data using Python, Spark, or Scala.
- Technical depth in Cloud Architecture Framework, Lakehouse Architecture, and OneLake solutions.
- Experience in implementing data ingestion and curation processes on Azure with tools such as Azure Data Factory, Databricks Workflows, Azure Synapse, Cosmos DB, Spark (Scala/Python), and Databricks.
- Experience writing cloud-optimized code on Azure using Databricks, Synapse dedicated SQL pools and serverless pools, and Cosmos DB SQL APIs, including loading and consumption optimizations.
- Scripting experience, primarily in shell/bash/PowerShell, would be desirable.
- Experience in writing SQL and performing data analysis for data anomaly detection and data quality assurance.

Other Preferred Skills:
- Expertise in Python and experience writing Azure Functions using Python/Node.js.
- Experience using Event Hub for data integrations.
- Working knowledge of Azure DevOps pipelines is required.
- Self-starter with the ability to adapt to changing business needs.
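One way to read the "metadata- and configuration-based reusable frameworks" requirement is a small config-driven PySpark ingestion loop of the kind often run in Databricks. This is a minimal sketch; the storage paths, table names, and inline config are invented for illustration, and in practice the metadata would live in a control table or configuration file.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("config-driven-ingestion").getOrCreate()

# Inlined here only to keep the sketch self-contained; normally sourced from metadata.
SOURCES = [
    {"name": "policies",
     "path": "abfss://raw@examplelake.dfs.core.windows.net/policies/",
     "format": "parquet"},
    {"name": "claims",
     "path": "abfss://raw@examplelake.dfs.core.windows.net/claims/",
     "format": "json"},
]


def ingest(source: dict) -> None:
    """Load one configured source and write it to a curated Delta table."""
    df = spark.read.format(source["format"]).load(source["path"])
    (df.write
       .format("delta")
       .mode("overwrite")
       .saveAsTable(f"curated.{source['name']}"))


for src in SOURCES:
    ingest(src)
```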

Posted 3 weeks ago

Apply

3.0 - 7.0 years

11 - 16 Lacs

Noida

Work from Office

Must have 5+ years of hands-on experience in test automation development using Python. Must have basic knowledge of the Big Data and AI ecosystem. Must have API testing experience using any framework available in the market, using Python. Continuous testing experience and expertise required. Proven success in a position of similar responsibilities in a QA environment. Must be strong in writing efficient code in Python using data frames. Must have hands-on experience with Python, PySpark, Linux, Big Data (data validation), Jenkins, and GitHub. Good to have: AWS-Hadoop commands, QTest, Java, Rest Assured, Selenium, Pytest, Playwright, Cypress, Cucumber, Behave, JMeter, LoadRunner.

Mandatory Competencies:
- QA/QE - QA Automation - Selenium
- QA/QE - QA Automation - Core Java
- ETL - Tester
- Data Science and Machine Learning - Python
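As an illustration of the Python API-testing skills this listing asks for, here is a minimal pytest sketch against a hypothetical REST endpoint; the base URL, resource paths, and response fields are placeholders, not a real service.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


@pytest.fixture
def session():
    with requests.Session() as s:
        s.headers.update({"Accept": "application/json"})
        yield s


def test_get_job_returns_expected_fields(session):
    resp = session.get(f"{BASE_URL}/jobs/123", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Validate the response contract, not just the status code.
    assert body["id"] == 123
    assert "status" in body


def test_create_job_rejects_missing_name(session):
    resp = session.post(f"{BASE_URL}/jobs", json={}, timeout=10)
    assert resp.status_code in (400, 422)
```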

Posted 3 weeks ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Noida

Work from Office

Design, implement, and maintain data pipelines for processing large datasets, ensuring data availability, quality, and efficiency for machine learning model training and inference. Collaborate with data scientists to streamline the deployment of machine learning models, ensuring scalability, performance, and reliability in production environments. Develop and optimize ETL (Extract, Transform, Load) processes, ensuring data flows from various sources into structured data storage systems. Automate ML workflows using ML Ops tools and frameworks (e.g., Kubeflow, MLflow, TensorFlow Extended (TFX)). Ensure effective model monitoring, versioning, and logging to track performance and metrics in a production setting. Collaborate with cross-functional teams to improve data architectures and facilitate the continuous integration and deployment of ML models. Work on data storage solutions, including databases, data lakes, and cloud-based storage systems (e.g., AWS, GCP, Azure). Ensure data security, integrity, and compliance with data governance policies. Perform troubleshooting and root cause analysis on production-level machine learning systems.

Skills: Glue, PySpark, AWS services, strong in SQL. Nice to have: Redshift, knowledge of SAS datasets.

Mandatory Competencies:
- DevOps - Cloud AWS
- DevOps/Configuration Mgmt - Docker
- ETL - AWS Glue
- Big Data - PySpark
- Database - Other Databases - Redshift
- Data Science and Machine Learning - Azure ML
- Beh - Communication and collaboration
- DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes)
- Database - SQL Server - SQL Packages
- Cloud - Azure - Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight
- DevOps/Configuration Mgmt - Cloud Platforms - AWS
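To make the ML Ops tooling mentioned above concrete, here is a minimal MLflow tracking sketch that logs parameters, a metric, and a model artifact for later versioning and monitoring. The experiment name, dataset, and model are illustrative assumptions only.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real training set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-churn-model")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 6}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log parameters, metrics, and the fitted model so runs are reproducible.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, artifact_path="model")
```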

Posted 3 weeks ago

Apply

2.0 - 5.0 years

8 - 12 Lacs

Noida

Work from Office

Extensive experience with Redis, MongoDB, Elasticsearch, SQL, and NoSQL. Deep understanding of test-driven development and unit testing practices. Exceptional problem-solving abilities and keen attention to detail. Capable of working both independently and collaboratively within a team. Strong communication skills with the ability to effectively collaborate with stakeholders.

Mandatory Competencies:
- Beh - Communication
- Big Data - MongoDB
- Programming Language - Java - Caching (Redis/Memcache/Hazelcast)
- Database - NoSQL - Elastic, Solr, Lucene, etc.
- Database - Database Programming - SQL

Posted 3 weeks ago

Apply

4.0 - 8.0 years

5 - 15 Lacs

Thiruvananthapuram

Work from Office

Job Title: Data Associate - Cloud Data Engineering. Experience: 4+ Years. Employment Type: Full-Time. Industry: Information Technology / Data Engineering / Cloud Platforms.

Job Summary: We are seeking a highly skilled and experienced Senior Data Associate to join our data engineering team. The ideal candidate will have a strong background in cloud data platforms, big data processing, and enterprise data systems, with hands-on experience across both the AWS and Azure ecosystems. This role involves building and optimizing data pipelines, managing large-scale data lakes and warehouses, and enabling advanced analytics and reporting.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using AWS Glue, PySpark, and Azure Data Factory.
- Work with AWS Redshift, Athena, Azure Synapse, and Databricks to support data warehousing and analytics solutions.
- Integrate and manage data across MongoDB, Oracle, and cloud-native storage like Azure Data Lake and S3.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality datasets.
- Implement data quality checks, monitoring, and governance practices.
- Optimize data workflows for performance, scalability, and cost-efficiency.
- Support data migration and modernization initiatives across cloud platforms.
- Document data flows, architecture, and technical specifications.

Required Skills & Qualifications:
- 8+ years of experience in data engineering, data integration, or related roles.
- Strong hands-on experience with: AWS Redshift, Athena, Glue, S3; Azure Data Lake, Synapse Analytics, Databricks; PySpark for distributed data processing; MongoDB and Oracle databases.
- Proficiency in SQL, Python, and data modeling.
- Experience with ETL/ELT design and implementation.
- Familiarity with data governance, security, and compliance standards.
- Strong problem-solving and communication skills.

Preferred Qualifications:
- Certifications in AWS (e.g., Data Analytics Specialty) or Azure (e.g., Azure Data Engineer Associate).
- Experience with CI/CD pipelines and DevOps for data workflows.
- Knowledge of data cataloging tools (e.g., AWS Glue Data Catalog, Azure Purview).
- Exposure to real-time data processing and streaming technologies.

Required Skills: Azure, AWS Redshift, Athena, Azure Data Lake.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Noida

Work from Office

8+ years of experience in data engineering with a strong focus on AWS services. Proven expertise in:
- Amazon S3 for scalable data storage
- AWS Glue for ETL and serverless data integration using Amazon S3, DataSync, EMR, and Redshift for data warehousing and analytics
Proficiency in SQL, Python, or PySpark for data processing. Experience with data modeling, partitioning strategies, and performance optimization. Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows. Strong understanding of data lake and data warehouse architectures. Excellent problem-solving and communication skills.

Mandatory Competencies:
- Beh - Communication
- ETL - AWS Glue
- Big Data - PySpark
- Cloud - AWS - AWS S3, S3 Glacier, AWS EBS
- Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift
- Database - Database Programming - SQL
- Programming Language - Python - Python Shell
- Cloud - Azure - Azure Data Factory (ADF), Azure Databricks, Azure Data Lake Storage, Event Hubs, HDInsight
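A minimal AWS Glue PySpark job skeleton of the kind implied by the Glue/S3 requirements above; the catalog database, table, and output bucket are placeholders, and the standard Glue boilerplate is assumed to run inside a Glue job rather than locally.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Convert to a DataFrame for Spark-style transforms, then write back to S3.
df = dyf.toDF().dropDuplicates(["order_id"])
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```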

Posted 3 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

delhi

On-site

As a Senior Manager in Industry X at New Delhi, you will leverage your in-depth understanding of Industry X concepts and technologies such as Industrial IoT, Predictive Maintenance, and Digital Twins to drive client value creation through digital transformation projects. Your expertise in lean manufacturing principles will be crucial in integrating Industry X solutions effectively. In this role, you will conduct thorough assessments of client manufacturing operations to identify opportunities for Industry X interventions. Your strong grasp of digital and AI use-cases in manufacturing will enable you to develop actionable roadmaps incorporating technologies like IoT, Big Data, Cloud, AI, and Machine Learning. You will be responsible for creating compelling business cases that highlight the ROI associated with Industry X initiatives. As a Senior Manager, you will lead client engagements, ensuring successful project execution within budget and timelines. Collaborating with internal and external stakeholders, including technology vendors and system integrators, will be essential to deliver seamless project implementation. Staying updated on the latest Industry X trends and technologies will allow you to provide clients with cutting-edge insights and establish yourself as a trusted advisor. With a minimum of 10 years of experience in management consulting, preferably in industries like Automotive, Electronics & Semiconductors, and Machinery & Equipment, you should have a proven track record of leading complex client engagements in discrete manufacturing. Your familiarity with Manufacturing Execution Systems (MES) and other industry-specific software will be advantageous. The ideal candidate will also have a background in utilizing technologies such as AR/VR, cloud, AI, 5G, robotics, and digital twins to help businesses adapt to change and build resilient operations. A Bachelor's degree in Engineering, Business Administration, or a related field with a focus on industrial or manufacturing engineering is required for this role. This position offers the opportunity to work at Accenture's New Delhi office and requires a minimum of 15 years of relevant experience.,

Posted 3 weeks ago

Apply

6.0 - 16.0 years

0 Lacs

karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. As an Assistant Director - AI/GenAI in the Data and Analytics team at EY, you will be part of a multi-disciplinary technology team delivering client projects and solutions across Data Mining & Management, Visualization, Business Analytics, Automation, Statistical Insights, and AI/GenAI. Your assignments will cover a wide range of countries and industry sectors. Your key responsibilities include developing, reviewing, and implementing solutions applying AI, Machine Learning, Deep Learning, and developing APIs using Python. You will lead the development and implementation of Generative AI applications, work with advanced models for natural language processing and creative content generation, and optimize solutions leveraging Vector databases for efficient storage and retrieval of contextual data for LLMs. Additionally, you will work on identifying opportunities for analytics application, manage projects, study resource needs, provide expert reviews, and communicate effectively with cross-functional teams. To qualify for this role, you must have 12-16 years of relevant work experience in developing and implementing AI, Machine Learning Models, experience in Azure Cloud Framework, excellent presentation skills, and familiarity with statistical techniques, deep learning, and machine learning algorithms. Proficiency in Python programming, experience with SDLC, and willingness to mentor team members are also required. Ideal candidates will have the ability to think strategically, build rapport with clients, and be willing to travel extensively. In addition, we look for individuals with commercial acumen, technical experience, and enthusiasm to learn in a fast-moving environment. This role offers the opportunity to be part of a market-prominent, multi-disciplinary team and work with EY SaT practices globally across various industries. EY Global Delivery Services (GDS) is a dynamic and truly global delivery network that offers fulfilling career opportunities in collaboration with EY teams on exciting projects worldwide. Continuous learning, transformative leadership, and a diverse and inclusive culture are some of the key aspects of working at EY. EY exists to build a better working world by providing trust through assurance and helping clients grow, transform, and operate across various sectors. Your role at EY will contribute to creating long-term value for clients, people, and society while building trust in the capital markets.,

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Cloud Support Engineer at Snowflake, you will be a key member of the expanding Support team, dedicated to providing high-quality resolutions to help deliver data-driven business insights and results. You will have the opportunity to work with a wide variety of operating systems, database technologies, big data, data integration, connectors, and networking to solve complex issues. Snowflake's core values of putting customers first, acting with integrity, owning initiative and accountability, and getting it done are reflected in everything we do. Your role will involve delighting customers with your passion and knowledge of Snowflake Data Warehouse, providing technical guidance, and expert advice on effective and optimal use of Snowflake. You will also be the voice of the customer, providing product feedback and improvements to Snowflake's product and engineering teams. Additionally, you will play a crucial role in building knowledge within the team and contributing to strategic initiatives for organizational and process improvements. As a Senior Cloud Support Engineer, you will drive technical solutions to complex problems, adhere to response and resolution SLAs, and demonstrate good problem-solving skills. You will utilize the Snowflake environment, connectors, 3rd party partner software, and tools to investigate issues, document known solutions, and report bugs and feature requests. Partnering with engineering teams, you will prioritize and resolve customer requests, participate in a variety of Support initiatives, and provide support coverage during holidays and weekends based on business needs. The ideal candidate for this role will have a Bachelor's or Master's degree in Computer Science or equivalent discipline, along with 5+ years of experience in a Technical Support environment or a similar technical function in a customer-facing role. Solid knowledge of at least one major RDBMS, in-depth understanding of SQL data types, aggregations, and advanced functions, as well as proficiency in database patch and release management are essential. Additionally, familiarity with distributed computing principles and frameworks, scripting/coding experience, database migration and ETL experience, and the ability to monitor and optimize cloud spending using cost management tools are considered nice-to-haves. Special requirements for this role include participation in pager duty rotations during nights, weekends, and holidays, as well as the ability to work the 4th/night shift starting from 10 pm IST. Applicants should be flexible with schedule changes to meet business needs. Snowflake is a rapidly growing company, and we are looking for individuals who share our values, challenge ordinary thinking, and drive innovation while building a future for themselves and Snowflake. Join us in making an impact and accelerating our growth.,

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

As a Data Scientist at mPokket, you will play a crucial role in deriving valuable insights from raw data to drive business decisions and improve our products. You will collaborate with the data science team, plan projects, and develop analytics models. Your problem-solving skills and expertise in statistical analysis will be key in aligning our data products with our business objectives. Your responsibilities will include overseeing the team of data scientists, guiding colleagues on new techniques, and working closely with data and software engineers to deploy scalable technologies. You will lead the conception, planning, and prioritization of data projects, build analytic systems and predictive models, and explore new techniques to enhance our data capabilities. Ensuring that data projects are in line with our organizational goals will be a critical aspect of your role. The minimum qualifications for this position include a Master's degree in Computer Science, Operations Research, Econometrics, Statistics, or a related technical field, along with at least 6 years of experience in solving analytical problems using quantitative approaches. Proficiency in communicating quantitative analysis results, knowledge of relational databases and SQL, and development experience in scripting languages such as Python, PHP, or Perl are required. You should also have expertise in statistics, experience with statistical software like R or SAS, and a strong technical skill set.

Key Technical Skills Required:
- Programming: Python (preferred) / R
- ML Models: Regression (linear, logistic, multinomial, mixed effects), Classification (bagging and boosting, decision trees, SVM), Clustering (K-means, hierarchical, DBSCAN), Time series (ARIMA, SARIMA, ARIMAX, Holt-Winters, multivariate TS, UCM), Neural Networks, Naive Bayes
- Excel and SQL
- Dimensionality Reduction: PCA, SVD, etc.
- Optimization Techniques: Linear programming, Gradient Descent, Genetic Algorithms
- Cloud: Understanding of Azure/AWS offerings, setting up ML pipelines on the cloud

Additional Skills (good to have):
- Visualization: Tableau, Power BI, Looker, QlikView
- Data Management: HDFS, Spark, Advanced Excel
- Agile Tools: Azure DevOps, JIRA
- PySpark
- Big Data / Hive database
- IDE: PyCharm

If you are a proactive and talented individual with a passion for data science and a desire to contribute to a rapidly growing fintech startup, we would love to have you join us on this exciting journey at mPokket.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

hyderabad, telangana

On-site

As a Software Engineer II at JPMorgan Chase within the Employee Platforms team, you will have the opportunity to enhance your software engineering career while working with a team of agile professionals. Your main responsibility will be to design and deliver cutting-edge technology products in a secure, stable, and scalable manner. You will play a crucial role in developing technology solutions across different technical areas to support the firm's business objectives. Your key responsibilities will include executing innovative software solutions, developing high-quality production code, and identifying opportunities to enhance operational stability. You will lead evaluation sessions with external vendors and internal teams to drive architectural designs and technical applicability. Additionally, you will collaborate with various teams to drive feature development and produce documentation of cloud solutions. To qualify for this role, you should have formal training or certification in software engineering concepts along with at least 2 years of practical experience. You must possess advanced skills in system design, application development, and testing. Proficiency in programming languages, automation, and continuous delivery methods is essential. An in-depth understanding of agile methodologies, such as CI/CD, Application Resiliency, and Security, is required. Knowledge in Python, Big Data technologies, and financial services industry IT systems will be advantageous. Your success in this role will depend on your ability to innovate, collaborate with stakeholders, and excel in a diverse and improvement-focused environment. You should have a strong track record of technology implementation projects, along with expertise in software applications and technical processes within a technical discipline. Preferred skills include teamwork, initiative, and knowledge of financial instruments and specific programming languages like Core Java 8, Spring, JPA/Hibernate, and React JavaScript.,

Posted 3 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

You have over 4 years of experience and are now looking for a new opportunity. The interview process will be conducted virtually, and the notice period for this role is 30 days. The work locations available for this position are Bangalore, Pune, and Noida. Your main responsibilities in this role will include developing technical solutions related to integrations such as OIC, Reports, and Conversions. You will also be responsible for creating technical solutions for reports such as BI Publisher, OTBI, and FRS. It is essential to have a good understanding of Oracle Cloud modules such as AP, AR, GL, PA, FA, PO, and Cash Management. Additionally, you will be required to prepare Technical Design Documents and Unit Test Scripts. To succeed in this role, you should be able to adapt to a dynamically changing environment. Hands-on experience in Fusion technologies such as OIC, BIP, and Conversions is necessary. You should also have practical experience in Fusion reporting technologies such as BI Publisher, OTBI, and FRS. Knowledge of Oracle Fusion RICE components and at least 1 year of experience in end-to-end implementation projects are required. Any knowledge of OCI (DevOps, Big Data, Data Flow, and NoSQL databases) will be considered an advantage. Furthermore, certification in any Oracle technology will also be beneficial for this role.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

You should have a solid working knowledge of AWS database and data services as well as the Power BI stack. Your experience in gathering requirements, modeling data, and designing and supporting high-performance big data backend and data visualization systems will be crucial. You should be adept at utilizing methodologies and platform stacks such as MapReduce and Spark, streaming solutions like Kafka and Kinesis, ETL systems like Glue and Firehose, storage solutions like S3, warehouse stacks like Redshift and DynamoDB, and equivalent open-source stacks. Designing and implementing solutions using visualization technologies like Power BI and QuickSight should be within your expertise. You will be responsible for maintaining and continuously grooming the product backlog, the release pipeline, and the product roadmap. It will be your responsibility to capture problem statements and opportunities raised by customers as demand items, epics, and stories. Leading database physical design sessions with the engineers in the team and ensuring quality assurance and load testing of the solution to maintain customer experience are also part of the role. Additionally, you will be supporting data governance and data quality (cleansing) efforts. Your primary skills should include proficiency in AWS database and data services, the Power BI stack, and big data.

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

As a Talend ETL Lead, you will be responsible for leading the design and development of scalable ETL pipelines using Talend, integrating with big data platforms, and mentoring junior developers. This is a high-impact, client-facing role requiring hands-on leadership and solution ownership.

Responsibilities:
- Lead the end-to-end development of ETL pipelines using Talend Data Fabric.
- Collaborate with data architects and business stakeholders to understand requirements.
- Build and optimize data ingestion, transformation, and loading processes.
- Ensure high performance, scalability, and reliability of data solutions.
- Mentor and guide junior developers in the team.
- Troubleshoot and resolve ETL-related issues quickly.
- Manage deployments and promote code through different environments.

Qualifications:
- 7+ years of experience in ETL/Data Engineering.
- Strong hands-on experience with Talend Data Fabric.
- Solid understanding of SQL and the Hadoop ecosystem (HDFS, Hive, Pig, etc.).
- Experience building robust data ingestion pipelines.
- Excellent communication and leadership skills.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Pune

Work from Office

Critical Skills to Possess:
- Expertise in data ingestion, data processing, and analytical pipelines for big data, relational databases, and data warehouse solutions.
- Hands-on experience with Agile software development.
- Experience in designing and hands-on development of cloud-based analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
- Designing and building data pipelines using API ingestion and streaming ingestion methods.
- Knowledge of DevOps processes (including CI/CD) and infrastructure as code is essential.
- Thorough understanding of Azure and AWS cloud infrastructure offerings.
- Expertise in Azure Databricks, Azure Stream Analytics, and Power BI is desirable.
- Knowledge of SAP and BW/BPC is desirable.
- Expertise in Python, Scala, and SQL is desirable.
- Experience developing security models.

Preferred Qualifications:
- BS degree in Computer Science or Engineering, or equivalent experience.

Roles and Responsibilities:
- Design, develop, and deploy data pipelines and ETL processes using Azure Data Factory.
- Implement data integration solutions, ensuring data flows efficiently and reliably between various data sources and destinations.
- Collaborate with data architects and analysts to understand data requirements and translate them into technical specifications.
- Build and maintain scalable and optimized data storage solutions using Azure Data Lake Storage, Azure SQL Data Warehouse, and other relevant Azure services.
- Develop and manage data transformation and cleansing processes to ensure data quality and accuracy.
- Monitor and troubleshoot data pipelines to identify and resolve issues in a timely manner.
- Optimize data pipelines for performance, cost, and scalability.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

15 - 20 Lacs

Pune

Work from Office

Critical Skills to Possess:
- Advanced working knowledge of and experience with relational and non-relational databases.
- Advanced working knowledge of and experience with API data providers.
- Experience building and optimizing big data pipelines, architectures, and datasets.
- Strong analytic skills related to working with structured and unstructured datasets.
- Hands-on experience in Azure Databricks, utilizing Spark to develop ETL pipelines.
- Strong proficiency in data analysis, manipulation, and statistical modeling using tools like Spark, Python, Scala, SQL, or similar languages.
- Strong experience in Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse.
- Familiarity with several of the following technologies: Event Hub, Docker, Azure Kubernetes Service, Azure DWH, Azure API, Azure Functions, Power BI, Azure Cognitive Services.
- Azure DevOps experience to deploy data pipelines through CI/CD.

Preferred Qualifications:
- BS degree in Computer Science or Engineering, or equivalent experience.

Roles and Responsibilities:
- Review and analyze structured, semi-structured, and unstructured data sources in detail for quality, completeness, and business value.
- Design, architect, implement, and test rapid prototypes that demonstrate the value of the data and present them to diverse audiences.
- Participate in early-stage design and feature definition activities.
- Implement robust data pipelines using the Microsoft/Databricks stack.
- Create reusable and scalable data pipelines.
- Be a team player, collaborating with team members across multiple engineering teams to support the integration of proven prototypes into core intelligence products.
- Use strong communication skills to effectively convey complex data insights to non-technical stakeholders.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Position Summary: We are seeking a Senior Software Development Engineer – Data Engineering with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.

Key Responsibilities:
- Work with cloud-based data solutions (Azure, AWS, GCP).
- Implement data modeling and warehousing solutions.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Design and optimize data storage solutions, including data warehouses and data lakes.
- Ensure data quality and integrity through data validation, cleansing, and error handling.
- Collaborate with data analysts, data architects, and software engineers to understand data requirements and deliver relevant data sets (e.g., for business intelligence).
- Implement data security measures and access controls to protect sensitive information.
- Monitor and troubleshoot issues in data pipelines, notebooks, and SQL queries to ensure seamless data processing.
- Develop and maintain Power BI dashboards and reports.
- Work with DAX and Power Query to manipulate and transform data.

Basic Qualifications:
- Bachelor's or master's degree in computer science or data science.
- 3-5 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Proficient in SQL, Python, or Scala for data manipulation and processing.
- Proficient in developing data pipelines using Azure Synapse, Azure Data Factory, and Microsoft Fabric.
- Experience with Apache Spark, Databricks, and Snowflake is highly beneficial for handling big data and cloud-based analytics solutions.

Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience with BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Contributions to open-source data engineering projects.
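For the Snowflake piece of the stack above, here is a minimal sketch using the official snowflake-connector-python package to run an aggregation query; the account, credentials, warehouse, and table are placeholders, and in practice credentials would come from a secrets manager.

```python
import os

import snowflake.connector

# Connection details are assumptions for the sketch.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT order_date, SUM(amount) AS revenue
        FROM orders                      -- hypothetical table
        GROUP BY order_date
        ORDER BY order_date
        """
    )
    for order_date, revenue in cur.fetchall():
        print(order_date, revenue)
finally:
    conn.close()
```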

Posted 3 weeks ago

Apply

5.0 - 8.0 years

11 - 21 Lacs

Hyderabad

Hybrid

Required Skills:
- AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar)
- CI/CD (Jenkins or another tool)
- Relational databases experience (any)
- NoSQL databases experience (any)
- Microservices, domain services, API gateways, or similar
- Containers (Docker, K8s, or similar)

Required Candidate Profile: Immediate joiners preferred.
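To illustrate the AWS-plus-Airflow combination listed above, here is a minimal DAG sketch that submits a Spark step to an already-running EMR cluster via boto3. The cluster ID, region, and script path are placeholders; this assumes Airflow 2.4+ (for the `schedule` argument) and appropriate AWS credentials on the worker.

```python
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator


def submit_spark_step() -> None:
    """Add a Spark step to a hypothetical, already-running EMR cluster."""
    emr = boto3.client("emr", region_name="ap-south-1")
    emr.add_job_flow_steps(
        JobFlowId="j-EXAMPLECLUSTERID",  # placeholder cluster id
        Steps=[{
            "Name": "daily-orders-aggregation",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": ["spark-submit", "s3://example-bucket/jobs/daily_orders.py"],
            },
        }],
    )


with DAG(
    dag_id="emr_daily_orders",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="submit_spark_step", python_callable=submit_spark_step)
```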

Posted 3 weeks ago

Apply