
100 Big Query Jobs

Set up a job alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the employer's job portal.

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

NTT DATA is seeking a Sr. Java Backend Developer to join their team in Bangalore, Karnataka (IN-KA), India. In this role, you will provide input, support, and perform full systems life cycle management activities, including analysis, technical requirements, design, coding, testing, and implementation of systems and applications software. You will participate in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements. Additionally, you will provide input to applications development project plans and integrations, collaborate with teams, and support emerging technologies to ensure effective communication and achievement of objectives.

Responsibilities:
- Perform systems analysis and design.
- Design and develop moderate to highly complex applications.
- Develop application documentation.
- Produce integration builds.
- Perform maintenance and support.
- Support emerging technologies and products.

Qualifications:
- At least 5 years of experience.
- Mandatory skills: GIT, Scrum, Azure DevOps, GCP, Big Query, Power BI, Microservice, SQL Server, DB2, Spring Boot, JSON, Java, C#, AMQP, AzureAD, HTTP, readme documentation.
- Understanding of Agile development.
- Strong written and verbal communication skills.
- Ability to work in a team environment.
- Accountability, attention to detail, and initiative.
- Bachelor's degree in computer science or a related discipline, or equivalent education and work experience.

About NTT DATA: NTT DATA is a trusted global innovator of business and technology services, committed to helping clients innovate, optimize, and transform for long-term success. They serve 75% of the Fortune Global 100 and have diverse experts in more than 50 countries. NTT DATA's services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. As part of NTT Group, they invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com.

Posted 1 day ago

Apply

10.0 - 13.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Manager

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job title: Senior Manager

About the role: As a Senior Manager, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities:
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity.
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required skills and experience:
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.
Skills:
- Cloud: Azure/GCP/AWS
- DE technologies: ADF, Big Query, AWS Glue, etc.
- Data lake: Snowflake, Databricks, etc.

Mandatory skill sets: Cloud (Azure/GCP/AWS); DE technologies (ADF, Big Query, AWS Glue, etc.); data lake (Snowflake, Databricks, etc.)
Preferred skill sets: Cloud (Azure/GCP/AWS); DE technologies (ADF, Big Query, AWS Glue, etc.); data lake (Snowflake, Databricks, etc.)
Years of experience required: 10-13 years
Education qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/field of study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration
Required skills: AWS Glue, Microsoft Azure
Optional skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation + 28 more

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

You will be working as a Sr. Technical Architect specializing in Web Data & Analytics, focusing on Google web data tools, web data architecture, and JavaScript-based tracking. Your role will involve offering strategic guidance, technical direction, and actionable insights to support business decisions. It is essential for you to excel in data implementation best practices, collaborate effectively across teams globally, and possess strong communication and execution skills.

As the Technical Lead for the Web Data & Analytics team, your responsibilities will include ensuring best practices in data collection, governance, implementation, tracking, and reporting. You will collaborate with stakeholders from Web Development, Data Science, and Marketing Leadership to comprehend analytics requirements. Your tasks will also involve defining and implementing a scalable web data architecture for efficient tracking, tagging, and data collection across analytics and marketing platforms. You will need to provide hands-on expertise in Google Analytics (GA4), Google Tag Manager (GTM), and JavaScript-based tracking solutions. Developing and enhancing a scalable data layer to facilitate standardized and efficient data transmission across platforms will also be a key part of your role. Tracking and analyzing user journeys and purchase funnels, monitoring key performance indicators (KPIs), and delivering insights on user behavior and site performance will be crucial. Moreover, you will continuously enhance dashboards and data visualizations using tools like Google Data Studio, Tableau, or other BI tools. Interpreting data trends and providing actionable insights to enable data-driven decision-making will be an essential aspect of your job. You will manage analytics projects, ensure timely execution of strategic initiatives, and stay updated with the latest web analytics trends and privacy regulations.

To qualify for this role, you should have at least 10 years of experience in Web Analytics, including a minimum of 2 years in a technical leadership role. A strong understanding of digital marketing analytics, attribution modeling, and customer segmentation is required. Additionally, expertise in web data architecture, tracking strategy development, and data layer design is essential. Hands-on experience with event-driven tracking, structured data layers, and tagging frameworks is also necessary. Proficiency in Google Analytics (GA4), Google Tag Manager (GTM), and web tracking tools like Amplitude, Glassbox, or similar technologies is preferred. Familiarity with audience segmentation through Customer Data Platforms (CDPs) or Google Analytics, a deep understanding of JavaScript, HTML, and CSS, and proficiency in SQL and scripting languages such as Python or JavaScript are desired skills for this role. Furthermore, experience managing Big Query or other marketing databases, strong analytical skills, and the ability to generate and present actionable insights are important qualifications. Excellent communication, presentation, and project management skills are also required. A Bachelor's degree in Marketing, Computer Science, or a related field is necessary, and an advanced certification in GA4 would be advantageous.

If you meet these qualifications and are interested in this opportunity, send your resume and a brief introduction to rajeshwari.vh@careerxperts.com.

Posted 2 days ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Essbase ETL and Application Developer position based in Chennai involves delivering solutions that extract data from upstream systems and develop load scripts for Oracle Essbase backend applications. As an Essbase application engineer, you will be responsible for maintaining and enhancing budgeting, forecasting, and long-term planning systems. You will serve as a Subject Matter Expert (SME) in supporting multiple Planning and Consolidation applications in Oracle Essbase environments, both IaaS/PaaS and SaaS EPM Cloud. Your tasks will include performance tuning, optimization of applications, and providing system support to users. You should be comfortable working both independently and as part of a team.

Key responsibilities include conducting various testing phases and implementing partitioning, automation, optimization, and performance tuning of Essbase application data/metadata processing. You will also be involved in developing and maintaining Block Storage Outline (BSO) / Aggregate Storage Outline (ASO) cubes, creating data forms, calculation scripts, automation using MAXL, batch scripts, and business rules.

The ideal candidate should have at least 5 years of experience in Oracle 19c or 21c IaaS/PaaS Essbase applications, with an additional 2+ years of experience in Oracle EPM Hyperion Planning design, development, or administration being a plus. Proficiency in writing complex code in calculation scripts and business rules is essential. Experience in EPBCS components such as metadata administration, outlines, dimensions, complex calculations, and security setup is highly beneficial. Moreover, you should possess expertise in metadata upload processes and creating automations for seamless metadata loading. Being proactive in providing alternative solutions based on best practices and application functionality is a key attribute. Financial process and functional knowledge are preferred, along with the ability to analyze business needs and develop solutions to support the business effectively.

In summary, the Essbase ETL and Application Developer role in Chennai requires a candidate with a strong background in Oracle Essbase applications, ETL processes, and a proactive approach to problem-solving and application development.

Posted 3 days ago

Apply

10.0 - 14.0 years

0 Lacs

Delhi

On-site

As a key member of the team at EE, your role holds significant importance in delivering exceptional, personalized experiences to our vast customer base of 30 million individuals. Our ongoing investment in automating operations, enhancing network capabilities, and fortifying our data management strategies is crucial for the future prosperity of our business. By harnessing the power of data-driven automation and decision-making, we aim to elevate customer interactions across various channels, ensuring the delivery of unparalleled personal experiences.

In this chapter of our journey, you will take charge of developing a new data capability for our Consumer segment. Your responsibilities will encompass overseeing the data architecture, tooling, and frameworks utilized by our engineers and data scientists. From sourcing and integrating data to ensuring its quality and availability, you will play a pivotal role in shaping the data landscape of our organization. Your primary focus will be on supporting key stakeholders by providing them with quality insights that facilitate informed decision-making. Collaborating closely with other data and decisioning teams, you will contribute to creating a unified approach and sharing best practices across the organization.

Your day-to-day tasks will involve driving decision-making processes through data insights, cultivating strong relationships with internal customers, and owning self-serve dashboards that offer comprehensive insights into our products and business performance. By producing actionable insights and conducting in-depth analysis, you will contribute to building a holistic view of our customers while ensuring data accuracy and timeliness. Moreover, you will act as a subject matter expert for specific business analysis, lead projects to deliver end-to-end solutions, and focus on enhancing process efficiency through technology automation. Your role will also entail engaging with senior stakeholders, coaching the business on interpreting analyses effectively, and providing technical support to junior team members to foster continuous upskilling.

To excel in this role, you should possess experience in customer-facing functions such as Sales, Marketing, and Personalisation, along with proficiency in analytics, reporting, and data analysis tools like GCP and Big Query. Strong Excel skills, automation capabilities, and a growth mindset are essential attributes that will enable you to drive impactful outcomes and contribute to the growth of both yourself and the organization.

In terms of qualifications and experience, a minimum of 10 years in Marketing/Customer Analytics, proficiency in SQL and Qlik Sense, and familiarity with marketing operations tools are preferred. Additionally, a collaborative approach, excellent communication skills, and a commercially savvy mindset are key traits that will help you thrive in this dynamic and transformative environment at EE.

Join us at EE, a part of the BT Group, and be part of a pioneering team that is redefining the future of telecommunications with innovative solutions and a customer-centric approach. Together, we are committed to creating a diverse and inclusive workplace where everyone can contribute their unique talents and thrive in a culture of continuous growth and transformation.

Posted 3 days ago

Apply

6.0 - 10.0 years

1 - 1 Lacs

Chennai

Hybrid

Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place: one that benefits lives, communities and the planet.

Job Title: Specialty Development Practitioner
Location: Chennai
Work Type: Hybrid

Position Description: At the client's Credit Company, we are modernizing our enterprise data warehouse in Google Cloud to enhance data, analytics, and AI/ML capabilities, improve customer experience, ensure regulatory compliance, and boost operational efficiencies. As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of current-state Receivables and Originations data in our data warehouse, performing impact analysis related to the client's Credit North America's modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for the client's Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.

Skills Required: Big Query, Data Flow, DataForm, Data Fusion, Dataproc, Cloud Composer, Airflow, Cloud SQL, Compute Engine, Google Cloud Platform - Big Query

Experience Required:
- GCP Data Engineer certified.
- Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions.
- 5+ years of complex SQL development experience.
- 2+ years of experience with programming languages such as Python, Java, or Apache Beam.
- Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications in production-scale solutions.
Skills Preferred: Big Query, Data Flow, DataForm, Data Fusion, Dataproc, Cloud Composer, Airflow, Cloud SQL, Compute Engine, Google Cloud Platform, Terraform, Tekton, Postgres, PySpark, Python, APIs, Cloud Build, App Engine, Apache Kafka, Pub/Sub, AI/ML, Kubernetes

Experience Preferred:
- In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time), leveraging Terraform, Big Query, Dataflow, Pub/Sub, DataForm, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build and App Engine, alongside storage including Cloud Storage, and DevOps tools such as Tekton, GitHub, Terraform, Docker.
- Expert in designing, optimizing, and troubleshooting complex data pipelines.
- Experience developing with microservice architecture from a container orchestration framework.
- Experience in designing pipelines and architectures for data processing.
- Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques.
- Self-directed, works independently with minimal supervision, and adapts to ambiguous environments.
- Evidence of a proactive problem-solving mindset and willingness to take the initiative.
- Strong prioritization, collaboration and coordination skills, and ability to simplify and communicate complex ideas with cross-functional teams and all levels of management.
- Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity.
- Data engineering or development experience gained in a regulated financial environment.
- Experience in coaching and mentoring data engineers.
- Project management tools like Atlassian JIRA.
- Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
- Experience with data security, governance, and compliance best practices in the cloud.
- Experience with AI solutions or platforms that support AI solutions.
- Experience using data science concepts on production datasets to generate insights.

Experience Range: 5+ years
Education Required: Bachelor's Degree

TekWissen® Group is an equal opportunity employer supporting workforce diversity.
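The pipeline duties named in this posting are abstract, so here is a minimal, hedged sketch of what landing source files into BigQuery with Apache Beam (one of the listed skills) can look like. The project, bucket, table, and field names are placeholders, not details from the posting, and the target table is assumed to already exist.

```python
# Minimal Apache Beam sketch: land newline-delimited JSON from Cloud Storage
# into a BigQuery table. Project, bucket, table, and field names are invented.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_record(line: str) -> dict:
    """Turn one JSON line from the source file into a BigQuery row dict."""
    record = json.loads(line)
    return {"id": record["id"], "amount": record.get("amount"), "ts": record.get("ts")}


options = PipelineOptions(
    runner="DataflowRunner",           # use "DirectRunner" for local testing
    project="example-project",         # placeholder GCP project
    region="us-central1",
    temp_location="gs://example-bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "ReadFromGCS" >> beam.io.ReadFromText("gs://example-bucket/landing/*.json")
        | "ParseJSON" >> beam.Map(parse_record)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "example-project:finance.receivables_raw",   # assumed existing table
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```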

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Data Engineering Lead, you will collaborate with marketing, analytics, and business teams to understand data requirements and develop data solutions that address critical business inquiries. Your responsibilities will include leading the implementation and strategic optimization of tag management solutions such as Tealium and Google Tag Manager (GTM) to ensure precise and comprehensive data capture. You will leverage your expertise in Google Analytics 4 (GA4) to configure and customize data collection processes for enhanced insights. Additionally, you will architect scalable and performant data models on Google Cloud, utilizing BigQuery for data warehousing and analysis purposes.

In this role, you will proficiently use SQL and scripting languages like JavaScript and HTML for data extraction, manipulation, and visualization. You will also play a pivotal role in mentoring and guiding a team of engineers, fostering a culture of collaboration and continuous improvement. Staying updated on the latest trends and technologies in data engineering and analytics, you will bring innovative ideas to the table and drive the deliverables by mentoring team members effectively.

To qualify for this position, you must have experience with Tealium and tag management tools, along with a proven ability to use communication effectively to build positive relationships and drive project success. Your expertise in tag management solutions such as Tealium and GTM will be crucial for comprehensive website and app data tracking, including the implementation of scripting languages for Tag Extensions. Proficiency in Tealium concepts like IQ Tag Management, Audience Stream, Event Stream API Hub, Customer Data Hub, and Debugging tools is essential. Experience in utilizing Google Analytics 4 (GA4) for advanced data collection and analysis, as well as knowledge of Google Cloud, particularly Google BigQuery for data warehousing and analysis, will be advantageous.

Preferred qualifications for this role include experience in a similar industry (e.g., retail, e-commerce, digital marketing), proficiency with Python/PySpark for data processing and analysis, working knowledge of Snowflake for data warehousing, experience with Airflow or similar workflow orchestration tools for managing data pipelines, and familiarity with AWS Cloud Technology. Additionally, skills in frontend technologies like React, JavaScript, and HTML, coupled with Python expertise for backend development, will be beneficial.

Overall, as a Data Engineering Lead, you will play a critical role in designing robust data pipelines and architectures that support data-driven decision-making for websites and mobile applications, ensuring seamless data orchestration and processing through best-in-class ETL tools and technologies. Your expertise in Tealium, Google Analytics 4, and SQL will be instrumental in driving the success of data engineering initiatives within the organization.
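As an illustration of the GA4-plus-BigQuery analysis this role describes, a small sketch using the google-cloud-bigquery client is shown below. The project name is invented, and the dataset/table pattern follows GA4's standard BigQuery export naming rather than anything stated in the posting.

```python
# Illustrative sketch: pull daily event counts from a GA4 BigQuery export
# using the google-cloud-bigquery client. Project and dataset are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT
      event_date,
      event_name,
      COUNT(*) AS events
    FROM `example-project.analytics_123456.events_*`
    WHERE _TABLE_SUFFIX BETWEEN '20240101' AND '20240107'
    GROUP BY event_date, event_name
    ORDER BY event_date, events DESC
"""

for row in client.query(query).result():
    print(row.event_date, row.event_name, row.events)
```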

Posted 4 days ago

Apply

1.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a skilled Data Engineer, you will leverage your expertise to contribute to the development of data modeling, ETL processes, and reporting systems. With over 3 years of hands-on experience in areas such as ETL, Big Query, SQL, Python, or Alteryx, you will play a crucial role in enhancing data engineering processes. Your advanced knowledge of SQL programming and database management will be key in ensuring the efficiency of data operations.

In this role, you will utilize your solid experience with Business Intelligence reporting tools like Power BI, Qlik Sense, Looker, or Tableau to create insightful reports and analytics. Your understanding of data warehousing concepts and best practices will enable you to design robust data solutions. Your problem-solving skills and attention to detail will be instrumental in addressing data quality issues and proposing effective BI solutions.

Collaboration and communication are essential aspects of this role, as you will work closely with stakeholders to define requirements and develop data-driven insights. Your ability to work both independently and as part of a team will be crucial in ensuring the successful delivery of projects. Additionally, your proactive approach to learning new tools and techniques will help you stay ahead in a dynamic environment.

Preferred skills include experience with GCP cloud services, Python, Hive, Spark, Scala, JavaScript, and various BI/reporting tools. Your strong oral, written, and interpersonal communication skills will enable you to effectively convey insights and solutions to stakeholders. A Bachelor's degree in Computer Science, Computer Information Systems, or a related field is required for this role.

Overall, as a Data Engineer, you will play a vital role in developing and maintaining data pipelines, reporting systems, and dashboards. Your expertise in SQL, BI tools, and data validation will contribute to ensuring data accuracy and integrity across all systems. Your analytical mindset and ability to perform root cause analysis will be key in identifying opportunities for improvement and driving data-driven decision-making within the organization.
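Since the posting stresses addressing data quality issues before data reaches reporting, here is a brief, assumption-laden sketch of the kind of validation step that typically sits in such a pipeline. The source file, column names, and thresholds are hypothetical, not taken from the posting.

```python
# Simple data-quality check sketch: validate an extract before loading it to a
# reporting layer. Column names, thresholds, and the source file are invented.
import pandas as pd


def validate_extract(df: pd.DataFrame) -> list:
    """Return a list of human-readable data-quality issues found in the frame."""
    issues = []
    if df["order_id"].duplicated().any():
        issues.append("duplicate order_id values found")
    null_rate = df["revenue"].isna().mean()
    if null_rate > 0.01:
        issues.append(f"revenue null rate {null_rate:.2%} exceeds 1% threshold")
    if (df["order_date"] > pd.Timestamp.today()).any():
        issues.append("order_date contains future dates")
    return issues


df = pd.read_csv("daily_orders.csv", parse_dates=["order_date"])  # placeholder source
problems = validate_extract(df)
if problems:
    raise ValueError("Data quality check failed: " + "; ".join(problems))
```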

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Full Stack Developer, you will be responsible for developing full stack applications by writing clean and efficient code. Your role will involve automating tasks using appropriate tools and scripting, as well as reviewing and debugging code. It will be essential for you to document development phases and closely monitor systems to ensure smooth operation.

To excel in this role, you should have at least 2+ years of experience working with Python, Machine Learning, and related frameworks. You should be proficient in creating RESTful APIs using Python Django and have the ability to seamlessly integrate backend APIs with frontend applications. Additionally, you should possess knowledge of developing applications based on machine learning, deep learning, Gen AI, and LLMs.

Your expertise should include a strong command of languages and tools like Python, SQL, and Big Query. Hands-on experience with ML tools such as NumPy, SciPy, Pandas, Scikit-Learn, TensorFlow, LangChain, and vector databases will be highly advantageous. You should also demonstrate excellent problem-solving skills and the capability to work both independently and collaboratively within a team environment.

If you are passionate about developing innovative applications and have a keen interest in the field of Machine Learning, this role offers you the opportunity to showcase your skills and contribute to cutting-edge projects.
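To make the "RESTful APIs using Python Django" plus ML-integration requirement concrete, below is a minimal sketch of a Django view that serves predictions from a pre-trained model. The model artifact, feature names, and endpoint are assumptions for illustration, not part of the posting.

```python
# Hypothetical Django view: expose a pre-trained scikit-learn model behind a
# small JSON endpoint. Model path and feature names are invented placeholders.
import json

import joblib
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

model = joblib.load("models/churn_model.joblib")  # placeholder trained classifier


@csrf_exempt
@require_POST
def predict(request):
    """Return a churn probability for the features posted as JSON."""
    payload = json.loads(request.body)
    features = [[payload["tenure_months"], payload["monthly_spend"]]]
    score = float(model.predict_proba(features)[0][1])
    return JsonResponse({"churn_probability": score})
```

Wired into a `urls.py` entry such as `path("predict/", predict)`, this would let a frontend post feature values and receive a score back, which is the basic backend-to-frontend integration pattern the posting alludes to.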

Posted 4 days ago

Apply

8.0 - 13.0 years

15 - 22 Lacs

Chennai

Work from Office

Technical specifications:
- 7+ years of experience in managing Data & Analytics service delivery, preferably within a Managed Services or consulting environment.
- Experience in managing support for modern data platforms across Azure, Databricks, Fabric, or Snowflake environments.
- Strong understanding of data engineering and analytics concepts, including ELT/ETL pipelines, data warehousing, and reporting layers.
- Experience in ticketing, issue triaging, SLAs, and capacity planning for BAU operations.
- Hands-on understanding of SQL and scripting languages (Python preferred) for debugging/troubleshooting.
- Proficient with cloud platforms like Azure and AWS; familiarity with DevOps practices is a plus.
- Familiarity with orchestration and data pipeline tools such as ADF, Synapse, dbt, Matillion, or Fabric.
- Understanding of monitoring tools, incident management practices, and alerting systems (e.g., Datadog, Azure Monitor, PagerDuty).
- Strong stakeholder communication, documentation, and presentation skills.
- Experience working with global teams and collaborating across time zones.

Responsibilities:
- Serve as the primary owner for all managed service engagements across all clients, ensuring SLAs and KPIs are met consistently.
- Continuously improve the operating model, including ticket workflows, escalation paths, and monitoring practices.
- Coordinate triaging and resolution of incidents and service requests raised by client stakeholders.
- Collaborate with client and internal cluster teams to manage operational roadmaps, recurring issues, and enhancement backlogs.
- Lead a 40+ member team of Data Engineers and Consultants across offices, ensuring high-quality delivery and adherence to standards.
- Support the transition from project mode to Managed Services, including knowledge transfer, documentation, and platform walkthroughs.
- Ensure documentation is up to date for architecture, SOPs, and common issues.
- Contribute to service reviews, retrospectives, and continuous improvement planning.
- Report on service metrics, root cause analyses, and team utilization to internal and client stakeholders.
- Participate in resourcing and onboarding planning in collaboration with engagement managers, resourcing managers and internal cluster leads.
- Act as a coach and mentor to junior team members, promoting skill development and a strong delivery culture.

Required Skillset:
- ETL or ELT: Azure Data Factory, Databricks, Synapse, dbt (any two mandatory).
- Data Warehousing: Azure SQL Server/Redshift/Big Query/Databricks/Snowflake (any one mandatory).
- Data Visualization: Looker, Power BI, Tableau (basic understanding to support stakeholder queries).
- Cloud: Azure (mandatory), AWS or GCP (good to have).
- SQL and Scripting: Ability to read/debug SQL and Python scripts.
- Monitoring: Azure Monitor, Log Analytics, Datadog, or equivalent tools.
- Ticketing & Workflow Tools: Freshdesk, Jira, ServiceNow, or similar.
- DevOps: Containerization technologies (e.g., Docker, Kubernetes), Git, CI/CD pipelines (exposure preferred).

Behavioural Competencies: At JMAN, we expect our team members to embody the following:
- Self-Driven & Proactive: Own delivery and service outcomes, ensure proactive communication, and manage expectations confidently.
- Adaptability & Resilience: Thrive in a high-performance, entrepreneurial environment and navigate dynamic challenges effectively.
- Operational Excellence: Be process-oriented and focused on SLA adherence, documentation, and delivery consistency.
- Agility & Problem Solving: Adapt quickly to changing priorities, debug effectively, and escalate when needed with clarity.
- Commitment & Engagement: Ensure timesheet compliance, attend meetings regularly, follow company policies, and actively participate in org-wide initiatives.
- Teamwork & Collaboration: Share knowledge, support colleagues, and contribute to talent retention and team success.
- Professionalism & Continuous Improvement: Maintain a professional demeanour and commit to ongoing learning and self-improvement.
- Mentoring & Knowledge Sharing: Guide and support junior team members, fostering a culture of continuous learning and professional growth.
- Advocacy & Organizational Citizenship: Represent JMAN positively, uphold company values, respect others, and honour commitments, including punctuality and timely delivery.

Posted 4 days ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Pipeline Architect at our company, you will be responsible for designing, developing, and maintaining optimal data pipeline architecture. You will monitor incidents, perform root cause analysis, and implement appropriate actions to ensure smooth operations. Additionally, you will troubleshoot issues related to abnormal job execution and data corruption, and automate jobs, notifications, and reports for efficiency.

Your role will also involve optimizing existing queries, reverse engineering for data research and analysis, and calculating the impact of issues on downstream processes for effective communication. You will support failures, address data quality issues, and ensure the overall health of the environment. Maintaining ingestion and pipeline runbooks, portfolio summaries, and DBAR will be part of your responsibilities. Furthermore, you will enable the infrastructure changes, enhancements, and updates roadmap, and build the infrastructure for optimal extraction, transformation, and loading of data from various sources using big data technologies, Python, or web-based APIs. Conducting and participating in code reviews with peers, ensuring effective communication, and understanding requirements will be essential in this role.

To qualify for this position, you should hold a Bachelor's degree in Engineering/Computer Science or a related quantitative field. You must have a minimum of 8 years of programming experience with Python and SQL, as well as hands-on experience with GCP, BigQuery, Dataflow, data warehousing, Apache Beam, and Cloud Storage. Experience with massively parallel processing systems like Spark or Hadoop, source code control systems (Git), and CI/CD processes is required. Involvement in designing, prototyping, and delivering software solutions within the big data ecosystem, developing generative AI models, and ensuring code quality through reviews are key aspects of this role. Experience with Agile development methodologies, improving data governance and quality, and increasing data reliability are also important.

Joining our team at EXL Analytics offers you the opportunity to work in a dynamic and innovative environment alongside experienced professionals. You will gain insights into various business domains, develop teamwork and time-management skills, and receive training in analytics tools and techniques. Our mentoring program and growth opportunities ensure that you have the support and guidance needed to excel in your career. The sky is the limit for our team members, and the experiences gained at EXL Analytics pave the way for personal and professional development within our company and beyond.
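The "automate jobs, notifications, and reports" duty can be pictured with a short sketch like the one below, which runs a BigQuery job and posts an alert if it fails. The stored procedure, project, and webhook URL are placeholders invented for illustration.

```python
# Sketch of job automation with a failure notification, in the spirit of the
# "automate jobs, notifications, and reports" duties described above.
import requests
from google.cloud import bigquery

WEBHOOK_URL = "https://chat.example.com/hooks/data-alerts"  # hypothetical endpoint


def notify(message: str) -> None:
    """Post a short alert to a team chat webhook."""
    requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)


def refresh_daily_mart() -> None:
    """Run an assumed stored procedure that rebuilds a daily reporting mart."""
    client = bigquery.Client(project="example-project")
    job = client.query("CALL `example-project.reporting.refresh_daily_sales_mart`()")
    job.result()  # block until the job finishes or raises


if __name__ == "__main__":
    try:
        refresh_daily_mart()
    except Exception as exc:  # broad catch so every failure produces an alert
        notify(f"Daily sales mart refresh failed: {exc}")
        raise
```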

Posted 6 days ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Hyderabad

Hybrid

Job Title: Scrum Master - Enterprise Data & Analytics
Location: Hyderabad, India
Experience: 8+ years
Job Type: Full Time
Industry: IT / Software Services
Functional Area: Project Management / Data & Analytics
Role Category: Scrum Master / Agile Coach

Job Description: We are looking for an experienced Scrum Master to lead Agile execution within a SAFe (Scaled Agile Framework) environment for our Enterprise Data & Analytics team. You will collaborate with cross-functional teams including Product Owners, Data Engineers, and Analysts to deliver scalable data solutions aligned with enterprise goals.

Key Responsibilities:
- Facilitate Scrum and SAFe ceremonies including stand-ups, sprint planning, reviews, and retrospectives.
- Manage and optimize program increment (PI) planning sessions and cross-team dependencies.
- Collaborate with Product Owners to maintain and prioritize backlogs with actionable user stories.
- Track and remove impediments, risks, and blockers to ensure smooth delivery.
- Drive Agile maturity across teams by coaching and mentoring on Agile principles and SAFe best practices.
- Coordinate with stakeholders, release train engineers (RTEs), and leadership to align deliverables with business outcomes.
- Use Agile metrics (e.g. velocity, burn-down charts) to monitor and report team performance.
- Promote DevOps integration and continuous improvement initiatives.

Requirements:
- Bachelor's in Computer Science, Business, or a related field (MBA is a plus).
- 8+ years of experience as a Scrum Master, Agile Coach, or Data Product Manager.
- SAFe certification (e.g. SAFe Scrum Master or SAFe Product Management) preferred.
- Proficient in JIRA / JIRA Align and other Agile tools.
- Strong experience working in data environments (BigQuery, GCP preferred).
- Exposure to BI tools like Tableau, ThoughtSpot, or Cognos Analytics.
- Knowledge of data governance and tools (Collibra is a plus).
- Excellent communication, stakeholder management, and facilitation skills.
- Strong problem-solving and conflict-resolution capabilities.
- Comfortable working in a global and cross-functional environment.

To apply, send your resume to: krishnanjali.m@technogenindia.com

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You will be joining a global IT product team that specializes in implementing and maintaining services for tax decision and reporting on a global scale. Our products are built on SaaS cloud solutions from vendors like Vertex and Edicom. As a Specialty Development Senior, your responsibilities will involve managing configuration, integration, and implementation of these solutions using Informatica Cloud Middleware (IICS) and developing Python and BASH scripts on Linux servers and Google Cloud Platform.

Your role will require independent development of software using Informatica Cloud Middleware and Python/BASH scripts to deliver user stories that enhance our software product. You will be expected to drive application development, deployment, and testing standards for Informatica Cloud Middleware, and design and implement Data Integration, Application Integration, and BPEL Service concepts in IICS and the Informatica Process Developer tool. Your experience in software development will be crucial in executing and evaluating tests to ensure correct application functionality and addressing any software deficiencies.

Key Skills:
- ETL / Informatica
- SOAP
- Extensible Markup Language (XML)
- Linux
- Python
- SQL
- Communications

Preferred Skills:
- Big Query
- Agile Software Development
- GitHub
- Tekton
- GCP Cloud Run

Required Experience:
- Minimum 3 years of experience in software development and maintenance with Informatica Cloud Middleware (IICS).
- Proficiency in Python, Linux, and Google Cloud Platform (BigQuery, Cloud Run).
- Strong multitasking abilities with advanced communication skills in English.
- Experience collaborating in a global team environment.

Preferred Experience:
- Familiarity with other tools related to software development and deployment, such as GitHub and Tekton.

Education Requirement: Bachelor's Degree

Join us at TekWissen Group, where we value workforce diversity and provide equal opportunities for all.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have at least 3 years of hands-on experience in data modeling, ETL processes, developing reporting systems, and data engineering using tools such as ETL, Big Query, SQL, Python, or Alteryx. Additionally, you should possess advanced knowledge of SQL programming and database management.

Moreover, you must have a minimum of 3 years of solid experience working with Business Intelligence reporting tools like Power BI, Qlik Sense, Looker, or Tableau, along with a good understanding of data warehousing concepts and best practices. Excellent problem-solving and analytical skills are essential for this role, as well as being detail-oriented with strong communication and collaboration skills. The ability to work both independently and as part of a team is crucial for success in this position.

Preferred skills include experience with GCP cloud services such as BigQuery, Cloud Composer, Dataflow, CloudSQL, Looker, Looker ML, Data Studio, and GCP QlikSense. Strong SQL skills and proficiency in various BI/reporting tools to build self-serve reports, analytic dashboards, and ad-hoc packages leveraging enterprise data warehouses are also desired. Moreover, you should have at least 1 year of experience in Python and Hive/Spark/Scala/JavaScript.

Additionally, you should have a solid understanding of consuming data models, developing SQL, addressing data quality issues, proposing BI solution architecture, articulating best practices in end-user visualizations, and development delivery experience. Furthermore, it is important to have a good grasp of BI tools, architectures, and visualization solutions, coupled with an inquisitive and proactive approach to learning new tools and techniques. Strong oral, written, and interpersonal communication skills are necessary, and you should be comfortable working in a dynamic environment where problems are not always well-defined.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The ideal candidate for this position will be responsible for designing, developing, and maintaining an optimal data pipeline architecture. You will be required to monitor incidents, perform root cause analysis, and implement appropriate actions to solve issues related to abnormal job execution and data corruption conditions. Additionally, you will automate jobs, notifications, and reports to improve efficiency.

You should possess the ability to optimize existing queries, reverse engineer for data research and analysis, and calculate the impact of issues on the downstream side for effective communication. Supporting failures and data quality issues, and ensuring environment health, will also be part of your role. Furthermore, you will maintain ingestion and pipeline runbooks, portfolio summaries, and DBAR, while enabling the infrastructure changes, enhancements, and updates roadmap. Building the infrastructure for optimal extraction, transformation, and loading of data from various sources using big data technologies, Python, or web-based APIs will be essential. You will participate in code reviews with peers and will need excellent communication skills for understanding and conveying requirements effectively.

As a candidate, you are expected to have a Bachelor's degree in Engineering/Computer Science or a related quantitative field. Technical skills required include a minimum of 8 years of programming experience with Python and SQL, experience with massively parallel processing systems like Spark or Hadoop, and a minimum of 6-7 years of hands-on experience with GCP, BigQuery, Dataflow, data warehousing, data modeling, Apache Beam, and Cloud Storage. Proficiency in source code control systems (Git) and CI/CD processes, involvement in designing, prototyping, and delivering software solutions within the big data ecosystem, and hands-on experience with generative AI models are also necessary. You should be able to perform code reviews to ensure code meets acceptance criteria, have experience with Agile development methodologies and tools, and work towards improving data governance and quality to enhance data reliability.

EXL Analytics offers a dynamic and innovative environment where you will collaborate with experienced analytics consultants. You will gain insights into various business aspects, develop effective teamwork and time-management skills, and receive training in analytical tools and techniques. Our mentoring program provides guidance and coaching to every employee, fostering personal and professional growth. The opportunities for growth and development at EXL Analytics are limitless, setting the stage for a successful career within the company and beyond.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You are an experienced software engineer who will be joining our growing software engineering product team at Ford Motor Company in Chennai. In this role, you will be responsible for supporting finished vehicle logistics by developing and maintaining a global logistics data warehouse solution on the GCP platform. This solution will provide visibility into the shipment of finished vehicles from the plant to dealers.

Your main responsibilities will include working with Java, full stack Java development, Spring Boot, GCP, Big Query, GCP Cloud Run, microservices, REST APIs, Pub/Sub, Kafka, AI, and Terraform technologies. Your expertise in these areas will be crucial in ensuring the successful development and maintenance of the logistics data warehouse solution.

To be successful in this role, you should have a minimum of 8 years of experience as a Java and Spring full stack engineer. Additionally, experience with AI agents is preferred but not mandatory. If you are someone who is passionate about software engineering, has a strong background in Java and Spring technologies, and is looking to work on cutting-edge solutions in the automotive industry, then we encourage you to apply for this position. Immediate joiners will be given preference.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Product Analyst at Landmark Digital, you will play a crucial role in championing data-driven decision-making for the digital function. Your responsibilities will include owning the Ecommerce KPIs of the squad and preparing, monitoring, and reporting them regularly to the relevant teams. You will also be tasked with tracking and measuring the incremental value derived from every feature launched by the squad.

Your role will involve proactively engaging with product owners to identify high-value impact items on the backlog through data-driven analysis. Additionally, you will interact with business stakeholders to troubleshoot data issues and collaborate with both business and technical teams for quick resolutions. You will lead hypothesis testing of business impact decisions, conduct A/B tests for feature launches, and analyze and report their performance regularly.

A key aspect of your role will be to deep dive into metrics and issues, present clear recommendations based on data discovery, and collate data from various sources to generate insights for different audience groups, ranging from senior management to tech teams. You will work closely with squads to address any data-related impediments and ensure smooth workflow.

To be successful in this role, you should have at least 5+ years of experience in a Product Analyst role within the consumer goods ecommerce sector. You must possess a thorough understanding of customer journeys on ecommerce platforms and be adept at analyzing user funnels, dropouts, conversions, traffic, NPS, reviews, and ratings. Your track record should demonstrate the ability to drive value and influence key business metrics through data-informed product innovation. Proficiency in data analysis tools such as SQL, Big Query, advanced MS Excel, and Power BI is essential. Experience in data mining, scripting with R/Python, and familiarity with data science and analytics platforms like SAS and Azure Databricks will be beneficial. Your communication skills should be top-notch, enabling you to effectively educate stakeholders and motivate them to act on your data-driven recommendations.

As part of the Landmark Digital team, you will work in a dynamic environment where collaboration, innovation, and continuous learning are encouraged. Your role will involve working with cross-functional teams, requiring excellent organizational, time management, analytical, and problem-solving skills. Attention to detail, the ability to prioritize tasks, and the ability to meet deadlines will be crucial for success in this position.
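For the A/B testing responsibility mentioned above, a typical readout is a two-proportion z-test on conversion rates. The sketch below uses statsmodels with invented session and conversion counts purely to illustrate the calculation.

```python
# Illustrative A/B test readout: compare conversion rates between control and
# variant using a two-proportion z-test. The counts below are invented.
from statsmodels.stats.proportion import proportions_ztest

conversions = [1840, 1985]   # converted sessions: [control, variant]
sessions = [42000, 41800]    # total sessions:     [control, variant]

z_stat, p_value = proportions_ztest(conversions, sessions)
control_rate = conversions[0] / sessions[0]
variant_rate = conversions[1] / sessions[1]

print(f"control {control_rate:.2%} vs variant {variant_rate:.2%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
```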

Posted 1 week ago

Apply

5.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Pune

Hybrid

- Develop and maintain PL/SQL procedures, functions, packages, and triggers.
- Knowledge of batch scheduling; familiarity with ETL tools.
- Prior experience in using Oracle PL/SQL and Unix for test automation purposes.
- Design and optimize complex SQL queries for data extraction and reporting.
- Perform data modelling, schema design, and database tuning.
- Collaborate with application developers to integrate backend logic with frontend applications.
- Conduct unit testing and support system integration and user acceptance testing.
- Monitor and troubleshoot database performance issues.
- Maintain documentation for database structures and processes.
- Participate in code reviews and ensure adherence to best practices.

Skills: PL/SQL, Big Query (Google), Unix shell scripting.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Big Data - Data Modeller at our organization, you will be responsible for leading moderately complex initiatives and deliverables within technical domain environments. Your role will involve contributing to large-scale planning of strategies; designing, coding, testing, debugging, and documenting projects and programs associated with the technology domain, including upgrades and deployments. You will also review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures, as well as resolve issues and lead a team to meet client needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements.

Collaboration and consultation with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals will be a key aspect of your responsibilities. Additionally, you will lead projects, act as an escalation point, provide guidance and direction to less experienced staff, and collaborate with scrum stakeholders to implement modernized and sustainable technology roadmaps.

The ideal candidate for this role should possess strong data modeling skills and expertise in Big Data, ETL, Hadoop, Google BigQuery, Kafka event streaming, API development, and CI/CD. A minimum of 6 years of hands-on experience in Big Data software enterprise application development, with the use of continuous integration and delivery when developing code, is required. Experience in the Banking/Financial technology domain is preferred.

You must have a good understanding of current and future trends and practices in technology, and be able to proactively manage risk through the implementation of the right controls and escalate where required. Your responsibilities will also include working with the engineering manager, product owner, and team to ensure that the product is delivered with quality, on time, and within budget. Strong verbal and written communication skills are essential, as you will be required to work in a global development environment.

This is a full-time position with a day shift schedule, and the work location is in person in Bangalore. The application deadline for this role is 08/08/2025.
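The Kafka event-streaming skill called out above can be illustrated with a small consumer loop. The sketch below uses the kafka-python client with a hypothetical topic, broker, and downstream loader; the role itself may well use Java or another stack, so treat this purely as a pattern sketch.

```python
# Minimal sketch of the Kafka event-streaming piece of such a role: consume
# events and hand them to a downstream warehouse loader. Topic, brokers, and
# the handler are placeholders, not details from the posting.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transaction-events",                      # hypothetical topic
    bootstrap_servers=["broker-1:9092"],       # placeholder broker address
    group_id="data-modelling-loader",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)


def load_to_warehouse(event: dict) -> None:
    """Placeholder for the BigQuery/warehouse write step."""
    print("loading", event.get("event_id"))


for message in consumer:
    load_to_warehouse(message.value)
```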

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a Lead Data Engineer, you will be responsible for leading cloud modernization initiatives, developing scalable data pipelines, and enabling real-time data processing for enterprise-level systems. Your expertise in Google Cloud Platform (GCP) and BigQuery will be crucial in driving the transformation of legacy infrastructure into a robust, cloud-native data ecosystem.

Your key responsibilities will include analyzing legacy on-premises and hybrid cloud data warehouse environments, leading the migration of large-scale datasets to Google BigQuery, and designing data migration strategies to ensure data quality, integrity, and performance. You will also be responsible for integrating data from various structured and unstructured sources, building real-time streaming pipelines for large-scale ingestion and processing of IoT and telemetry data, and modernizing legacy SSIS packages into cloud-native ETL pipelines.

To excel in this role, you should have at least 5 years of experience in Data Engineering with a strong focus on cloud and big data technologies, along with a minimum of 2 years of hands-on experience with GCP, specifically BigQuery. Your experience in migrating on-premise data systems to the cloud, development with Apache Airflow, Python, and Apache Spark, and expertise in streaming data ingestion will be highly valuable. Additionally, your strong SQL development skills and understanding of cloud architecture, data modeling, and data warehouse design will be essential for this role.

Preferred qualifications include a GCP Professional Data Engineer certification, experience with modern data stack tools like dbt, Kafka, or Terraform, and exposure to ML pipelines, analytics engineering, or DataOps/DevOps methodologies.

Joining us will provide you with the opportunity to work with cutting-edge technologies in a fast-paced, collaborative environment, lead cloud transformation initiatives at scale, and benefit from competitive compensation and remote flexibility with growth opportunities.
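To picture the batch side of the migration work described above, here is a hedged Airflow sketch that stages daily extracts from Cloud Storage into BigQuery. The DAG id, bucket, dataset, and table names are placeholders and assume the Google provider package is installed.

```python
# Sketch of an Airflow DAG for a daily batch load from Cloud Storage into
# BigQuery. Bucket, dataset, and table names are invented placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import (
    GCSToBigQueryOperator,
)

with DAG(
    dag_id="legacy_warehouse_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="example-migration-bucket",
        source_objects=["exports/orders/{{ ds }}/*.csv"],
        destination_project_dataset_table="example-project.staging.orders",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )
```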

Posted 1 week ago

Apply

5.0 - 8.0 years

1 Lacs

Chennai

Hybrid

Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place: one that benefits lives, communities and the planet.

Job Title: Software Engineer Practitioner
Location: Chennai
Work Type: Hybrid

Position Description: This is a Data Engineer position for the CVDE GDIA team to support Data Products in software development of high-priority products. The successful candidate will work with a variety of technical and business teams at the client, including GDIA, to build, enable, transform and migrate data products, processes and applications. Our engineers are involved in all aspects of the product life cycle: conceptual design, development, deployment, and DevOps.

Skills Required: Big Query, Python, GCP, Terraform, GCP Cloud Run, GitHub
Skills Preferred: Java, API, Spring Boot

Experience Required:
- Bachelor's or master's degree in Computer Science, Engineering, or a related field of study.
- Must have data engineering competency in the Google Cloud Platform (Big Query, DBT, Dataproc, Airflow DAGs/Cloud Composer and Terraform/Tekton).
- Highly proficient in SQL and Python programming.
- Strong understanding of database concepts and experience with multiple database technologies, optimizing query and data processing performance.
- Knowledge of Agile (Scrum) methodology; experience in writing user stories.
- Ability to work effectively across organizations, product teams and business partners.
- Able to communicate effectively both internally and externally (with stakeholders).
- Experience with data warehouse/ETL processes will be an added advantage.
- Strong process discipline and thorough understanding of IT processes (ISP, Data Security).

Education Required: Bachelor's Degree

TekWissen Group is an equal opportunity employer supporting workforce diversity.

Posted 1 week ago

Apply

8.0 - 10.0 years

1 - 1 Lacs

Chennai

Hybrid

Overview: TekWissen is a global workforce management provider throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place: one that benefits lives, communities and the planet.

Job Title: Software Engineer Consultant/Expert
Location: Chennai
Work Type: Hybrid

Position Description: We are looking for an experienced software engineer to join our growing software engineering product team responsible for supporting finished vehicle logistics at the client's company. As a team member of this product team at the client's IS IT (Industrial Systems IT), the resource will be responsible for developing and maintaining a global logistics data warehouse solution on the GCP platform that provides visibility into the shipment of finished vehicles from plant to dealers.

Skills Required: Java, Full Stack Java Developer, Spring Boot, GCP, Big Query, GCP Cloud Run, Microservices, REST APIs, Pub/Sub, Kafka, AI, Terraform

Experience Required: 8 years of experience as a Java and Spring full stack engineer
Experience Preferred: AI Agent
Education Required: Bachelor's Degree

TekWissen Group is an equal opportunity employer supporting workforce diversity.

Posted 1 week ago

Apply

4.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Our vision is to transform how the world uses information to enrich life for all. Micron Technology is a world leader in innovating memory and storage solutions that accelerate the transformation of information into intelligence, inspiring the world to learn, communicate and advance faster than ever.

Responsibilities and Tasks

Understand the Business Problem and the Relevant Data
Maintain an understanding of company and department strategy
Translate analysis requirements into data requirements
Identify and understand the data sources that are relevant to the business problem
Develop conceptual models that capture relationships within the data
Define the data-quality objectives for the solution
Be a subject matter expert in data sources and reporting options

Architect Data Management Systems
Use understanding of the business problem and the nature of the data to select appropriate data management systems (Big Data, OLTP, OLAP, etc.)
Design and implement optimum data structures in the appropriate data management system (GCP BQ, Snowflake, SQL Server, etc.) to satisfy the data requirements
Plan methods for archiving/deleting information

Develop, Automate, and Orchestrate an Ecosystem of ETL Processes for Varying Volumes of Data
Identify and select the optimum methods of access for each data source (real-time/streaming, batch, delayed, static)
Determine transformation requirements and develop processes to bring structured and unstructured data from the source to a new physical data model
Develop processes to efficiently load the transformed data into the data management system

Prepare Data to Meet Analysis Requirements
Work with the data scientist to implement strategies for cleaning and preparing data for analysis (e.g., outliers, missing data, etc.)
Develop and code data extracts
Follow standard methodologies to ensure data quality and data integrity
Ensure that the data is fit for use in data science applications

Qualifications and Experience:
4-7 years of experience developing, delivering, and/or supporting data engineering, advanced analytics or business intelligence solutions
Ability to work with multiple operating systems and generic tools
Experienced in developing ETL/ELT processes using Apache NiFi and cloud solutions such as GCP, Big Query, Snowflake or any equivalent
Significant experience with big data processing and/or developing applications and data sources using different cloud services
Experienced in integration with different ingestion, scheduling, logging, alerting and monitoring cloud services
Understanding of how distributed systems work
Familiarity with software architecture (data structures, data schemas, etc.)
Strong working knowledge of databases (cloud DBs like BQ, Snowflake, AlloyDB or any equivalents), including SQL and NoSQL
Strong mathematics background; analytical, problem solving, and organizational skills
Strong communication skills (written, verbal and presentation)
Experience working in a global, multi-functional environment
Minimum of 2 years of experience in any of the following: at least one high-level, object-oriented language (e.g., C#, C++, Java, Python, Perl); one or more web programming languages (PHP, MySQL, Python, Perl, JavaScript, ASP, etc.); one or more data extraction tools (SSIS, Informatica, etc.); software development
Ability to travel as needed

Education: B.S. degree in Computer Science, Software Engineering, Electrical Engineering, Applied Mathematics or a related field of study. M.S. degree preferred.
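
For illustration only (a minimal sketch, not part of the posting; the source file, dataset and table names are hypothetical), a simple batch extract-transform-load step of the kind described above could look like this in Python:

```python
# Minimal sketch of an extract-transform-load step into BigQuery.
# Assumes pandas, pyarrow and google-cloud-bigquery are installed;
# the source file and table id are placeholders.
import pandas as pd
from google.cloud import bigquery


def run_etl(source_csv: str, table_id: str) -> None:
    # Extract: read the raw source file
    df = pd.read_csv(source_csv)

    # Transform: drop rows missing the business key and normalise column names
    df = df.dropna(subset=["order_id"])
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]

    # Load: batch-load the dataframe into the warehouse table
    client = bigquery.Client()
    load_job = client.load_table_from_dataframe(df, table_id)
    load_job.result()  # wait for the load job to complete
    print(f"Loaded {len(df)} rows into {table_id}")


if __name__ == "__main__":
    run_etl("orders.csv", "my-project.staging.orders")
```

A production pipeline would normally add schema enforcement, data-quality checks and orchestration (for example via NiFi or a scheduler), as the qualifications above describe.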

Posted 1 week ago

Apply

5.0 - 7.0 years

5 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Responsibilities:
Design, develop and maintain the data architecture, data models and standards for various Data Integration & Data Warehousing projects in GCP cloud, combined with other technologies
Ensure the use of Big Query SQL, Java/Python/Scala and Spark reduces lead time to delivery and aligns to the overall group strategic direction so that cross-functional development is usable
Take ownership of technical solutions from a design and architecture perspective, ensure the right direction and propose resolutions to potential data pipeline-related problems
Expand and optimize our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams
Provide technical guidance and support to a vibrant engineering team, coaching and teaching your teammates how to do great data engineering
A deep understanding of data architecture principles and data warehouse methodologies, specifically Kimball or Data Vault

Requirements:
An expert in GCP, with at least 5-7 years of delivery experience with Dataproc, Dataflow, Big Query, Compute, Pub/Sub, and Cloud Storage
Highly knowledgeable in industry best practices for ETL design, principles, and concepts
At least 3 years of experience with programming languages such as Python
A DevOps and Agile engineering practitioner with experience in test-driven development
Experienced in the following technologies: Google Cloud Platform, Dataproc, Dataflow, Spark SQL, Big Query SQL, PySpark and Python/Scala
Experienced in the following Big Data technologies: Spark, Hadoop, Kafka, etc.

Technologies: Big Data, Spark, Python/Scala, GCP
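
As a purely illustrative sketch of the PySpark-on-GCP work listed above (all table names and the temporary bucket are hypothetical; assumes a Dataproc cluster with the spark-bigquery connector available):

```python
# Minimal sketch of a PySpark job that reads from BigQuery, aggregates,
# and writes the result back to BigQuery via the spark-bigquery connector.
# All table names and the GCS bucket are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-trip-aggregation").getOrCreate()

# Read the source table from BigQuery
trips = (
    spark.read.format("bigquery")
    .option("table", "my-project.raw.trips")
    .load()
)

# Aggregate daily trip counts and average duration
daily = (
    trips.groupBy(F.to_date("start_ts").alias("trip_date"))
    .agg(
        F.count("*").alias("trip_count"),
        F.avg("duration_min").alias("avg_duration_min"),
    )
)

# Write the result back to BigQuery, staging through a temporary GCS bucket
(
    daily.write.format("bigquery")
    .option("table", "my-project.analytics.daily_trips")
    .option("temporaryGcsBucket", "my-temp-bucket")
    .mode("overwrite")
    .save()
)
```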

Posted 1 week ago

Apply

2.0 - 5.0 years

2 - 5 Lacs

Bengaluru, Karnataka, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Data, Analytics & AI Management Level Associate Job Description & Summary At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. Why PWC At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us . At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Responsibilities: Job title: Data Engineer About The Role: As a Junior/Senior Data Engineer, you'll be taking the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization. Responsibilities: Architecting and designing complex data systems and pipelines. Leading and mentoring junior data engineers and team members. Collaborating with cross-functional teams to define data requirements. Implementing advanced data quality checks and ensuring data integrity. Optimizing data processes for efficiency and scalability. Overseeing data security and compliance measures. Evaluating and recommending new technologies to enhance data infrastructure. Providing technical expertise and guidance for critical data projects. Required Skills & Experience: Proficiency in designing and building complex data pipelines and data processing systems. Leadership and mentorship capabilities to guide junior data engineers and foster skill development. Strong expertise in data modeling and database design for optimal performance. Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness. Knowledge of data governance principles, ensuring data quality, security, and compliance. Familiarity with big data technologies like Hadoop, Spark, or NoSQL. Expertise in implementing robust data security measures and access controls. Effective communication and collaboration skills for cross-functional teamwork and defining data requirements. 
Skills: Cloud: Azure/GCP/AWS; DE Technologies: ADF, Big Query, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.
Mandatory Skill Sets: Cloud: Azure/GCP/AWS; DE Technologies: ADF, Big Query, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.
Preferred Skill Sets: Cloud: Azure/GCP/AWS; DE Technologies: ADF, Big Query, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.
Years of Experience Required: 2-4 years
Education Qualification: BE/BTECH, ME/MTECH, MBA, MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study Required: Bachelor of Engineering, Master of Engineering, Master of Business Administration
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: AWS Glue, Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Apache Hadoop, Azure Data Factory, Communication, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation, Data Warehouse, Data Warehouse Indexing + 13 more
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship
Government Clearance Required

Posted 1 week ago

Apply

Featured Companies