
962 BigQuery Jobs - Page 7

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

6 - 10 Lacs

Bengaluru

Work from Office


Overview
We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom's branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world's creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in the areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, and Consulting & Advisory Services. We currently have 4,000+ awesome colleagues (in Annalect India) who are committed to solving our clients' pressing business issues. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together.

Responsibilities
- Collaborate across departments and act as a data facilitator to identify potentially erroneous data, then report and fix identified issues.
- Act as a data entry specialist while maintaining speed and accuracy in day-to-day operations.
- Provide support to internal members with the agency's Hyperlocal platform.
- Ensure the security, integrity, and governance of all stored information.
- Maintain awareness of best practices related to data acumen, business trends, and evolving technologies.
- Develop a strong understanding of internal and external data sources.
- Be a strong, honest, and proactive communicator, acting as a collaborative liaison between business and technology teams.
- Assist the Retail Tech Data team in regular data audits.
- Understand best data practices, normalization, and data governance.
- Effectively and efficiently explain and understand the agency's basic data needs.
- Knowledge of AdTech, MarTech, CRM metrics, and related business concepts is a big plus.

Qualifications
- B.A./B.S. degree or equivalent in Information Systems, Statistics, or a comparable field of study.
- Hands-on experience working with data, data integration technologies, and databases.
- Experience with data governance rules and models.
- Comfortable with new technologies and iterating quickly.
- Able to balance multiple concurrent projects.
- Experience with BigQuery, ETL pipelines, API requirements, and BI tools is a plus.
- Strong attention to detail and communication skills when validating data and reporting on data quality and integrity.

Posted 1 week ago

Apply

6.0 - 9.0 years

12 - 16 Lacs

Bengaluru

Work from Office


Overview
We are looking for an experienced GCP BigQuery Lead to architect, develop, and optimize data solutions on Google Cloud Platform, with a strong focus on BigQuery. The role involves leading warehouse setup initiatives, collaborating with stakeholders, and ensuring scalable, secure, and high-performance data infrastructure.

Responsibilities
- Lead the design and implementation of data pipelines using BigQuery, Datorama, Dataflow, and other GCP services.
- Architect and optimize data models and schemas to support analytics and reporting use cases.
- Implement best practices for performance tuning, partitioning, and cost optimization in BigQuery.
- Collaborate with business stakeholders to translate requirements into scalable data solutions.
- Ensure data quality, governance, and security across all BigQuery data assets.
- Automate workflows using orchestration tools.
- Mentor junior resources and lead script reviews, documentation, and knowledge sharing.

Qualifications
- 6+ years of experience in data analytics, with 3+ years on GCP and BigQuery.
- Strong proficiency in SQL, with experience writing complex queries and optimizing performance.
- Hands-on experience with ETL/ELT tools and frameworks.
- Deep understanding of data warehousing, dimensional modeling, and data lake architectures.
- Good exposure to data governance, lineage, and metadata management.
- GCP Data Engineer certification is a plus.
- Experience with BI tools (e.g., Looker, Power BI).
- Good communication and team leadership skills.
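
Since this listing emphasizes partitioning and cost optimization in BigQuery, here is a minimal, hedged sketch using the google-cloud-bigquery Python client to create a date-partitioned, clustered table; the project, dataset, and column names are illustrative assumptions, not part of the original posting.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical table; adjust project, dataset, and schema to your environment.
table_id = "my-project.analytics.events"

schema = [
    bigquery.SchemaField("event_date", "DATE"),
    bigquery.SchemaField("user_id", "STRING"),
    bigquery.SchemaField("event_name", "STRING"),
    bigquery.SchemaField("revenue", "NUMERIC"),
]

table = bigquery.Table(table_id, schema=schema)
# Partition by date so queries can prune partitions (less data scanned = lower cost).
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",
)
# Cluster by common filter columns to further reduce scanned bytes.
table.clustering_fields = ["user_id", "event_name"]

table = client.create_table(table)
print(f"Created {table.full_table_id}, partitioned on {table.time_partitioning.field}")
```

Queries that filter on event_date then scan only the matching partitions, which is the main cost-control lever the posting refers to.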

Posted 1 week ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Hybrid


Key Responsibilities:
1. Cloud Infrastructure Management:
   - Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP).
   - Implement best practices for GCP IAM, VPCs, Cloud Storage, onboarding of ClickHouse and Apache Superset tooling, and other GCP services.
2. Kubernetes and Containerization:
   - Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications.
   - Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines:
   - Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD.
   - Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance:
   - Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption.
   - Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support:
   - Work closely with development teams to containerize applications and ensure smooth deployment on GCP.
   - Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization:
   - Monitor and optimize GCP resource usage to ensure cost efficiency.
   - Implement strategies to reduce cloud spend without compromising performance.

Required Skills and Qualifications:
1. Certifications:
   - Must hold a Google Cloud Professional DevOps Engineer or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise:
   - Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools:
   - Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build.
   - Experience with containerization tools like Docker.
4. Kubernetes Expertise:
   - In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets.
   - Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting:
   - Strong scripting skills in Python, Bash, or Go.
   - Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging:
   - Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking:
   - Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills:
   - Strong problem-solving and troubleshooting skills.
   - Excellent communication and collaboration abilities.
   - Ability to work in an agile, fast-paced environment.
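
As one hedged illustration of the pod-scaling practice listed above, this sketch uses the official kubernetes Python client (autoscaling/v2) to create a HorizontalPodAutoscaler for a hypothetical Deployment named web; all names and thresholds are assumptions, not anything from the posting.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig with cluster access

# Scale the hypothetical "web" Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization.
hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=70
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```

The same object is usually kept as YAML in version control; the API form above is handy when autoscaling policy is managed programmatically.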

Posted 1 week ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Chennai

Work from Office


Role Summary:
This role requires experienced developers to create detailed member-level information supporting MART application development, which involves application (UX) development, integration with data sources, and ETL pipeline development.

Responsibilities:
- Design, develop, and maintain the user interface (UI) of the MART application using R Shiny, JavaScript, and CSS, ensuring a seamless and intuitive user experience.
- Develop and maintain efficient and scalable ETL pipelines using GCP Dataform and BigQuery to extract, transform, and load data from various on-premise (Oracle) and cloud-based sources, including leveraging Big R query for accessing on-premise Oracle data.
- Develop and implement data manipulation and transformation logic in R, creating a longitudinal data format with unique member identifiers.
- Develop and implement comprehensive logging and monitoring using Splunk.
- Collaborate with other developers, data scientists, and stakeholders to ensure the timely delivery of high-quality software.
- Participate in all phases of the software development lifecycle, from requirements gathering and design to testing and deployment.
- Contribute to the maintenance and improvement of existing application functionality.
- Work within a Git-based version control system.
- Manage data in a dedicated GCP project, adhering to best practices for cloud security and scalability.
- Contribute to the creation of summary statistics for groups via the Population Assessment Tool (PAT).

Required Skillsets:
- Strong proficiency in R Shiny for UI development.
- Strong proficiency in JavaScript, CSS, and HTML for front-end development.
- Proven experience in designing, developing, and maintaining ETL pipelines, preferably using GCP Dataform and BigQuery.
- Experience with data manipulation and transformation in R, including creating longitudinal datasets.
- Experience working with on-premise databases (Oracle), preferably using Big R query for data access.
- Experience with Git for version control.
- Experience with Splunk for logging and monitoring.
- Experience working with cloud platforms, specifically Google Cloud Platform (GCP).
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.

Good to Have Skillsets:
- Experience with Tableau for dashboarding and data visualization.
- Experience with advanced data visualization techniques.
- Experience working in an Agile development environment.

Shift Requirement: 3 PM to 12 midnight
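
The transformation work above centers on reshaping member data into a longitudinal (long) format keyed by a unique member identifier. The role calls for R, but as a language-neutral illustration, here is a minimal pandas sketch of the same wide-to-long reshape; the column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical wide-format extract: one row per member, one column per month.
wide = pd.DataFrame(
    {
        "member_id": ["M001", "M002"],
        "2024-01": [3, 0],
        "2024-02": [1, 2],
    }
)

# Reshape to longitudinal format: one row per member per period,
# keyed by the unique member identifier.
long = wide.melt(id_vars="member_id", var_name="month", value_name="visit_count")
long = long.sort_values(["member_id", "month"]).reset_index(drop=True)
print(long)
```

In R the equivalent step would typically use tidyr's pivot_longer; the shape of the output is the same.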

Posted 1 week ago

Apply

5.0 - 7.0 years

13 - 17 Lacs

Bengaluru

Work from Office


A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

In your role, you will be responsible for:
- Working skillfully across multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc.
- Python and SQL work experience is a must; you should be proactive, collaborative, and able to respond to critical situations.
- Analysing data for functional business requirements and interfacing directly with customers.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
- Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer.
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
- End-to-end functional knowledge of the data pipeline/transformation implementations you have delivered, including the purpose/KPIs for which each data transformation was done.

Preferred technical and professional experience:
- Experience with AEM core technologies: OSGi services, Apache Sling, Granite framework, Java Content Repository API, Java 8+, and localization.
- Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git; knowledge of patterns and good practices for designing and developing quality, clean code; knowledge of HTML, CSS, JavaScript, and jQuery.
- Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
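
Pub/Sub appears in both skills lists above; as a hedged sketch, this minimal google-cloud-pubsub example publishes a message to a hypothetical topic (the project and topic IDs are assumptions, not details from the listing).

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# Hypothetical project/topic; replace with real resource IDs.
topic_path = publisher.topic_path("my-project", "orders-events")

# Message data must be bytes; keyword arguments become string attributes.
future = publisher.publish(
    topic_path, data=b'{"order_id": 123}', source="checkout-service"
)
print(f"Published message ID: {future.result()}")
```

A Dataflow or Cloud Run subscriber on the other end would typically consume these messages and land them in BigQuery.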

Posted 1 week ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Gurugram

Work from Office


About the Role:
Grade Level (for internal use): 10

The Team: As a member of the Data Transformation team, you will work on building ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global Market Intelligence and our clients. You will spearhead development of production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged toward thoughtful risk-taking and self-initiative.

The Impact: The Data Transformation team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will develop our next generation of new products while enhancing existing ones, aiming to solve high-impact business problems.

What's in it for you:
- Be a part of a global company and build solutions at enterprise scale
- Collaborate with a highly skilled and technically strong team
- Contribute to solving high-complexity, high-impact problems

Key Responsibilities:
- Build production-ready data acquisition and transformation pipelines from ideation to deployment
- Be a hands-on problem solver and developer helping to extend and manage the data platforms
- Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions

What We're Looking For:
- 3-5 years of professional software work experience
- Expertise in Python and Apache Spark
- OOP design patterns, test-driven development, and enterprise system design
- Experience building data processing workflows and APIs using frameworks such as FastAPI, Flask, etc.
- Proficiency in API integration, with experience working with REST APIs and integrating external and internal data sources
- SQL (any variant; bonus if it is a big data variant)
- Linux OS (e.g., bash toolset and other utilities)
- Version control experience with Git, GitHub, or Azure DevOps
- Problem-solving and debugging skills
- Software craftsmanship, adherence to Agile principles, and taking pride in writing good code
- Techniques to communicate change to non-technical people

Nice to have:
- Core Java 17+, preferably Java 21+, and the associated toolchain
- DevOps with a keen interest in automation
- Apache Avro
- Apache Kafka
- Kubernetes
- Cloud expertise (AWS and GCP preferably)
- Other JVM-based languages, e.g., Kotlin, Scala
- C#, in particular .NET Core
- Data warehouses (e.g., Redshift, Snowflake, BigQuery)

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority - Ratings (Strategic Workforce Planning)
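
Given this role's requirement for building data-serving APIs with frameworks such as FastAPI, here is a minimal, hedged sketch of such a service; the endpoint, models, and entity list are purely illustrative assumptions, not part of the listing.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Illustrative data-extraction API")

class ExtractionRequest(BaseModel):
    document_text: str

class ExtractionResponse(BaseModel):
    entities: list[str]

# Toy in-memory "model"; a real service would call an ML pipeline here.
KNOWN_ENTITIES = {"S&P Global", "BigQuery", "Kafka"}

@app.post("/extract", response_model=ExtractionResponse)
def extract(req: ExtractionRequest) -> ExtractionResponse:
    if not req.document_text.strip():
        raise HTTPException(status_code=400, detail="document_text is empty")
    found = [e for e in KNOWN_ENTITIES if e in req.document_text]
    return ExtractionResponse(entities=found)

# Run with: uvicorn app:app --reload  (assuming this file is app.py)
```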

Posted 1 week ago

Apply

4.0 - 8.0 years

12 - 16 Lacs

Ahmedabad

Work from Office


About the Role:
Grade Level (for internal use): 11

The Team: The usage reporting team gathers raw usage data from disparate products and produces unified datasets across Market Intelligence departmental lines. We deliver essential intelligence for both public and internal reporting purposes.

The Impact: As Lead Developer for the usage reporting team, you will play a key role in delivering essential insights for both public and private users of the S&P Global Market Intelligence platforms. Our data provides the basis for strategy and insight that our team members depend on to deliver essential intelligence for our clients across the world.

What's in it for you:
- Work with a variety of subject matter experts to develop and improve data offerings
- Exposure to a wide variety of datasets and stakeholders when tackling daily challenges
- Oversee the complete SDLC pipeline, from initial architecture and design through development and support for data pipelines

Responsibilities:
- Produce technical design documents and conduct technical walkthroughs.
- Build and maintain data pipelines in T-SQL, Python, Java, Spark, and SSIS.
- Be part of an agile team that designs, develops, and maintains the enterprise data systems and other related software applications.
- Participate in design sessions for new product features, data models, and capabilities.
- Collaborate with key stakeholders to develop system architectures, API specifications, and implementation requirements.

What We're Looking For:
- 4-8 years of experience as a Senior Developer with strong experience in Python, Java, Spark, and T-SQL.
- 4-10 years of experience with public cloud platforms (AWS, GCP).
- Experience with frameworks such as Apache Spark, SSIS, Kafka, and Kubernetes.
- 4-10 years of data warehousing experience (Redshift, SSAS Cube, BigQuery).
- Strong self-starter and independent, self-motivated software engineer.
- Strong leadership skills and proven ability to collaborate effectively with engineering leadership and key stakeholders.

About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you - and your career - need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority - Ratings (Strategic Workforce Planning)
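
Spark pipelines are central to the stack described above; as one hedged sketch, this PySpark snippet rolls raw usage events up into a daily per-product dataset (the schema and storage paths are assumptions, not details from the listing).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage-rollup").getOrCreate()

# Hypothetical raw usage events: one row per user action per product.
events = spark.read.parquet("s3://example-bucket/raw/usage-events/")

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "product_id")
    .agg(
        F.countDistinct("user_id").alias("unique_users"),
        F.count("*").alias("total_events"),
    )
)

# Partitioned output keeps downstream date-filtered reads cheap.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/usage-daily/"
)
```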

Posted 1 week ago

Apply

8.0 - 12.0 years

35 - 45 Lacs

Chennai

Work from Office


STAFF ENGINEER (Accounts Payable)

Toast is a technology company that specializes in providing a comprehensive all-in-one SaaS product and financial technology solutions tailored for the restaurant industry. Toast offers a suite of tools to help restaurants manage their operations, including point of sale, payment processing, supplier management, digital ordering and delivery, marketing and loyalty, and employee scheduling and team management. The platform is designed to streamline operations, enhance customer experiences, and improve overall efficiency for restaurants.

Are you bready* for a change? As a Staff Engineer on the Accounts Payable team, you will be responsible for developing and maintaining back-end systems that support AP operations, automating processes, enhancing user interfaces, and integrating various systems. In this role, you will work on architecting, developing, and maintaining backend systems and services that support our business and technical goals. You will collaborate closely with product managers, frontend engineers, and other stakeholders to deliver high-quality, scalable, and reliable backend solutions. Join us to improve our platform and add the next generation of products.

About this roll* (Responsibilities)
As a Staff Engineer on our team, you will:
- Be part of a team working collaboratively with UX, PM, QA, and other engineers designing, building, and maintaining high-performance, flexible, and highly scalable SaaS applications.
- Lead technical initiatives, mentor junior engineers, and provide guidance on best practices for backend development.
- Champion design reviews and help drive the technical direction of the team.
- Develop automated workflows for invoice processing, payment approvals, and vendor management.
- Optimize query performance and ensure data integrity within large datasets.
- Implement machine learning or Optical Character Recognition (OCR) to streamline data extraction from invoices and minimize manual intervention.
- Lead, mentor, and coach engineers on industry-standard development best practices.
- Collaborate with other engineering teams to ensure that developed solutions are scalable, reliable, and secure.
- Use cutting-edge technologies and best practices to optimize for performance and usability, ultimately enhancing the overall restaurant management experience.
- Advocate best coding practices to raise the bar for you, your team, and the company.
- Build a high-quality, reliable, and high-performing framework for reporting, analytics, and insights on the Toast platform.
- Document solution designs; write, review, test, and roll out code to production; and work with PMs to capture and action customer feedback to iteratively enhance the customer experience.
- Propose and implement improvements to enhance system efficiency, scalability, and user experience.
- Present findings and insights to senior leadership and stakeholders.
- Be passionate about making users happy and seeing people use your product in the wild.

Do you have the right ingredients*? (Requirements)
- 8+ years of hands-on experience delivering high-quality, reliable software using C#, Java, Kotlin, or other object-oriented languages.
- Build and maintain RESTful APIs, GraphQL endpoints, or other integrations with internal and external services.
- Design, optimize, and maintain relational (SQL) and NoSQL databases (SQL Server, Postgres, DynamoDB); work on data modeling, query optimization, and performance tuning.
- Identify bottlenecks, optimize application performance, and scale backend systems to handle high traffic and large data volumes.
- Strong experience with automated testing (unit, integration, end-to-end tests) and test-driven development (TDD).
- Proficient with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
- Experience working in a team with Agile/Scrum methodology.
- Experience supporting and debugging large distributed applications.
- Experience in monitoring, troubleshooting, and improving system performance through logging and metrics.
- Familiarity with data platforms for scalable processing of large datasets is a plus.
- Strong problem-solving skills, with the ability to identify, diagnose, and resolve complex technical issues.
- Excellent communication skills to work with both technical and non-technical stakeholders.
- Self-motivated, with a passion for learning and staying current with new technologies.
- A minimum of a bachelor's degree is required.

This role follows a hybrid work model, requiring a minimum of two days per week in the office.
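
The OCR responsibility above is often prototyped with an off-the-shelf engine; as a hedged sketch (not Toast's actual stack), this pytesseract example pulls text from an invoice image and extracts a total with a regex. The file name and pattern are assumptions.

```python
import re

import pytesseract
from PIL import Image

# Hypothetical scanned invoice; requires the Tesseract binary installed locally.
image = Image.open("invoice_scan.png")
raw_text = pytesseract.image_to_string(image)

# Naive extraction of a line like "Total: $1,234.56"; real pipelines would add
# layout analysis, validation, and human review for low-confidence results.
match = re.search(r"Total:?\s*\$?([\d,]+\.\d{2})", raw_text)
if match:
    total = float(match.group(1).replace(",", ""))
    print(f"Extracted invoice total: {total:.2f}")
else:
    print("No total found; route to manual review.")
```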

Posted 1 week ago

Apply

5.0 - 10.0 years

17 - 30 Lacs

Noida

Remote


JD - Required skills:
- 5+ years of industry experience in data engineering support and enhancement.
- Proficient in Google Cloud Platform (GCP) services such as Dataflow, BigQuery, Cloud Storage, and Pub/Sub.
- Strong understanding of data pipeline architectures and ETL processes.
- Experience with the Python programming language for data processing.
- Knowledge of SQL and experience with relational databases.
- Familiarity with version control systems like Git.
- Ability to analyze, troubleshoot, and resolve complex data pipeline issues.
- Software engineering experience in optimizing data pipelines to improve performance and reliability.
- Continuously optimize data pipeline efficiency, reduce operational costs, and reduce the number of issues/failures.
- Automate repetitive tasks in data processing and management.
- Experience in monitoring and alerting for data pipelines.
- Continuously improve data pipeline reliability through analysis and testing.
- Perform SLA-oriented monitoring for critical pipelines; provide suggestions and, after business approval, implement changes for SLA adherence where needed.
- Monitor the performance and reliability of GCP data pipelines, Informatica ETL workflows, MDM, and Control-M jobs.
- Maintain infrastructure reliability for GCP data pipelines, Informatica ETL workflows, MDM, and Control-M jobs.
- Conduct post-incident reviews and implement improvements for data pipelines.
- Develop and maintain documentation for data pipeline systems and processes.
- Excellent communication and documentation skills.
- Strong problem-solving and analytical skills.
- Open to working in a 24x7 shift.
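
Since Dataflow pipelines are the core of this role, here is a minimal, hedged Apache Beam (Python SDK) sketch of the read-transform-write shape such pipelines take; the paths and parsing logic are illustrative, and running on Dataflow would add DataflowRunner pipeline options.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line: str) -> tuple[str, int]:
    # Hypothetical CSV layout: user_id,bytes_used
    user_id, bytes_used = line.split(",")
    return user_id, int(bytes_used)

# Runs on the local DirectRunner by default; pass --runner=DataflowRunner
# (plus project/region/temp_location) to execute the same code on Dataflow.
with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/usage/*.csv")
        | "Parse" >> beam.Map(parse_line)
        | "SumPerUser" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda user, total: f"{user},{total}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output/daily_usage")
    )
```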

Posted 1 week ago

Apply

6.0 - 9.0 years

0 - 3 Lacs

Pune, Chennai, Bengaluru

Hybrid


Preferred candidate profile:
GCP Cloud Data Engineers with 5+ years of experience and strong experience in cloud migrations and pipelines.
- Good understanding of database and data engineering concepts.
- Experience in cloud migration is a must.
- Experience in data ingestion and processing from different sources.
- Conceptual knowledge of understanding data, building ETL pipelines, data integrations, and ODS/DW.
- Hands-on experience in SQL and Python.
- Experience in Java development is required.
- Hands-on working experience with Google Cloud Platform Dataflow, Data Transfer services, and Airflow.
- Hands-on working experience in data preprocessing techniques using Dataflow, Dataproc, and Dataprep.
- Hands-on working experience in BigQuery (an ingestion sketch follows this list).
- Knowledge of Kafka, Pub/Sub, GCS, and schedulers is required.
- Proficiency with PostgreSQL is preferred.
- Experience with both real-time and scheduled pipelines is preferred.
- Cloud certification is a plus.
- Experience in implementing ETL pipelines.
- Familiarity with microservices or Enterprise Application Integration patterns is a plus.
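
Data ingestion from GCS into BigQuery, mentioned above, is often a single load job; a minimal, hedged sketch with the google-cloud-bigquery client follows (the bucket, dataset, and schema-autodetect choices are assumptions).

```python
from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical source and destination; replace with real resources.
uri = "gs://example-bucket/exports/customers_*.csv"
table_id = "my-project.staging.customers"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,   # header row
    autodetect=True,       # infer schema; explicit schemas are safer in prod
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the load job completes

table = client.get_table(table_id)
print(f"Loaded {table.num_rows} rows into {table_id}")
```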

Posted 1 week ago

Apply

3.0 - 5.0 years

7 - 8 Lacs

Hyderabad, Pune, Chennai

Hybrid


Candidates must have relevant experience of a maximum of 2 years in BigQuery and Python.

Posted 1 week ago

Apply

14.0 - 19.0 years

45 - 55 Lacs

Pune

Hybrid


Key Skills: GCP, BigQuery, DevOps

Roles and Responsibilities:
- Build, enhance, and provide production support for a GCP-based application, using technologies like Google BigQuery, Python, and Workflows.
- Own E2E delivery - requirements analysis, planning, design, development, environment build, data load, testing, deployment, code reviews, etc. - for all the technical and functional changes in a release.
- Work with Ops, Dev, and Test Engineers to ensure operational issues (performance, alerting, design-defect-related issues, etc.) are identified and addressed at all stages of a product or service release/change.
- Be responsible for automating the continuous integration/continuous delivery pipeline within a DevOps product/service team, driving a culture of continuous improvement; be aware of the various code-scanning findings and provide meaningful remediation solutions.
- Ensure all the IT General Controls (e.g., SDLC, DEPL, etc.) are followed, and be able to provide evidence when required by audit reviews from time to time.
- Ensure high-quality outcomes that are performant, in line with the business SLAs, and within the agreed costs.
- Be responsible for upskilling and management of pod teams.
- Lead by example on various organizational initiatives, including Engineering Excellence, Test Automation, and Service Excellence, and promote their adoption and the associated behaviours within the team.

Skills Required:
- Knowledge of Google Cloud, BigQuery, shell scripting, SQL scripts, and any batch scheduling tool such as Control-M.
- Previous experience working on a data warehouse/regulatory reporting project, or in risk and compliance.
- Knowledge of production support processes and procedures (change, incident, problem, and service management).
- Basic understanding of the DevOps operating model and tools such as Jenkins/Ansible/GitHub/CI-CD pipelines.
- Ability to work independently to analyse and resolve production problems, and to work with a global team across time zones to resolve them.
- Knowledge of Agile delivery processes and DevOps practices.
- The candidate will be required to continually enhance their skills within a few specialisms, which include gaining business knowledge, understanding of interfaces, impact assessments, development, code scanning, and deployments, along with operational/business support.

Education: Bachelor's Degree in a related field

Posted 1 week ago

Apply

1.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office


Overview
- Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark/Databricks/BigQuery/Airflow/Composer.
- Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization.
- Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities
- Ensure data quality and integrity through data validation and cleansing processes.
- Analyze existing SQL queries, functions, and stored procedures for performance improvements.
- Develop database routines like procedures, functions, and views/materialized views.
- Participate in data migration projects and understand technologies like Delta Lake/warehouse/BigQuery.
- Debug and solve complex problems in data pipelines and processes.

Qualifications
- Bachelor's degree in computer science, engineering, or a related field.
- Strong understanding of distributed data processing platforms like Databricks and BigQuery.
- Proficiency in Python, PySpark, and SQL programming languages.
- Experience with performance optimization for large datasets.
- Strong debugging and problem-solving skills.
- Fundamental knowledge of cloud services, preferably Azure or GCP.
- Excellent communication and teamwork skills.

Nice to Have:
- Experience in data migration projects.
- Understanding of technologies like Delta Lake/warehouse.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose - to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com.
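
The partitioning and Spark-optimization techniques in this listing can be made concrete; the hedged PySpark sketch below shows two common levers, a broadcast hint for a small dimension table and a partitioned output layout (all paths and column names are hypothetical).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("optimization-demo").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table.
facts = spark.read.parquet("/data/transactions")
dims = spark.read.parquet("/data/merchant_dim")

# Lever 1: broadcast the small table so the join avoids shuffling the fact table.
joined = facts.join(F.broadcast(dims), on="merchant_id", how="left")

daily_totals = (
    joined.withColumn("txn_date", F.to_date("txn_ts"))
    .groupBy("txn_date", "merchant_category")
    .agg(F.sum("amount").alias("total_amount"))
)

# Lever 2: partition the output on the date column so downstream
# date-filtered reads prune files instead of scanning everything.
daily_totals.write.mode("overwrite").partitionBy("txn_date").parquet(
    "/data/curated/daily_merchant_totals"
)
```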

Posted 1 week ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Hyderabad, Delhi / NCR, Mumbai (All Areas)

Work from Office


GCP (Google Cloud Platform) expert with skills in GCP, DevOps, CI/CD, Docker, Kubernetes, Python, Bash/PowerShell scripting, Git, GitHub/GitLab, GCP services like Compute Engine, App Engine, Cloud Storage, and BigQuery, and Infrastructure as Code (IaC).

Posted 1 week ago

Apply

2.0 - 4.0 years

6 - 11 Lacs

Bengaluru

Work from Office


Zeta Global is looking for an experienced Machine Learning Engineer with industry-proven, hands-on experience delivering machine learning models to production to solve business problems.

To be a good fit to join our AI/ML team, you should ideally:
- Be a thought leader who can work with cross-functional partners to foster a data-driven organisation.
- Be a strong team player with experience contributing to a large project as part of a collaborative team effort.
- Have extensive knowledge of and expertise with machine learning engineering best practices and industry standards.
- Empower the product and engineering teams to make data-driven decisions.

What you need to succeed:
- 2 to 4 years of proven experience as a Machine Learning Engineer in a professional setting.
- Proficiency in any programming language (Python preferable).
- Prior experience in building and deploying machine learning systems.
- Experience with containerization: Docker and Kubernetes.
- Experience with AWS cloud services like EKS, ECS, EMR, Lambda, and others.
- Fluency with workflow management tools like Airflow or dbt.
- Familiarity with distributed batch compute technologies such as Spark.
- Experience with modern data warehouses like Snowflake or BigQuery.
- Knowledge of MLflow, Feast, and Terraform is a plus.
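
Workflow management with Airflow, listed above, typically means expressing a model-refresh job as a DAG; here is a minimal, hedged Airflow 2.x sketch in which the task bodies, names, and schedule are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pull training data from the warehouse")  # placeholder body

def train_model():
    print("fit and log the model, e.g. to MLflow")  # placeholder body

with DAG(
    dag_id="daily_model_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ name; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)

    extract >> train  # train only runs after feature extraction succeeds
```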

Posted 2 weeks ago

Apply

7.0 - 10.0 years

9 - 13 Lacs

Mangaluru, Udupi

Hybrid


Cloud Leader (Jr. Data Architect)
- 7+ years of IT experience.
- Should have worked on any two structured/SQL databases (SQL Server, Oracle, Postgres) and one NoSQL database.
- Should be able to work with the presales team, proposing the best solution/architecture.
- Should have design experience on BigQuery/Redshift/Synapse.
- Manage the end-to-end product life cycle, from proposal to delivery, and regularly check with delivery on architecture improvement.
- Should be aware of security protocols for in-transit data and encryption/decryption of PII data.
- Good understanding of analytics tools for effective analysis of data.
- Should have been part of a production deployment team and a production support team.
- Experience with big data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
- Experience with object-oriented/object-function scripting languages: Python, Java, C++, Scala, etc.
- Experience in ETL and data warehousing.
- Experience and firm understanding of relational and non-relational databases like MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
- Experience with cloud platforms like AWS, GCP, and Azure.
- Experience with workflow management using tools like Apache Airflow.

Preferred:
- Aware of design best practices for OLTP and OLAP systems.
- Should be part of the team designing the DB and pipelines.
- Should be able to propose the right architecture and data warehouse/data mesh approaches.
- Should be aware of data sharing and multi-cloud implementation.
- Should have exposure to load testing methodologies, debugging pipelines, and delta load handling.
- Worked on heterogeneous migration projects.
- Experience on multiple cloud platforms.

Roles and Responsibilities:
- Develop high-performance and scalable solutions using GCP that extract, transform, and load big data.
- Design and build production-grade data solutions, from ingestion to consumption, using Java/Python.
- Design and optimize data models on GCP using GCP data stores such as BigQuery.
- Optimize data pipelines for performance and cost in large-scale data lakes.
- Write complex, highly optimized queries across large data sets and create data processing layers.
- Closely interact with data engineers to identify the right tools to deliver product features by performing POCs.
- Be a collaborative team player who interacts with business, BAs, and other data/ML engineers.
- Research new use cases for existing data.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Noida

Work from Office


About Paytm Group: Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.

Our Story: Join the Recharges & Bill Payments Product team at Paytm, which touches millions of users daily across 26+ categories, including Mobile Recharges, Electricity Bills, DTH, Credit Card Bills, and FASTag. As a team, we operate at the intersection of scale, reliability, and innovation, constantly reimagining how India pays its bills. You'll be part of a fast-paced, data-driven environment with end-to-end ownership, where every experiment and product improvement directly impacts the user experience for millions. If you're excited about solving complex problems at scale and building for Bharat, this is the place for you.

About the Role: A Product Analyst focuses on data, statistical analysis, and reporting to help investigate and analyze business performance, provide insights, and drive recommendations to improve performance.

Expectations:
- Derive business insights from data with a focus on driving business-level metrics.
- Ability to interact with and convince business stakeholders.
- Develop insightful analyses about the business and their strategic and operational implications.
- Partner with stakeholders at all levels to establish current and ongoing data support and reporting needs.
- Analyze data from multiple angles, looking for trends that highlight areas of concern or opportunities.
- Design, create, and deliver data reports, dashboards, and extracts, and/or deliver presentations to address strategic questions.
- Identify data needs and drive data quality improvement projects.

Key Skills Required:
- Ideally 2-5 years of experience working in data analytics and business intelligence; candidates from B2C consumer internet product companies are preferred.
- Proven work experience with MS Excel, Google Analytics, SQL, Data Studio, and any BI tool, in a business analyst or similar role.
- Comfortable working in a fast-changing and ambiguous environment.
- Critical thinking and very detail-oriented.
- In-depth understanding of datasets, data, and the business.
- Capable of demonstrating good business judgement.

Education: Applicants must have an engineering academic background with specialization in data science.

Why join us: We aim to bring half a billion Indians into the mainstream economy, and everyone working here is striving to achieve that goal. Our success is rooted in our people's collective energy and unwavering focus on the customers, and that's how it will always be. We are the largest merchant acquirer in India.

Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It is your opportunity to be a part of the story!

Posted 2 weeks ago

Apply

6.0 - 11.0 years

18 - 25 Lacs

Hyderabad

Work from Office


SUMMARY
Data Modeling Professional
Location: Hyderabad/Pune

Experience: The ideal candidate should possess at least 6 years of relevant experience in data modeling, with proficiency in SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).

Key Responsibilities:
- Develop and configure data pipelines across various platforms and technologies.
- Write complex SQL queries for data analysis on databases such as SQL Server, Oracle, and Hive.
- Create solutions to support AI/ML models and generative AI.
- Work independently on specialized assignments within project deliverables.
- Provide solutions and tools to enhance engineering efficiencies.
- Design processes, systems, and operational models for end-to-end execution of data pipelines.

Preferred Skills: Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is advantageous.

Requirements:
- Minimum 6 years of experience in data modeling with SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).
- Proficiency in writing complex SQL queries for data analysis.
- Experience with GCP, particularly Airflow, Dataproc, and BigQuery, is an advantage.
- Strong problem-solving and analytical abilities.
- Excellent communication and presentation skills.
- Ability to deliver high-quality materials against tight deadlines and to work effectively under pressure with rapidly changing priorities.

Note: The ability to communicate efficiently at a global level is paramount.

Posted 2 weeks ago

Apply

12.0 - 16.0 years

45 - 55 Lacs

Hyderabad

Hybrid


Key Skills: GCP, BigQuery, ETL, SQL, Regulatory Reporting

Roles and Responsibilities:
- Own the end-to-end engineering delivery of the Liquidity Reporting product.
- Responsible for planning and delivering releases (including both functional and technical book-of-work items) per the agreed release cadence and committed scope.
- Ensure high-quality outcomes that are performant, aligned with the business SLAs, and within the agreed costs.
- Ensure compliance with end-to-end controls and HSBC standard processes.
- Lead pods/teams of cross-functional engineers to drive product success.
- Responsible for upskilling and managing pod teams to enhance technical expertise.
- Actively drive continuous improvement by focusing on automation, process improvement, and reuse, aiming to meet or exceed the DevSecOps and EEI metrics for the SVS.
- People management: line management of both permanent and contractual resources.

Experience Requirements:
- 12-16 years of experience in implementing complex, data-intensive SQL-based products.
- Experience delivering products/solutions on cloud platforms, particularly GCP and BigQuery.
- Expertise and experience managing large data volumes, ETL (Extract, Transform, Load), and reporting.
- Exposure to the regulatory reporting functional domain, preferably liquidity reporting.
- Proven track record of leading and managing complex products on cloud platforms.
- Experience managing global teams across diverse geographic locations and time zones.
- Excellent communication and interpersonal skills, with the ability to work effectively as a team player.

Education: B.Tech/M.Tech (dual), B.Tech, or M.Tech.

Posted 2 weeks ago

Apply

1.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office


Overview
- Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark/Databricks/BigQuery/Airflow/Composer.
- Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization.
- Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities
- Ensure data quality and integrity through data validation and cleansing processes.
- Analyze existing SQL queries, functions, and stored procedures for performance improvements.
- Develop database routines like procedures, functions, and views/materialized views.
- Participate in data migration projects and understand technologies like Delta Lake/warehouse/BigQuery.
- Debug and solve complex problems in data pipelines and processes.

Qualifications
- Bachelor's degree in computer science, engineering, or a related field.
- Strong understanding of distributed data processing platforms like Databricks and BigQuery.
- Proficiency in Python, PySpark, and SQL programming languages.
- Experience with performance optimization for large datasets.
- Strong debugging and problem-solving skills.
- Fundamental knowledge of cloud services, preferably Azure or GCP.
- Excellent communication and teamwork skills.

Nice to Have:
- Experience in data migration projects.
- Understanding of technologies like Delta Lake/warehouse.

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing.
- Flexible working arrangements, advanced technology, and collaborative workspaces.
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results.
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients.
- A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development.
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles.
- An environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose - to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact.

Role Overview: We are looking for a Data Scientist to join and build intelligent, data-driven solutions for our client that enable impactful decisions. This role requires contributions across the data science lifecycle, from data wrangling and exploratory analysis to building and deploying machine learning models. Whether you're just getting started or have years of experience, we're looking for individuals who are curious, analytical, and driven to make a difference with data.

Responsibilities:
- Design, develop, and deploy machine learning models and analytical solutions.
- Conduct exploratory data analysis and feature engineering.
- Own or contribute to the end-to-end data science pipeline: data cleaning, modeling, validation, and deployment (a minimal pipeline sketch follows this listing).
- Collaborate with cross-functional teams (engineering, product, business) to define problems and deliver measurable impact.
- Translate business challenges into data science problems and communicate findings clearly.
- Implement A/B tests, statistical tests, and experimentation strategies.
- Support model monitoring, versioning, and continuous improvement in production environments.
- Evaluate new tools, frameworks, and best practices to improve model accuracy and scalability.

Required Skills:
- Strong programming skills in Python, including libraries such as pandas, NumPy, scikit-learn, matplotlib, and seaborn.
- Proficiency in SQL; comfortable querying large, complex datasets.
- Sound understanding of statistics, machine learning algorithms, and data modeling.
- Experience building end-to-end ML pipelines.
- Exposure to or hands-on experience with model deployment tools like FastAPI, Flask, and MLflow.
- Experience with data visualization and insight communication.
- Familiarity with version control tools (e.g., Git) and collaborative workflows.
- Ability to write clean, modular code and document processes clearly.

Nice to Have:
- Experience with deep learning frameworks like TensorFlow or PyTorch.
- Familiarity with data engineering tools like Apache Spark, Kafka, Airflow, and dbt.
- Exposure to MLOps practices and managing models in production environments.
- Working knowledge of cloud platforms like AWS, GCP, or Azure (e.g., SageMaker, BigQuery, Vertex AI).
- Experience designing and interpreting A/B tests or causal inference models.
- Prior experience in high-growth startups or cross-functional leadership roles.

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, Engineering, or a related field.

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
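As a concrete illustration of the end-to-end pipeline work described above, here is a minimal scikit-learn sketch; the CSV file, feature columns, and target are hypothetical placeholders.

```python
# A minimal end-to-end modeling sketch: load, split, preprocess, fit, evaluate.
# The dataset and column names (customer_events.csv, recency, etc.) are
# hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

df = pd.read_csv("customer_events.csv")  # hypothetical dataset
X, y = df[["recency", "frequency", "monetary"]], df["churned"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Bundling preprocessing and the model in one Pipeline ensures the exact
# same steps run at training time and at serving time.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```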

Posted 2 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Mumbai

Work from Office

Naukri logo

At Johnson & Johnson, we believe health is everything. Our strength in healthcare innovation empowers us to build a world where complex diseases are prevented, treated, and cured, where treatments are smarter and less invasive, and solutions are personal. Through our expertise in Innovative Medicine and MedTech, we are uniquely positioned to innovate across the full spectrum of healthcare solutions today to deliver the breakthroughs of tomorrow, and profoundly impact health for humanity. Learn more at

Job Function: Career Programs
Job Sub Function: Non-LDP Intern/Co-Op
Job Category: Career Program
All Job Posting Locations: Mumbai, India

Job Description: This job has been posted to onboard pre-identified candidates. Please do not apply if not invited.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

22 - 25 Lacs

Hyderabad

Remote

Naukri logo

Company Overview: We are a fast-growing startup revolutionizing the contact center industry with GenAI-powered solutions. Our innovative platform is designed to enhance customer engagement.

Job Description: We are looking for a skilled and experienced Data Engineer to design, build, and optimize scalable data pipelines and architectures that power data-driven decision-making across the organization. The ideal candidate has a proven track record of writing complex stored procedures and optimizing query performance on large datasets.

Requirements:
- Architect, develop, and maintain scalable and secure data pipelines to process structured and unstructured data from diverse sources.
- Collaborate with data scientists, BI analysts, and business stakeholders to understand data requirements.
- Optimize data workflows and processing for performance; ensure data quality, reliability, and governance.
- Hands-on experience with modern data platforms such as Snowflake, Redshift, BigQuery, or Databricks.
- Strong knowledge of T-SQL and SQL Server Management Studio (SSMS).
- Experience writing complex stored procedures and views, and tuning query performance on large datasets.
- Strong understanding of database management systems (SQL, NoSQL) and data warehousing concepts.
- Good knowledge of and hands-on experience in tuning the database at the memory level; able to tweak SQL queries.
- In-depth knowledge of data modeling principles and methodologies (e.g., relational, dimensional, NoSQL).
- Excellent analytical and problem-solving skills with meticulous attention to detail.
- Hands-on experience with data transformation techniques, including data mapping, cleansing, and validation (a minimal sketch follows this listing).
- Proven ability to work independently and manage multiple priorities in a fast-paced environment.
- Work closely with cross-functional teams to gather and analyze requirements, develop database solutions, and support application development efforts.
- Knowledge of cloud database solutions (e.g., Azure SQL Database, AWS RDS).
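To illustrate the data mapping, cleansing, and validation requirement above, here is a minimal pandas sketch; the file names, columns, and business rules are hypothetical.

```python
# A minimal cleansing/validation sketch over a hypothetical orders file.
import pandas as pd

raw = pd.read_csv("orders_raw.csv")  # hypothetical source file

# Cleansing: normalize types, trim and uppercase strings, drop duplicates.
clean = (
    raw.assign(
        order_date=pd.to_datetime(raw["order_date"], errors="coerce"),
        amount=pd.to_numeric(raw["amount"], errors="coerce"),
        region=raw["region"].str.strip().str.upper(),
    )
    .drop_duplicates()
)

# Validation: quarantine rows that fail basic rules instead of silently
# dropping them, so data-quality issues stay visible. ~(amount > 0) also
# catches missing amounts, because NaN comparisons evaluate to False.
invalid = clean[clean["order_date"].isna() | ~(clean["amount"] > 0)]
valid = clean.drop(invalid.index)

invalid.to_csv("orders_quarantine.csv", index=False)
valid.to_csv("orders_clean.csv", index=False)
```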

Posted 2 weeks ago

Apply

5.0 - 8.0 years

7 - 12 Lacs

Pune

Hybrid

Naukri logo

We are looking for a highly skilled Senior Python Developer for a 6-month contractual role. The position involves designing and implementing data-oriented and scalable backend solutions using Python and related technologies. The candidate must have 5-8 years of experience and be well-versed in distributed systems, cloud platforms (AWS/GCP), and data pipelines. Strong expertise in Airflow, Kafka, SQL, and modern software development practices (TDD, CI/CD, DevSecOps) is essential. Exposure to AdTech, ML/AI, SaaS, and container technologies (Docker/Kubernetes) is a strong plus. The position is hybrid, based in Pune, and only immediate joiners are eligible.
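As an illustration of the Airflow-based pipeline work this role centres on, here is a minimal DAG sketch (Airflow 2.4+ syntax); the DAG id, task names, and callables are hypothetical placeholders.

```python
# A minimal Airflow DAG sketch: one ingest task followed by a validation
# task, scheduled daily. Airflow 2.4+ accepts the `schedule` argument.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for real extract/load logic (e.g., Kafka -> warehouse).
    print("extract and load step executed")

def validate():
    # Placeholder for data-quality checks on the loaded batch.
    print("validation step executed")

with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="extract_and_load",
                            python_callable=extract_and_load)
    check = PythonOperator(task_id="validate", python_callable=validate)
    ingest >> check  # validation runs only after ingest succeeds
```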

Posted 2 weeks ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: Python (Programming Language), Apache Spark, Google BigQuery
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Apache Spark, Python (Programming Language), Google BigQuery (a minimal PySpark-on-BigQuery sketch follows this listing).
- Strong understanding of data processing frameworks and distributed computing.
- Experience in developing and deploying scalable applications.
- Familiarity with cloud platforms and services.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15 years full time education is required.

Qualification: 15 years full time education
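To make the PySpark-plus-BigQuery combination above concrete, here is a minimal sketch that assumes the open-source spark-bigquery connector is available on the cluster; the project, dataset, and table names are hypothetical.

```python
# A minimal sketch of PySpark reading from Google BigQuery. Assumes the
# spark-bigquery connector is on the classpath; the table reference
# my-project.sales.orders is hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-bigquery-sketch").getOrCreate()

orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales.orders")  # hypothetical table
    .load()
)

# A typical distributed aggregation of the kind the role describes.
daily = orders.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
daily.show()
```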

Posted 2 weeks ago

Apply