8 - 12 years
18 - 25 Lacs
Noida
Remote
Job Title: Data Modeler - Enterprise Data Platform (BigQuery, Retail Domain) Location: Remote Duration: Contract Timing: US EST Hours We have an immediate need for an IT Data Modeler - Enterprise Data Platform (BigQuery, Retail Domain) reporting to the Enterprise Data Platform (EDP) Architecture Team. Job Summary: We are seeking an experienced Data Modeler to support the Enterprise Data Platform (EDP) initiative, focusing on building and optimizing curated data assets on Google BigQuery. This role requires expertise in data modeling, strong knowledge of retail data, and an ability to collaborate with data engineers, business analysts, and architects to create scalable and high-performing data structures. Key Responsibilities: 1. Data Modeling & Curated Layer Design Design logical, conceptual, and physical data models for the EDP's curated layer in BigQuery. Develop fact and dimension tables, ensuring adherence to dimensional modeling best practices (Kimball methodology). Optimize data models for performance, scalability, and query efficiency in a cloud-native environment. Work closely with data engineers to translate models into efficient BigQuery implementations (partitioning, clustering, materialized views). 2. Data Standardization & Governance Define and maintain data definitions, relationships, and business rules for curated assets. Ensure data integrity, consistency, and governance across datasets. Work with Data Governance teams to align models with enterprise data standards and metadata management policies. 3. Collaboration with Business & Technical Teams Engage with business analysts and product teams to understand data needs, ensuring models align with business requirements. Partner with data engineers and architects to implement best practices for data ingestion and transformation. Support BI & analytics teams by ensuring curated models are optimized for downstream consumption (e.g., Looker, Tableau, Power BI, AI/ML models, APIs). Required Qualifications: 7+ years of experience in data modeling and architecture in cloud data platforms (BigQuery preferred). Expertise in dimensional modeling (Kimball), data vault, and normalization/denormalization techniques. Strong SQL skills, with hands-on experience in BigQuery performance tuning (partitioning, clustering, query optimization). Understanding of retail data models (e.g., sales, inventory, pricing, supply chain, customer analytics). Experience working with data engineering teams to implement models in ETL/ELT pipelines. Familiarity with data governance, metadata management, and data cataloging. Excellent communication skills and ability to translate business needs into structured data models. Prior experience working in retail or e-commerce data environments. Exposure to modern data architectures (data lakehouse, event-driven data processing). Familiarity with the GCP ecosystem (Dataflow, Pub/Sub, Cloud Storage) and BigQuery security best practices. Thanks & Regards: Abhinav Krishna Srivastava Mob: +91-9667680709 Email: asrivastava@fcsltd.com
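As a rough illustration of the BigQuery modeling work this posting describes (partitioned and clustered fact tables, materialized views for downstream consumption), here is a minimal Python sketch using the google-cloud-bigquery client. The project, dataset, table, and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: create a date-partitioned, clustered retail fact table and a
# materialized view on top of it. Assumes google-cloud-bigquery is installed and
# default credentials are configured; all names below are illustrative.
from google.cloud import bigquery

client = bigquery.Client(project="my-edp-project")  # hypothetical project

ddl_fact = """
CREATE TABLE IF NOT EXISTS curated.fct_sales (
  sale_ts    TIMESTAMP NOT NULL,
  store_id   INT64     NOT NULL,
  sku_id     INT64     NOT NULL,
  qty        INT64,
  net_amount NUMERIC
)
PARTITION BY DATE(sale_ts)        -- prune scans by sale date
CLUSTER BY store_id, sku_id       -- co-locate rows for common filters
"""

ddl_mv = """
CREATE MATERIALIZED VIEW IF NOT EXISTS curated.mv_daily_store_sales AS
SELECT store_id, DATE(sale_ts) AS sale_date, SUM(net_amount) AS net_sales
FROM curated.fct_sales
GROUP BY store_id, sale_date
"""

for ddl in (ddl_fact, ddl_mv):
    client.query(ddl).result()  # block until each DDL job completes
```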
Posted 1 month ago
7 - 10 years
8 - 14 Lacs
Ahmedabad
Work from Office
We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem. Key Responsibilities : - Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration. - Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows. - Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms. - SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently. - Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs. - Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations. Skills and Qualifications : - Proven experience as a Data Engineer or similar role. - Strong knowledge of Oracle to BigQuery data warehouse migration and modernization. - Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem. - In-depth knowledge of Oracle DB and PL/SQL. - Excellent SQL writing skills. - Strong analytical and problem-solving abilities. - Ability to work collaboratively with cross-functional teams. - Excellent communication and interpersonal skills. Preferred Qualifications : - Experience with other data management tools and technologies. - Knowledge of cloud-based data solutions. - Certification in data engineering or related fields.
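As a hedged sketch of the Oracle-to-BigQuery migration work described above, the snippet below copies one table from Oracle into BigQuery using python-oracledb, pandas, and the google-cloud-bigquery client. Connection details, table names, and the write disposition are illustrative assumptions; a real migration would add chunking, type mapping, validation, and orchestration.

```python
# Illustrative one-table copy from Oracle to BigQuery; not a full migration.
import oracledb
import pandas as pd
from google.cloud import bigquery

# Hypothetical Oracle connection and source query.
ora = oracledb.connect(user="etl_user", password="***", dsn="orahost/ORCLPDB1")
df = pd.read_sql("SELECT order_id, customer_id, order_ts, amount FROM sales.orders", ora)

# Load the dataframe into a hypothetical staging table in BigQuery.
bq = bigquery.Client(project="my-dw-project")
job = bq.load_table_from_dataframe(
    df,
    "my-dw-project.staging.orders",
    job_config=bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE"),
)
job.result()  # wait for the load job to finish
print(f"Loaded {job.output_rows} rows")
```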
Posted 1 month ago
7 - 10 years
8 - 14 Lacs
Kolkata
Work from Office
We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem. Key Responsibilities : - Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration. - Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows. - Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms. - SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently. - Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs. - Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations. Skills and Qualifications : - Proven experience as a Data Engineer or similar role. - Strong knowledge of Oracle to BigQuery data warehouse migration and modernization. - Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem. - In-depth knowledge of Oracle DB and PL/SQL. - Excellent SQL writing skills. - Strong analytical and problem-solving abilities. - Ability to work collaboratively with cross-functional teams. - Excellent communication and interpersonal skills. Preferred Qualifications : - Experience with other data management tools and technologies. - Knowledge of cloud-based data solutions. - Certification in data engineering or related fields.
Posted 1 month ago
7 - 10 years
8 - 14 Lacs
Jaipur
Work from Office
We are looking for a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, with specific expertise in Oracle to BigQuery data warehouse migration and modernization. This role requires proficiency in various data engineering tools and technologies, including BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem. Key Responsibilities : - Oracle to BigQuery Migration: Lead the migration and modernization of data warehouses from Oracle to BigQuery, ensuring seamless data transfer and integration. - Data Engineering: Utilize BigQuery, DataProc, GCS, PySpark, Airflow, and Hadoop ecosystem to design, develop, and maintain scalable data pipelines and workflows. - Data Management: Ensure data integrity, accuracy, and consistency across various systems and platforms. - SQL Writing: Write and optimize complex SQL queries to extract, transform, and load data efficiently. - Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver solutions that meet business needs. - Performance Optimization: Monitor and optimize data processing performance to ensure efficient and reliable data operations. Skills and Qualifications : - Proven experience as a Data Engineer or similar role. - Strong knowledge of Oracle to BigQuery data warehouse migration and modernization. - Proficiency in BigQuery, DataProc, GCS, PySpark, Airflow, and the Hadoop ecosystem. - In-depth knowledge of Oracle DB and PL/SQL. - Excellent SQL writing skills. - Strong analytical and problem-solving abilities. - Ability to work collaboratively with cross-functional teams. - Excellent communication and interpersonal skills. Preferred Qualifications : - Experience with other data management tools and technologies. - Knowledge of cloud-based data solutions. - Certification in data engineering or related fields.
Posted 1 month ago
4 - 9 years
5 - 12 Lacs
Pune
Work from Office
Night Shift: 9:00 PM to 6:00 AM Hybrid Mode: 3 days WFO & 2 days WFH Job Overview We are looking for a savvy Data Engineer to manage in-progress and upcoming data infrastructure projects. The candidate will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder using Python and a data wrangler who enjoys optimizing data systems and building them from the ground up. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. Responsibilities for Data Engineer * Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional / non-functional business requirements using Python and SQL / AWS / Snowflake. * Identify, design, and implement internal process improvements through automating manual processes using Python, optimizing data delivery, re-designing infrastructure for greater scalability, etc. * Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL / AWS / Snowflake technologies. * Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. * Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. * Keep our data separated and secure across national boundaries through multiple data centers and AWS regions. * Work with data and analytics experts to strive for greater functionality in our data systems. Qualifications for Data Engineer * Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. * Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. * Strong analytic skills related to working with unstructured datasets. * Experience building processes supporting data transformation, data structures, metadata, dependency and workload management. * A successful history of manipulating, processing and extracting value from large disconnected datasets. Desired Skillset: * 2+ years of experience in a Python scripting and data-specific role, with a Bachelor's degree. * Experience with data processing and cleaning libraries, e.g. Pandas, NumPy, etc., web scraping / web crawling for automation of processes, and APIs and how they work. * Debugging code when it fails and finding the solution. Should have basic knowledge of SQL Server job activity monitoring and of Snowflake. * Experience with relational SQL and NoSQL databases, including PostgreSQL and Cassandra. * Experience with most or all of the following cloud services: AWS, Azure, Snowflake, Google. * Strong project management and organizational skills. * Experience supporting and working with cross-functional teams in a dynamic environment.
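For the pipeline-building and data-wrangling side of this role, here is a minimal, self-contained pandas sketch of an extract-clean-validate step; the file paths, column names, and validation rule are assumptions for illustration only.

```python
# Minimal extract-clean-validate step with pandas; paths and columns are illustrative.
# Writing Parquet assumes pyarrow (or fastparquet) is installed.
import pandas as pd

def clean_orders(src_csv: str, out_parquet: str) -> pd.DataFrame:
    df = pd.read_csv(src_csv, parse_dates=["order_ts"])
    df = df.drop_duplicates(subset=["order_id"])                  # de-duplicate on the key
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")   # coerce bad values to NaN
    df = df.dropna(subset=["order_id", "order_ts", "amount"])     # drop rows missing required fields
    assert (df["amount"] >= 0).all(), "negative amounts found"    # simple data-quality gate
    df.to_parquet(out_parquet, index=False)                       # hand off to the next stage
    return df

if __name__ == "__main__":
    clean_orders("raw/orders.csv", "curated/orders.parquet")
```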
Posted 1 month ago
3 - 7 years
14 - 19 Lacs
Hyderabad
Work from Office
We are looking for a highly skilled and experienced Data Engineer with 3 to 7 years of experience to join our team in Bengaluru. The ideal candidate will have a strong background in building data pipelines using Google Cloud Platform (GCP) and hands-on experience with BigQuery, GCP SDK, and API scripting. ### Roles and Responsibility Design, develop, and implement data pipelines using BigQuery and GCP SDK. Build and orchestrate data fusion pipelines for data migration from various databases. Develop scripts in Python and write technical architecture documentation. Implement monitoring architecture and test GCP services. Participate in Agile and DevOps concepts, including CI/CD pipelines and test-driven frameworks. Collaborate with cross-functional teams to deliver high-quality solutions. ### Job Requirements Bachelor's degree in Computer Science, Information Technology, or related field. Minimum 3 years of experience in data engineering, preferably with GCP. Strong understanding of application architecture and programming languages. Experience with AWS and/or Azure and/or GCP, along with a proven track record of building complex infrastructure programmatically. Strong scripting and programming skills in Python and Linux Shell. Experience with Agile and DevOps concepts, as well as CI/CD pipelines and test-driven frameworks. Certification in Google Professional Cloud Data Engineer is desirable. Proactive team player with good English communication skills. Good understanding of Application Architecture. Experience working on CMMI / Agile / SAFE methodologies. Experience working with AWS and/or Azure and/or GCP and a proven track record of building complex infrastructure programmatically with IaC tooling or vendor libraries. Strong communication and written skills. Experience creating technical architecture documentation. Experience in Linux OS internals, administration, and performance optimization.
Posted 1 month ago
3 - 8 years
15 - 30 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 15 to 30 LPA Exp: 3 to 8 years Location: Gurgaon/Bangalore/Pune/Chennai Notice: Immediate to 30 days. Key Responsibilities & Skillsets: Common Skillsets: 3+ years of experience in analytics, PySpark, Python, Spark, SQL and associated data engineering jobs. Must have experience with managing and transforming big data sets using PySpark, Spark (Scala), NumPy and Pandas. Excellent communication & presentation skills. Experience in managing Python codebases and collaborating with customers on model evolution. Good knowledge of database management and Hadoop/Spark, SQL, HIVE, Python (expertise). Superior analytical and problem-solving skills. Should be able to work on a problem independently and prepare client-ready deliverables with minimal or no supervision. Good communication skills for client interaction. Data Management Skillsets: Ability to understand data models and identify ETL optimization opportunities. Exposure to ETL tools is preferred. Should have a strong grasp of advanced SQL functionalities (joins, nested queries, and procedures). Strong ability to translate functional specifications / requirements to technical requirements.
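As a small illustration of the PySpark transformation work this posting lists, the sketch below deduplicates a large event table and aggregates it by day; the input path, schema, and business rule are hypothetical.

```python
# Illustrative PySpark job: keep the latest record per key, then aggregate by day.
# Input/output paths and column names are assumptions, not details from the posting.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedupe_and_aggregate").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")  # hypothetical source

# Keep only the most recent row per event_id.
latest = (
    events.withColumn(
        "rn",
        F.row_number().over(Window.partitionBy("event_id").orderBy(F.col("event_ts").desc())),
    )
    .filter(F.col("rn") == 1)
    .drop("rn")
)

# Daily counts, written back as a curated dataset.
daily = latest.groupBy(F.to_date("event_ts").alias("event_date")).agg(F.count("*").alias("events"))
daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_events/")
```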
Posted 1 month ago
11 - 12 years
25 - 30 Lacs
Hyderabad
Work from Office
Job Description Lead Data Engineer Position: Lead Data Engineer Location: Hyderabad (Work from Office Mandatory) Experience: 10+ years overall | 8+ years relevant in Data Engineering Notice Period: Immediate to 30 days. About the Role We are looking for a strategic and hands-on Lead Data Engineer to architect and lead cutting-edge data platforms that empower business intelligence, analytics, and AI initiatives. This role demands a deep understanding of cloud-based big data ecosystems, excellent leadership skills, and a strong inclination toward driving data quality and governance at scale. You will define the data engineering roadmap, architect scalable data systems, and lead a team responsible for building and optimizing pipelines across structured and unstructured datasets in a secure and compliant environment. Key Responsibilities 1. Technical Strategy & Architecture Define the vision and technical roadmap for enterprise-grade data platforms (Lakehouse, Warehouse, Real-Time Pipelines). Lead evaluation of data platforms and tools, making informed build vs. buy decisions. Design solutions for long-term scalability, cost-efficiency, and performance. 2. Team Leadership Mentor and lead a high-performing data engineering team. Conduct performance reviews, technical coaching, and participate in hiring/onboarding. Instill engineering best practices and a culture of continuous improvement. 3. Platform & Pipeline Engineering Build and maintain data lakes, warehouses, and lakehouses using AWS, Azure, GCP, or Databricks. Architect and optimize data models and schemas tailored for analytics/reporting. Manage large-scale ETL/ELT pipelines for batch and streaming use cases. 4. Data Quality, Governance & Security Enforce data quality controls: automated validation, lineage, anomaly detection. Ensure compliance with data privacy and governance frameworks (GDPR, HIPAA, etc.). Manage metadata and documentation for transparency and discoverability. 5. Cross-Functional Collaboration Partner with Data Scientists, Product Managers, and Business Teams to understand requirements. Translate business needs into scalable data workflows and delivery mechanisms. Support self-service analytics and democratization of data access. 6. Monitoring, Optimization & Troubleshooting Implement monitoring frameworks to ensure data reliability and latency SLAs. Proactively resolve bottlenecks, failures, and optimize system performance. Recommend platform upgrades and automation strategies. 7. Technical Leadership & Community Building Lead code reviews, define development standards, and share reusable components. Promote innovation, experimentation, and cross-team knowledge sharing. Encourage open-source contributions and thought leadership. Required Skills & Experience 10+ years of experience in data engineering or related domains. Expert in PySpark, Python, and SQL . Deep expertise in Apache Spark and other distributed processing frameworks. Hands-on experience with cloud platforms (AWS, Azure, or GCP) and services like S3, EMR, Glue, Databricks, Data Factory . Proficient in data warehouse solutions (e.g., Snowflake, Redshift, BigQuery) and RDBMS like PostgreSQL or SQL Server. Knowledge of orchestration tools (Airflow, Dagster, or cloud-native schedulers). Familiarity with CI/CD tools , Git, and Infrastructure as Code (Terraform, CloudFormation). Strong data modeling and lifecycle management understanding.
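For the batch ETL/ELT orchestration this role covers, here is a minimal Apache Airflow DAG sketch with two dependent tasks; the DAG id, schedule, and task bodies are illustrative placeholders, not the employer's actual pipelines.

```python
# Minimal Airflow 2.x DAG sketch: extract then load, once a day.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw data from source systems")      # placeholder for real extraction logic

def load(**_):
    print("load curated tables into the warehouse")  # placeholder for real load logic

with DAG(
    dag_id="curated_daily_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # load runs only after extract succeeds
```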
Posted 1 month ago
8 - 12 years
25 - 40 Lacs
Hyderabad
Remote
Senior GCP Cloud Administrator Experience: 8 - 12 Years Exp Salary : Competitive Preferred Notice Period : Within 30 Days Shift : 10:00AM to 7:00PM IST Opportunity Type: Remote Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients) Must have skills required : GCP, Identity and Access Management (IAM), BigQuery, SRE, GKE, GCP certification Good to have skills : Terraform, Cloud Composer, Dataproc, Dataflow, AWS Forbes Advisor (One of Uplers' Clients) is Looking for: Senior GCP Cloud Administrator who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description Senior GCP Cloud Administrator Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are looking for an experienced GCP Administrator to join our team. The ideal candidate will have strong hands-on experience with IAM Administration, multi-account management, Big Query administration, performance optimization, monitoring and cost management within Google Cloud Platform (GCP). Responsibilities: Manages and configures roles/permissions in GCP IAM by following the principle of least privileged access Manages Big Query service by way of optimizing slot assignments and SQL Queries, adopting FinOps practices for cost control, troubleshooting and resolution of critical data queries, etc. Collaborate with teams like Data Engineering, Data Warehousing, Cloud Platform Engineering, SRE, etc. for efficient Data management and operational practices in GCP Create automations and monitoring mechanisms for GCP Data-related services, processes and tasks Work with development teams to design the GCP-specific cloud architecture Provisioning and de-provisioning GCP accounts and resources for internal projects. Managing, and operating multiple GCP subscriptions Keep technical documentation up to date Proactively being up to date on GCP announcements, services and developments. Requirements: Must have 5+ years of work experience on provisioning, operating, and maintaining systems in GCP Must have a valid certification of either GCP Associate Cloud Engineer or GCP Professional Cloud Architect. Must have hands-on experience on GCP services such as Identity and Access Management (IAM), BigQuery, Google Kubernetes Engine (GKE), etc. Must be capable to provide support and guidance on GCP operations and services depending upon enterprise needs Must have a working knowledge of docker containers and Kubernetes. Must have strong communication skills and the ability to work both independently and in a collaborative environment. Fast learner, Achiever, sets high personal goals Must be able to work on multiple projects and consistently meet project deadlines Must be willing to work on shift-basis based on project requirements. Good to Have: Experience in Terraform Automation over GCP Infrastructure provisioning Experience in Cloud Composer, Dataproc, Dataflow Storage and Monitoring services Experience in building and supporting any form of data pipeline. 
Multi-Cloud experience with AWS. New-Relic monitoring. Perks: Day off on the 3rd Friday of every month (one long weekend each month) Monthly Wellness Reimbursement Program to promote health well-being Paid paternity and maternity leaves How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! About Our Client: Forbes Advisor is a global platform dedicated to helping consumers make the best financial choices for their individual lives. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 month ago
3 - 5 years
10 - 12 Lacs
Bengaluru
Work from Office
Overview About us We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom’s branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world’s creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, Consulting & Advisory Services. We currently have 2500+ awesome colleagues (in Annalect India) who are committed to solve our clients’ pressing business issues. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together . Responsibilities This is an exciting role and would entail you to Partner with internal and external client in their desire to create ‘best-in-class’ data & analytics to support their business decisions. Be a passionate champion of data-driven marketing and create a data and insight-led culture across teams. Requirement gathering and evaluation of clients’ business situations in order to implement appropriate analytic solutions. Data management and reporting using different tools and techniques like Alteryx. Strong knowledge on the media metrics, custom calculations, and metrics co-relation. Good to have (not mandatory) data visualization using excel Ability to identify and determine key performance indicators for the clients. QA process: Maintain, create and re-view QA plans for deliverables to align with the requirements, identify discrepancies if any and troubleshoot issues. Responsible for maintaining the reporting requirements as per the delivery cadence defined by the client. • Create and maintain project specific documents such as process / quality / learning documents. Able to work successfully with teams, handling multiple projects and meeting client expectations. Qualifications You will be working closely with Our global marketing agency teams. You will also be closely collaborating with Manager and colleagues within the Performance Reporting function. This may be the right role for you if you have Bachelor’s Degree required. 4-6 years' experience in data management and analysis in Media or relevant domain with strong problem-solving ability Good analytical ability and logical reasoning Strong working knowledge of MS Excel and Advanced Excel Strong working knowledge and hands on experience in data visualization and report generation using Power BI is mandatory. Proficiency in PPT and SharePoint Experience in data processing tools like SQL, Python, Alteryx, etc. 
would be beneficial Knowledge of media/advertising is beneficial but not mandatory Strong written and verbal communication Familiarity working with large data sets and creating cohesive stories Understanding of media domain and channels like Display, Search, Social, Competitive Experience of creating tables in database like AWS, Google Big Query etc Knowledge of scripting language like Python or SQL is preferred Familiarity with data platforms like Double Click Campaign Manager, DV360, SA360, MOAT, IAS, Facebook Business Manager, Twitter, Innovid, Sizmek, Kenshoo, Nielsen, Kantar, MediaMath, Prisma, AppNexus
Posted 1 month ago
1 - 4 years
8 - 15 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer (1-4 Years Experience) Location: Bangalore Company: Lenskart About the Role We are looking for a hands-on Data Engineer to help us scale our data infrastructure and platforms. In this role, you'll work closely with engineering, analytics, and product teams to build reliable data pipelines and deliver high-quality datasets for analytics and reporting. If you're passionate about cloud data engineering, writing efficient code in Python, and working with technologies like BigQuery and GCP, this is the perfect role for you. Key Responsibilities 1. Build and maintain scalable ETL/ELT data pipelines using Python and cloud-native tools. 2. Design and optimize data models and queries on Google BigQuery for analytical workloads. 3. Develop, schedule, and monitor workflows using orchestration tools like Apache Airflow or Cloud Composer. 4. Ingest and integrate data from multiple structured and semi-structured sources, including MySQL, MongoDB, APIs, and cloud storage. 5. Ensure data integrity, security, and quality through validation, logging, and monitoring systems. 6. Collaborate with analysts and data consumers to understand requirements and deliver clean, usable datasets. 7. Implement data governance, lineage tracking, and documentation as part of platform hygiene. Must-Have Skills 1. 1-4 years of experience in data engineering or backend development. 2. Strong experience with Google BigQuery and GCP (Google Cloud Platform). 3. Proficiency in Python for scripting, automation, and data manipulation. 4. Solid understanding of SQL and experience with relational databases like MySQL. 5. Experience working with MongoDB and semi-structured data (e.g., JSON, nested formats). 6. Exposure to data warehousing, data modeling, and performance tuning. 7. Familiarity with Git-based version control and CI/CD practices.
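To illustrate the kind of semi-structured ingestion this posting mentions (MongoDB documents into BigQuery), here is a hedged sketch using pymongo, pandas.json_normalize, and the BigQuery client; the connection URI, database, collection, and field names are hypothetical.

```python
# Illustrative MongoDB -> BigQuery ingestion of semi-structured documents.
# URI, database/collection, target table, and field names are assumptions.
import pandas as pd
from pymongo import MongoClient
from google.cloud import bigquery

mongo = MongoClient("mongodb://localhost:27017")            # hypothetical URI
docs = list(mongo["shop"]["orders"].find({}, {"_id": 0}))   # exclude the ObjectId field

df = pd.json_normalize(docs, sep="_")                       # flatten nested JSON into columns
df = df.drop_duplicates(subset=["order_id"])                # basic quality step on the key

bq = bigquery.Client()
bq.load_table_from_dataframe(df, "analytics.staging_orders").result()  # hypothetical table
```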
Posted 1 month ago
3 - 8 years
15 - 30 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 15 to 30 LPA Exp: 3 to 8 years Location: Gurgaon/Bangalore/Pune/Chennai Notice: Immediate to 30 days. Key Responsibilities & Skillsets: Common Skillsets: 3+ years of experience in analytics, PySpark, Python, Spark, SQL and associated data engineering jobs. Must have experience with managing and transforming big data sets using PySpark, Spark (Scala), NumPy and Pandas. Excellent communication & presentation skills. Experience in managing Python codebases and collaborating with customers on model evolution. Good knowledge of database management and Hadoop/Spark, SQL, HIVE, Python (expertise). Superior analytical and problem-solving skills. Should be able to work on a problem independently and prepare client-ready deliverables with minimal or no supervision. Good communication skills for client interaction. Data Management Skillsets: Ability to understand data models and identify ETL optimization opportunities. Exposure to ETL tools is preferred. Should have a strong grasp of advanced SQL functionalities (joins, nested queries, and procedures). Strong ability to translate functional specifications / requirements to technical requirements.
Posted 1 month ago
3 - 8 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Google BigQuery Good to have skills : React.js, Cloud Network Operations Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary :As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality. Roles & Responsibilities: Expected to perform independently and become an SME. Required active participation/contribution in team discussions. Contribute in providing solutions to work-related problems. Develop and implement scalable applications using Google BigQuery. Collaborate with cross-functional teams to ensure application functionality. Conduct code reviews and provide technical guidance to junior developers. Stay updated on industry trends and best practices in application development. Troubleshoot and resolve application issues in a timely manner. Professional & Technical Skills: (Project specific) BQ, BQ Geospatial, Python, Dataflow, Composer , Secondary skill -Geospatial Domain Knowledge" Must To Have Skills: Proficiency in Google BigQuery. Strong understanding of statistical analysis and machine learning algorithms. Experience with data visualization tools such as Tableau or Power BI. Hands-on implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity. Additional Information: The candidate should have a minimum of 3 years of experience in Google BigQuery. This position is based at our Bengaluru office. A 15 years full-time education is required. Qualification 15 years full time education
Posted 1 month ago
3 - 6 years
12 - 14 Lacs
Hyderabad
Work from Office
Overview Lead – Biddable (Reporting) This exciting role of a Lead – Biddable (Reporting) requires you to creatively manage Biddable media campaigns for our global brands. Your expertise of DSPs and knowledge of the Digital Market Cycle would make you a great fit for this position. This is a great opportunity to work closely with the Top Global brands and own large and reputed accounts. About us We are an integral part of Annalect Global and Omnicom Group, one of the largest media and advertising agency holding companies in the world. Omnicom’s branded networks and numerous specialty firms provide advertising, strategic media planning and buying, digital and interactive marketing, direct and promotional marketing, public relations, and other specialty communications services. Our agency brands are consistently recognized as being among the world’s creative best. Annalect India plays a key role for our group companies and global agencies by providing stellar products and services in areas of Creative Services, Technology, Marketing Science (data & analytics), Market Research, Business Support Services, Media Services, Consulting & Advisory Services. We are growing rapidly and looking for talented professionals like you to be part of this journey. Let us build this, together > Responsibilities Work with clients and stakeholders on gathering requirements around reporting. Design solutions and mock-ups of reports based on requirements that define every detail. Develop reporting based on marketing data in Excel, Power BI Collaborate with other members of the reporting design team and data & automation team to build and manage complex data lakes that support reporting. Extract reports from the platforms like, DV360, LinkedIn, TikTok, Reddit, Snapchat, Meta, Twitter Organize the reports into easily readable tables Offer comprehensive insights based on the reporting data. Assess data against previous benchmarks and provide judgments/recommendations Direct communication with agencies for projects Managed communication for end clients Preferred: Experience in digital marketing such as paid-search, paid-social, or programmatic display is extremely helpful. Qualifications A full-time graduate degree (Mandatory) A proven history of 6+ years as a marketing reporting Analyst or experience in a similar role with opportunities in this. A solid understanding of paid digital marketing functions is essential to this job. Strong experience working with data such as Excel (Vlookups, SUMIFS, advanced functions are a must). Experience in working with web-based reporting platforms such as Looker is preferred. Strong communication skills with a strong preference on having collaborated with teams in the United States or United Kingdom including gathering requirements or collaborating with teams on solution design.
Posted 1 month ago
5 - 7 years
27 - 30 Lacs
Pune
Work from Office
Must-Have Skills: 5+ years of experience as a Big Data Engineer 3+ years of experience with Apache Spark, Hive, HDFS, and Beam (optional) Strong proficiency in SQL and either Scala or Python Experience with ETL processes and working with structured and unstructured data 2+ years of experience with Cloud Platforms (GCP, AWS, or Azure) Hands-on experience with software build management tools like Maven or Gradle Experience in automation, performance tuning, and optimizing data pipelines Familiarity with CI/CD, serverless computing, and infrastructure-as-code practices Good-to-Have Skills: Experience with Google Cloud Services (BigQuery, Dataproc, Dataflow, Composer, DataStream) Strong knowledge of data pipeline development and optimization Familiarity with source control tools (SVN/Git, GitHub) Experience working in Agile environments (Scrum, XP, etc.) Knowledge of relational databases (SQL Server, Oracle, MySQL) Experience with Atlassian tools (JIRA, Confluence, GitHub) Key Responsibilities: Extract, transform, and load (ETL) data from multiple sources using Big Data technologies Develop, enhance, and support data ingestion jobs using GCP services like Apache Spark, Dataproc, Dataflow, BigQuery, and Airflow Work closely with senior engineers and cross-functional teams to improve data accessibility Automate manual processes, optimize data pipelines, and enhance infrastructure for scalability Modify data extraction pipelines to follow standardized, reusable approaches Optimize query performance and data access techniques in collaboration with senior engineers Follow modern software development practices, including microservices, CI/CD, and infrastructure-as-code Participate in Agile development teams, ensuring best practices for software engineering and data management Preferred Qualifications: Bachelor's degree in Computer Science, Systems Engineering, or a related field Self-starter with strong problem-solving skills and adaptability to shifting priorities Cloud certifications (GCP, AWS, or Azure) are a plus Skills GCP Services, ,Dataproc,DataFlow.
Posted 1 month ago
5 - 8 years
15 - 25 Lacs
Pune
Hybrid
Role & responsibilities Data Pipeline Development: Design, develop, and maintain data pipelines utilizing Google Cloud Platform (GCP) services like Dataflow, Dataproc, and Pub/Sub. Data Ingestion & Transformation: Build and implement data ingestion and transformation processes using tools such as Apache Beam and Apache Spark. Data Storage Management: Optimize and manage data storage solutions on GCP, including BigQuery, Cloud Storage, and Cloud SQL. Security Implementation: Implement data security protocols and access controls with GCP's Identity and Access Management (IAM) and Cloud Security Command Center. System Monitoring & Troubleshooting: Monitor and troubleshoot data pipelines and storage solutions using GCP's Stackdriver and Cloud Monitoring tools. Generative AI Systems: Develop and maintain scalable systems for deploying and operating generative AI models, ensuring efficient use of computational resources. Gen AI Capability Building: Build generative AI capabilities among engineers, covering areas such as knowledge engineering, prompt engineering, and platform engineering. Knowledge Engineering: Gather and structure domain-specific knowledge to be utilized by large language models (LLMs) effectively. Prompt Engineering: Design effective prompts to guide generative AI models, ensuring relevant, accurate, and creative text output. Collaboration: Work with data experts, analysts, and product teams to understand data requirements and deliver tailored solutions. Automation: Automate data processing tasks using scripting languages such as Python. Best Practices: Participate in code reviews and contribute to establishing best practices for data engineering within GCP. Continuous Learning: Stay current with GCP service innovations and advancements. Core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.). Skills and Experience: Experience: 5+ years of experience in Data Engineering or similar roles. Proficiency in GCP: Expertise in designing, developing, and deploying data pipelines, with strong knowledge of GCP core data services (GCS, BigQuery, Cloud Storage, Dataflow, etc.). Generative AI & LLMs: Hands-on experience with Generative AI models and large language models (LLMs) such as GPT-4, LLAMA3, and Gemini 1.5, with the ability to integrate these models into data pipelines and processes. Experience in Webscraping Technical Skills: Strong proficiency in Python and SQL for data manipulation and querying. Experience with distributed data processing frameworks like Apache Beam or Apache Spark is a plus. Security Knowledge: Familiarity with data security and access control best practices. • Collaboration: Excellent communication and problem-solving skills, with a demonstrated ability to collaborate across teams. Project Management: Ability to work independently, manage multiple projects, and meet deadlines. Preferred Knowledge: Familiarity with Sustainable Finance, ESG Risk, CSRD, Regulatory Reporting, cloud infrastructure, and data governance best practices. Bonus Skills: Knowledge of Terraform is a plus. Education: Degree: Bachelors or masters degree in computer science, Information Technology, or a related field. Experience: 3-5 years of hands-on experience in data engineering. Certification: Google Professional Data Engineer
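As a small sketch of the Dataflow-style pipeline development described above, the Apache Beam (Python SDK) example below counts events per day from newline-delimited JSON; the file paths, field name, and runner choice are illustrative assumptions.

```python
# Illustrative Apache Beam batch pipeline; runs on the local DirectRunner by default
# and could be submitted to Dataflow with the appropriate pipeline options.
import json
import apache_beam as beam

def to_day_record(line):
    rec = json.loads(line)
    return (rec["event_date"], 1)  # hypothetical field name

with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.json")  # hypothetical path
        | "KeyByDay" >> beam.Map(to_day_record)
        | "CountPerDay" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "Write" >> beam.io.WriteToText("gs://example-bucket/curated/daily_counts")
    )
```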
Posted 1 month ago
7 - 9 years
25 - 27 Lacs
Pune
Work from Office
We are seeking a highly experienced Senior Java & GCP Engineer to lead the design, development, and deployment of innovative batch and data processing solutions. This role requires strong technical expertise, leadership abilities, and hands-on experience with Java and Google Cloud Platform (GCP). The ideal candidate will collaborate with cross-functional teams, mentor developers, and ensure the delivery of high-quality, scalable solutions. Key Responsibilities: Lead the design, development, and deployment of batch and data processing solutions. Provide technical direction for Java and GCP-based implementations. Mentor and guide a team of developers and engineers. Work with cross-functional teams to translate business requirements into technical solutions. Implement robust testing strategies and optimize performance. Maintain technical documentation and ensure compliance with industry standards. Required Skills & Experience: Bachelors or Masters degree in Computer Science, Software Engineering, or a related field. Expertise in Java and its ecosystems. Extensive experience with GCP (Google Kubernetes Engine, Cloud Storage, Dataflow, BigQuery). 7+ years of experience in software development, with a focus on batch processing and data-driven applications. Strong knowledge of secure data handling (PII/PHI). Proven ability to write clean, defect-free code. 3+ years of leadership experience, mentoring and guiding teams. Excellent communication and teamwork skills. This is a fantastic opportunity for a technical leader who is passionate about scalable cloud-based data solutions and eager to drive innovation in a collaborative environment. Skills Java,Microservices,GCP.
Posted 1 month ago
5 - 8 years
25 - 27 Lacs
Pune
Work from Office
We are seeking a highly experienced Senior Java & GCP Engineer to lead the design, development, and deployment of innovative batch and data processing solutions. This role requires strong technical expertise, leadership abilities, and hands-on experience with Java and Google Cloud Platform (GCP). The ideal candidate will collaborate with cross-functional teams, mentor developers, and ensure the delivery of high-quality, scalable solutions. Key Responsibilities: Lead the design, development, and deployment of batch and data processing solutions. Provide technical direction for Java and GCP-based implementations. Mentor and guide a team of developers and engineers. Work with cross-functional teams to translate business requirements into technical solutions. Implement robust testing strategies and optimize performance. Maintain technical documentation and ensure compliance with industry standards. Required Skills & Experience: Bachelors or Masters degree in Computer Science, Software Engineering, or a related field. Expertise in Java and its ecosystems. Extensive experience with GCP (Google Kubernetes Engine, Cloud Storage, Dataflow, BigQuery). 7+ years of experience in software development, with a focus on batch processing and data-driven applications. Strong knowledge of secure data handling (PII/PHI). Proven ability to write clean, defect-free code. 3+ years of leadership experience, mentoring and guiding teams. Excellent communication and teamwork skills. This is a fantastic opportunity for a technical leader who is passionate about scalable cloud-based data solutions and eager to drive innovation in a collaborative environment. Skills Java,Microservices,GCP.
Posted 1 month ago
5 - 8 years
25 - 27 Lacs
Pune
Work from Office
We are seeking a highly experienced Senior Java & GCP Engineer to lead the design, development, and deployment of innovative batch and data processing solutions. This role requires strong technical expertise, leadership abilities, and hands-on experience with Java and Google Cloud Platform (GCP). The ideal candidate will collaborate with cross-functional teams, mentor developers, and ensure the delivery of high-quality, scalable solutions. Key Responsibilities: Lead the design, development, and deployment of batch and data processing solutions. Provide technical direction for Java and GCP-based implementations. Mentor and guide a team of developers and engineers. Work with cross-functional teams to translate business requirements into technical solutions. Implement robust testing strategies and optimize performance. Maintain technical documentation and ensure compliance with industry standards. Required Skills & Experience: Bachelors or Masters degree in Computer Science, Software Engineering, or a related field. Expertise in Java and its ecosystems. Extensive experience with GCP (Google Kubernetes Engine, Cloud Storage, Dataflow, BigQuery). 7+ years of experience in software development, with a focus on batch processing and data-driven applications. Strong knowledge of secure data handling (PII/PHI). Proven ability to write clean, defect-free code. 3+ years of leadership experience, mentoring and guiding teams. Excellent communication and teamwork skills. This is a fantastic opportunity for a technical leader who is passionate about scalable cloud-based data solutions and eager to drive innovation in a collaborative environment. Skills Java,Microservices,GCP.
Posted 1 month ago
7 - 9 years
25 - 27 Lacs
Pune
Work from Office
We are seeking a highly experienced Senior Java & GCP Engineer to lead the design, development, and deployment of innovative batch and data processing solutions. This role requires strong technical expertise, leadership abilities, and hands-on experience with Java and Google Cloud Platform (GCP). The ideal candidate will collaborate with cross-functional teams, mentor developers, and ensure the delivery of high-quality, scalable solutions. Key Responsibilities: Lead the design, development, and deployment of batch and data processing solutions. Provide technical direction for Java and GCP-based implementations. Mentor and guide a team of developers and engineers. Work with cross-functional teams to translate business requirements into technical solutions. Implement robust testing strategies and optimize performance. Maintain technical documentation and ensure compliance with industry standards. Required Skills & Experience: Bachelors or Masters degree in Computer Science, Software Engineering, or a related field. Expertise in Java and its ecosystems. Extensive experience with GCP (Google Kubernetes Engine, Cloud Storage, Dataflow, BigQuery). 7+ years of experience in software development, with a focus on batch processing and data-driven applications. Strong knowledge of secure data handling (PII/PHI). Proven ability to write clean, defect-free code. 3+ years of leadership experience, mentoring and guiding teams. Excellent communication and teamwork skills. This is a fantastic opportunity for a technical leader who is passionate about scalable cloud-based data solutions and eager to drive innovation in a collaborative environment. Skills Java,Microservices,GCP.
Posted 1 month ago
4 - 7 years
10 - 19 Lacs
Indore, Gurugram, Bengaluru
Work from Office
We need GCP engineers for capacity building; - The candidate should have extensive production experience (1-2 Years ) in GCP, Other cloud experience would be a strong bonus. - Strong background in Data engineering 2-3 Years of exp in Big Data technologies including, Hadoop, NoSQL, Spark, Kafka etc. - Exposure to enterprise application development is a must Roles and Responsibilities 4-7 years of IT experience range is preferred. Able to effectively use GCP managed services e.g. Dataproc, Dataflow, pub/sub, Cloud functions, Big Query, GCS - At least 4 of these Services. Good to have knowledge on Cloud Composer, Cloud SQL, Big Table, Cloud Function. Strong experience in Big Data technologies – Hadoop, Sqoop, Hive and Spark including DevOPs. Good hands on expertise on either Python or Java programming. Good Understanding of GCP core services like Google cloud storage, Google compute engine, Cloud SQL, Cloud IAM. Good to have knowledge on GCP services like App engine, GKE, Cloud Run, Cloud Built, Anthos. Ability to drive the deployment of the customers’ workloads into GCP and provide guidance, cloud adoption model, service integrations, appropriate recommendations to overcome blockers and technical road-maps for GCP cloud implementations. Experience with technical solutions based on industry standards using GCP - IaaS, PaaS and SaaS capabilities. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies. Act as a subject-matter expert OR developer around GCP and become a trusted advisor to multiple teams. Technical ability to become certified in required GCP technical certifications.
Posted 1 month ago
5 - 7 years
0 - 0 Lacs
Kolkata
Work from Office
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks and DataProc, with coding expertise in Python, PySpark and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions. Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance and performance using design patterns and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions including relational databases, NoSQL databases and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies and big data tools. Influence and improve customer satisfaction through effective data solutions. Measures of Outcomes: Adherence to engineering processes and standards; adherence to schedule / timelines; adherence to SLAs where applicable; number of defects post delivery; number of non-compliance issues; reduction of recurrence of known defects; quick turnaround of production bugs; completion of applicable technical/domain certifications; completion of all mandatory training requirements; efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times); average time to detect, respond to and resolve pipeline failures or data issues; number of data security incidents or compliance breaches. Outputs Expected: Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates and checklists. Review code for team members and peers. Documentation: Create and review templates, checklists, guidelines and standards for design, processes and development. Create and review deliverable documents including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases and results. Configuration: Define and govern the configuration management plan. Ensure compliance within the team. Testing: Review and create unit test cases, scenarios and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed. Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise. Project Management: Manage the delivery of modules effectively. Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality. Estimation: Create and provide input for effort and size estimation for projects. Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review reusable documents created by the team. Release Management: Execute and monitor the release process to ensure smooth transitions. Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD) and system architecture for applications, business components and data models. Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations. Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives. Certifications: Obtain relevant domain and technology certifications to stay competitive and informed. Skill Examples: Proficiency in SQL, Python or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components. Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP Dataproc/Dataflow, Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering. Additional Comments: Required Skills & Qualifications: - A degree (preferably an advanced degree) in Computer Science, Engineering or a related field. - Senior developer having 8+ years of hands-on development experience in Azure using ASB and ADF: extensive experience in designing, developing, and maintaining data solutions/pipelines in the Azure ecosystem, including Azure Service Bus and ADF. - Familiarity with MongoDB and Python is an added advantage. Required Skills: Azure Data Factory, Azure Service Bus, Azure, MongoDB
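Since the skill and knowledge examples above call out SQL windowing functions, here is a small hedged example of a common deduplication pattern using ROW_NUMBER(), written as BigQuery-flavoured SQL run from Python; the dataset, table, and column names are placeholders.

```python
# Illustrative windowing-function query: keep the most recent row per key.
# Dataset, table, and column names are placeholders, not details from the posting.
from google.cloud import bigquery

sql = """
SELECT * EXCEPT(rn)
FROM (
  SELECT
    t.*,
    ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY updated_at DESC) AS rn
  FROM `analytics.customers_raw` AS t
)
WHERE rn = 1
"""

for row in bigquery.Client().query(sql).result():
    print(dict(row))  # each row is the latest version of one customer record
```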
Posted 1 month ago
10 - 15 years
30 - 35 Lacs
Noida
Remote
SR. DATA MODELER FULL-TIME ROLE REMOTE OR ONSITE Job Summary: We are seeking an experienced Data Modeler to support the Enterprise Data Platform (EDP) initiative, focusing on building and optimizing curated data assets on Google BigQuery. This role requires expertise in data modeling, strong knowledge of retail data, and an ability to collaborate with data engineers, business analysts, and architects to create scalable and high-performing data structures. Required Qualifications: 5+ years of experience in data modeling and architecture in cloud data platforms (BigQuery preferred). Expertise in dimensional modeling (Kimball), data vault, and normalization/denormalization techniques. Strong SQL skills, with hands-on experience in BigQuery performance tuning (partitioning, clustering, query optimization). Understanding of retail data models (e.g., sales, inventory, pricing, supply chain, customer analytics). Experience working with data engineering teams to implement models in ETL/ELT pipelines. Familiarity with data governance, metadata management, and data cataloging. Excellent communication skills and ability to translate business needs into structured data models. Key Responsibilities: 1. Data Modeling & Curated Layer Design Design logical, conceptual, and physical data models for the EDPs curated layer in BigQuery. Develop fact and dimension tables, ensuring adherence to dimensional modeling best practices (Kimball methodology). Optimize data models for performance, scalability, and query efficiency in a cloud-native environment. Work closely with data engineers to translate models into efficient BigQuery implementations (partitioning, clustering, materialized views). 2. Data Standardization & Governance Define and maintain data definitions, relationships, and business rules for curated assets. Ensure data integrity, consistency, and governance across datasets. Work with Data Governance teams to align models with enterprise data standards and metadata management policies. 3. Collaboration with Business & Technical Teams Engage with business analysts and product teams to understand data needs, ensuring models align with business requirements. Partner with data engineers and architects to implement best practices for data ingestion and transformation. Support BI & analytics teams by ensuring curated models are optimized for downstream consumption (e.g., Looker, Tableau, Power BI, AI/ML models, APIs). Please share the following details along with the most updated resume to geeta.negi@compunnel.com if you are interested in the opportunity: Total Experience Relevant experience Current CTC Expected CTC Notice Period (Last working day if you are serving the notice period) Current Location SKILL 1 RATING OUT OF 5 SKILL 2 RATING OUT OF 5 SKILL 3 RATING OUT OF 5 (Mention the skill)
Posted 1 month ago
3 - 5 years
8 - 10 Lacs
Mumbai
Work from Office
.Net Full Stack Lead Job Description Job Summary: This position provides leadership in full systems life cycle management (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.) to ensure delivery is on time and within budget. He/She directs component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements and ensure compliance. This position develops and leads AD project activities and integrations. He/She guides teams to ensure effective communication and achievement of objectives. This position researches and supports the integration of emerging technologies. He/She provides knowledge and support for applications development, integration, and maintenance. This position leads junior team members with project-related activities and tasks. He/She guides and influences department and project teams. This position facilitates collaboration with stakeholders. Key Responsibilities: Lead the team in designing, developing, and maintaining applications using the .NET framework. Collaborate with cross-functional teams to define, design, and deploy new features. Develop and ensure the creation of application documents. Define and produce integration builds. Lead maintenance, production support, and development work. Identify bottlenecks and bugs, and devise solutions to mitigate and address these issues. Mentor junior developers and provide technical guidance and support. Conduct code reviews and ensure adherence to coding standards and best practices. Stay updated with the latest industry trends and technologies to ensure continuous improvement. Primary Skills (must have): Strong knowledge of C# and .NET. Proven experience in web development using Angular and ExtJS. Strong hands-on experience with databases - SQL Server (preferred). Experience with RESTful services and APIs. Expertise in frameworks - MVC, Entity Framework, MVVM, and xUnit/NUnit unit-testing frameworks. Proficiency in designing cloud-native apps and migrating/modernizing legacy apps on GCP (preferred) or Azure. Strong hands-on experience with DevOps (CI/CD) implementation and version control using Git. Secondary Skills (good to have): Google Cloud Platform - GKE, Apigee, BigQuery, Spanner, etc. Agile (Scrum). Power BI. Qualifications: Education: Bachelor's or master's degree in Computer Science, Information Technology, or a related field. Experience: Minimum of 3-5 years of experience leading a multi-disciplinary team. Soft Skills: Strong analytical and problem-solving skills, excellent communication, and teamwork abilities. Certifications: GCP Architect, Azure Architect, and CSM certifications are preferred. Employee Type: Permanent
Posted 1 month ago
10 - 14 years
12 - 16 Lacs
Bengaluru
Work from Office
Skill required: Tech for Operations - Artificial Intelligence (AI) Designation: AI/ML Computational Science Assoc Mgr Qualifications: Any Graduation Years of Experience: 10 to 14 years What would you do? You will be part of the Tech for Operations (TFO) team, which acts as a trusted advisor and partner to Accenture Operations. The team provides innovative and secure technologies to help clients build an intelligent operating model, driving exceptional results, and works closely with the sales, offering, and delivery teams to identify and build innovative solutions. Major sub-deals include AHO (Application Hosting Operations), ISMT (Infrastructure Management), and Intelligent Automation. In Artificial Intelligence, you will enhance business results by using AI tools and techniques to perform tasks such as visual perception, speech recognition, decision-making, and translation between languages - tasks that ordinarily require human intelligence. What are we looking for? Artificial Neural Networks (ANNs). Machine Learning. Results orientation. Problem-solving skills. Ability to perform under pressure. Strong analytical skills. Written and verbal communication. Roles and Responsibilities: In this role you are required to analyze and solve moderately complex problems, typically creating new solutions by leveraging and, where needed, adapting existing methods and procedures. The role requires an understanding of the strategic direction set by senior management as it relates to team goals. Primary upward interaction is with your direct supervisor or team leads; you will generally interact with peers and/or management levels at a client and/or within Accenture, and should require minimal guidance when determining methods and procedures on new assignments. Decisions often impact the team in which they reside and occasionally impact other teams. The individual would manage medium-to-small sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture. Please note that this role may require you to work in rotational shifts.
Posted 1 month ago
Demand for BigQuery, Google Cloud's cloud-based data warehouse, is high in the Indian job market. Companies increasingly rely on BigQuery to analyze and manage large datasets, driving the need for skilled professionals in this area.
The average salary range for BigQuery professionals in India varies based on experience level. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
In the field of BigQuery, a typical career progression includes roles such as Junior Developer, Developer, Senior Developer, and Tech Lead, eventually moving into managerial positions such as Data Architect or Data Engineering Manager.
Alongside BigQuery, professionals in this field often benefit from having skills in SQL, data modeling, data visualization tools like Tableau or Power BI, and cloud platforms like Google Cloud Platform or AWS.
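For the SQL skills mentioned above, analytic (window) functions come up constantly in BigQuery work and interviews. The following sketch, with made-up project, table, and column names, ranks each store's SKUs by revenue using the BigQuery Python client:

```python
# Illustrative only: ranks each store's products by revenue using a window function.
from google.cloud import bigquery

client = bigquery.Client()  # assumes credentials and a default project are set up

query = """
SELECT
  store_id,
  sku,
  SUM(net_sales_amount) AS revenue,
  RANK() OVER (PARTITION BY store_id ORDER BY SUM(net_sales_amount) DESC) AS revenue_rank
FROM `my-project.curated.fact_daily_sales`   -- hypothetical table
GROUP BY store_id, sku
QUALIFY revenue_rank <= 10                   -- keep the top 10 SKUs per store
"""

for row in client.query(query).result():
    print(row.store_id, row.sku, row.revenue, row.revenue_rank)
```

QUALIFY filters on window-function results after grouping, which avoids wrapping the query in a subquery just to apply the rank filter.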
As you explore opportunities in the BigQuery job market in India, remember to continuously upskill and stay updated with the latest trends in data analytics and cloud computing. Prepare thoroughly for interviews by practicing common BigQuery concepts and showcase your hands-on experience with the platform. With dedication and perseverance, you can excel in this dynamic field and secure rewarding career opportunities. Good luck!