15.0 - 20.0 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse, Core Banking, PySpark
Good-to-have skills: AWS Big Data
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages.
Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.
Professional & Technical Skills:
- Must-have skills: proficiency in Snowflake Data Warehouse, Core Banking, PySpark.
- Good-to-have skills: experience with AWS Big Data.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with data governance and data quality frameworks.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based in Pune.
- 15 years of full-time education is required.
Qualification: 15 years of full-time education
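For orientation, here is a minimal PySpark sketch of the kind of Snowflake-bound ETL step this role describes. The landing path, table names, and connection options are placeholders, and it assumes the Snowflake Spark connector is installed on the cluster; treat it as an illustrative pattern, not the project's actual pipeline.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("core-banking-etl").getOrCreate()

# Extract: raw core-banking transactions landed as Parquet (hypothetical path).
raw = spark.read.parquet("s3://landing-zone/core_banking/transactions/")

# Transform: de-duplicate and apply basic data-quality filters.
clean = (raw.dropDuplicates(["transaction_id"])
            .withColumn("txn_date", F.to_date("txn_timestamp"))
            .filter(F.col("amount").isNotNull()))

# Load: write to Snowflake via the Spark connector (all options are placeholders).
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "BANKING",
    "sfWarehouse": "ETL_WH",
    "sfUser": "<user>",
    "sfPassword": "<password>",
}
(clean.write.format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "FACT_TRANSACTIONS")
      .mode("append")
      .save())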
Posted 3 weeks ago
6.0 - 9.0 years
9 - 13 Lacs
Mumbai
Work from Office
About the job:
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.
Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.
Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes.
- Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, or SAP BW is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.
Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
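As a rough illustration of the Fabric skills above, here is a minimal PySpark notebook-style sketch of an ingest-transform-load step. The lakehouse paths and table names are invented, and it assumes a Fabric (or any Spark plus Delta) runtime; it is a sketch of the pattern, not a prescribed implementation.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in a Fabric notebook

# Ingest: raw CSV files landed in the lakehouse Files area (hypothetical path).
orders = spark.read.option("header", "true").csv("Files/landing/orders/")

# Transform: type the columns and drop incomplete rows.
curated = (orders.withColumn("order_date", F.to_date("order_date"))
                 .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
                 .dropna(subset=["order_id"]))

# Spark SQL aggregation over the curated view.
curated.createOrReplaceTempView("curated_orders")
daily = spark.sql("""
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM curated_orders
    GROUP BY order_date
""")

# Load: persist as a Delta table for reporting (e.g., Power BI).
daily.write.mode("overwrite").format("delta").saveAsTable("gold_daily_orders")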
Posted 3 weeks ago
3.0 - 5.0 years
5 - 8 Lacs
Chennai
Work from Office
" PLEASE READ THE JOB DESTCRIPTION AND APPLY" Data Engineer Job Description Position Overview Yesterday is history, tomorrow is a mystery, but today is a gift. That's why we call it the present. - Master Oogway Join CustomerLabs' dynamic data team as a Data Engineer and play a pivotal role in transforming raw marketing data into actionable insights that power our digital marketing platform. As a key member of our data infrastructure team, you will design, develop, and maintain robust data pipelines, data warehouses, and analytics platforms that serve as the backbone of our digital marketing product development. Sometimes the hardest choices require the strongest wills. - Thanos (but we promise, our data decisions are much easier! ) In this role, you will collaborate with cross-functional teams including Data Scientists, Product Managers, and Marketing Technology specialists to ensure seamless data flow from various marketing channels, ad platforms, and customer touchpoints to our analytics dashboards and reporting systems. You'll be responsible for building scalable, reliable, and efficient data solutions that can handle high-volume marketing data processing and real-time campaign analytics. What You'll Do: - Design and implement enterprise-grade data pipelines for marketing data ingestion and processing - Build and optimize data warehouses and data lakes to support digital marketing analytics - Ensure data quality, security, and compliance across all marketing data systems - Create data models and schemas that support marketing attribution, customer journey analysis, and campaign performance tracking - Develop monitoring and alerting systems to maintain data pipeline reliability for critical marketing operations - Collaborate with product teams to understand digital marketing requirements and translate them into technical solutions Why This Role Matters: I can do this all day. - Captain America (and you'll want to, because this role is that rewarding!) You'll be the backbone behind the data infrastructure that powers CustomerLabs' digital marketing platform, making marketers' lives easier and better. Your work directly translates to smarter automation, clearer insights, and more successful campaigns - helping marketers focus on what they do best while we handle the complex data heavy lifting. Sometimes you gotta run before you can walk. - Iron Man (and sometimes you gotta build the data pipeline before you can analyze the data! ) Our Philosophy: We believe in the power of data to transform lives, just like the Dragon Warrior transformed the Valley of Peace. Every line of code you write, every pipeline you build, and every insight you enable has the potential to change how marketers work and succeed. We're not just building data systems - we're building the future of digital marketing, one insight at a time. Your story may not have such a happy beginning, but that doesn't make you who you are. It is the rest of your story, who you choose to be. - Soothsayer What Makes You Special: We're looking for someone who embodies the spirit of both Captain America's unwavering dedication and Iron Man's innovative genius. You'll need the patience to build robust systems (like Cap's shield ) and the creativity to solve complex problems (like Tony's suit). Most importantly, you'll have the heart to make a real difference in marketers' lives. Inner peace... Inner peace... Inner peace... - Po (because we know data engineering can be challenging, but we've got your back! 
Key Responsibilities
Data Pipeline Development
- Design, build, and maintain robust, scalable data pipelines and ETL/ELT processes
- Develop data ingestion frameworks to collect data from various sources (databases, APIs, files, streaming sources)
- Implement data transformation and cleaning processes to ensure data quality and consistency
- Optimize data pipeline performance and reliability
Data Infrastructure Management
- Design and implement data warehouse architectures
- Manage and optimize database systems (SQL and NoSQL)
- Implement data lake solutions and data governance frameworks
- Ensure data security, privacy, and compliance with regulatory requirements
Data Modeling and Architecture
- Design and implement data models for analytics and reporting
- Create and maintain data dictionaries and documentation
- Develop data schemas and database structures
- Implement data versioning and lineage tracking
Data Quality, Security, and Compliance
- Ensure data quality, integrity, and consistency across all marketing data systems
- Implement and monitor data security measures to protect sensitive information
- Ensure privacy and compliance with regulatory requirements (e.g., GDPR, CCPA)
- Develop and enforce data governance policies and best practices
Collaboration and Support
- Work closely with Data Scientists, Analysts, and business stakeholders
- Provide technical support for data-related issues and queries
Monitoring and Maintenance
- Implement monitoring and alerting systems for data pipelines
- Perform regular maintenance and optimization of data systems
- Troubleshoot and resolve data pipeline issues
- Conduct performance tuning and capacity planning
Required Qualifications
Experience
- 2+ years of experience in data engineering or related roles
- Proven experience with ETL/ELT pipeline development
- Experience with a cloud data platform (GCP)
- Experience with big data technologies
Technical Skills (see the pipeline sketch after this listing)
- Programming Languages: Python, SQL, Golang (preferred)
- Databases: PostgreSQL, MySQL, Redis
- Big Data Tools: Apache Spark, Apache Kafka, Apache Airflow, dbt, Dataform
- Cloud Platforms: GCP (BigQuery, Dataflow, Cloud Run, Cloud SQL, Cloud Storage, Pub/Sub, App Engine, Compute Engine, etc.)
- Data Warehousing: Google BigQuery
- Data Visualization: Superset, Looker, Metabase, Tableau
- Version Control: Git, GitHub
- Containerization: Docker
Soft Skills
- Strong problem-solving and analytical thinking
- Excellent communication and collaboration skills
- Ability to work independently and in team environments
- Strong attention to detail and data quality
- Continuous learning mindset
Preferred Qualifications
Additional Experience
- Experience with real-time data processing and streaming
- Knowledge of machine learning pipelines and MLOps
- Experience with data governance and data catalog tools
- Familiarity with business intelligence tools (Tableau, Power BI, Looker, etc.)
- Experience using AI-powered tools (such as Cursor, Claude, Copilot, ChatGPT, Gemini, etc.) to accelerate coding, automate tasks, or assist in system design (we believe in running with the machine, not against it)
Interview Process
1. Initial Screening: phone/video call with HR
2. Technical Interview: deep dive into data engineering concepts
3. Final Interview: discussion with senior leadership
Note: This job description is intended to provide a general overview of the position and may be modified based on organizational needs and candidate qualifications.
Our Team Culture
"We are Groot." - We work together, we grow together, we succeed together.
We believe in:
- Innovation First - Like Iron Man, we're always pushing the boundaries of what's possible
- Team Over Individual - Like the Avengers, we're stronger together than apart
- Continuous Learning - Like Po learning Kung Fu, we're always evolving and improving
- Making a Difference - Like Captain America, we fight for what's right (in this case, better marketing!)
Growth Journey
"There is no charge for awesomeness... or attractiveness." - Po
Your journey with us will be like Po's transformation from noodle maker to Dragon Warrior:
- Level 1: Master the basics of our data infrastructure
- Level 2: Build and optimize data pipelines
- Level 3: Lead complex data projects and mentor others
- Level 4: Become a data engineering legend (with your own theme music!)
What We Promise
"I am Iron Man." - We promise you'll feel like a superhero every day!
- Work that matters - Every pipeline you build helps real marketers succeed
- Growth opportunities - Learn new technologies and advance your career
- Supportive team - We've got your back, just like the Avengers
- Work-life balance - Because even superheroes need rest!
Apply: https://customerlabs.freshteam.com/jobs
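As context for the stack named under Technical Skills, here is a minimal Python sketch of one common pattern in it: streaming marketing events from Pub/Sub into BigQuery. The project, subscription, and table IDs are placeholders, the event payload is assumed to be JSON, and GCP credentials are assumed to be configured; it illustrates the pattern rather than CustomerLabs' actual pipeline.

import json
from google.cloud import bigquery, pubsub_v1

PROJECT = "my-project"                       # placeholder
SUBSCRIPTION = "marketing-events-sub"        # placeholder
TABLE = "my-project.marketing.events"        # placeholder; table assumed to exist

bq = bigquery.Client(project=PROJECT)
subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    # Each message is assumed to carry one JSON marketing event (click, pageview, conversion).
    event = json.loads(message.data.decode("utf-8"))
    errors = bq.insert_rows_json(TABLE, [event])   # streaming insert
    if errors:
        message.nack()    # let Pub/Sub redeliver on failure
    else:
        message.ack()

streaming_pull = subscriber.subscribe(sub_path, callback=handle)
streaming_pull.result()   # block and process messages until interrupted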
Posted 3 weeks ago
0.0 - 1.0 years
8 - 10 Lacs
Hyderabad
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow
Programming Languages: Java; scripting languages such as Python, Shell Script, SQL
5+ years of experience in IT application delivery with proven experience in agile development methodologies
1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing with Dataflow)
Posted 3 weeks ago
3.0 - 6.0 years
6 - 8 Lacs
Noida
Work from Office
3+ years of experience as an engineer who has worked in a GCP environment and with its relevant tools/services (BigQuery, Dataproc, Dataflow, Cloud Storage, Terraform, Tekton, Cloud Run, Cloud Scheduler, Astronomer/Airflow, Pub/Sub, Kafka, Cloud Spanner streaming, etc.)
1-2+ years of strong experience in Python development (object-oriented/functional programming, Pandas, PySpark, etc.)
1-2+ years of strong experience in SQL (CTEs, window functions, aggregate functions, etc.)
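Since the posting calls out CTEs and window functions alongside PySpark, here is a small self-contained sketch of that style in Spark SQL; the table and columns are made up for illustration.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-window-demo").getOrCreate()

# Hypothetical orders data registered as a temp view for the example.
spark.createDataFrame(
    [(1, "A", 100.0), (2, "A", 250.0), (3, "B", 80.0)],
    ["order_id", "customer", "amount"],
).createOrReplaceTempView("orders")

# CTE + window function: highest-value order per customer.
top_orders = spark.sql("""
    WITH ranked AS (
        SELECT order_id, customer, amount,
               ROW_NUMBER() OVER (PARTITION BY customer ORDER BY amount DESC) AS rn
        FROM orders
    )
    SELECT order_id, customer, amount
    FROM ranked
    WHERE rn = 1
""")
top_orders.show()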
Posted 3 weeks ago
4.0 - 6.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow
Programming Languages: Java; scripting languages such as Python, Shell Script, SQL
5+ years of experience in IT application delivery with proven experience in agile development methodologies
1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing with Dataflow)
Posted 3 weeks ago
5.0 - 9.0 years
12 - 22 Lacs
Hyderabad, Pune
Hybrid
Experience: 5 to 8 years
Job Location: Pune, Hyderabad
Must Have:
- Strong proficiency in Java and related frameworks, e.g., Apache Flink, Spring, Spring Boot
- Familiarity with containerization tools like Docker and orchestration tools like Kubernetes (GKE)
- Hands-on experience with Google Cloud Platform (GCP) services such as GKE, GCP buckets, and BigQuery
- Good understanding of SQL and experience with relational databases
- Familiarity with RESTful APIs and microservices architecture
- Knowledge of version control systems like Git
- Experience with CI/CD pipelines and tools
- Strong problem-solving and analytical skills
Interested candidates, share your CV at himani.girnar@alikethoughts.com with the details below:
Candidate's name-
Email and Alternate Email ID-
Contact and Alternate Contact no-
Total exp-
Relevant experience-
Current Org-
Notice period-
CCTC-
ECTC-
Current Location-
Preferred Location-
Pancard No-
Posted 3 weeks ago
7.0 - 12.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Your future role
Take on a new challenge and apply your data engineering expertise in a cutting-edge field. You'll work alongside collaborative and innovative teammates. You'll play a key role in enabling data-driven decision-making across the organization by ensuring data availability, quality, and accessibility. Day-to-day, you'll work closely with teams across the business (e.g., Data Scientists, Analysts, and ML Engineers), mentor junior engineers, and contribute to the architecture and design of our data platforms and solutions. You'll specifically take care of designing and developing scalable data pipelines, but also managing and optimizing object storage systems.
We'll look to you for:
- Designing, developing, and maintaining scalable and efficient data pipelines using tools like Apache NiFi and Apache Airflow.
- Creating robust Python scripts for data ingestion, transformation, and validation.
- Managing and optimizing object storage systems such as Amazon S3, Azure Blob, or Google Cloud Storage.
- Collaborating with Data Scientists and Analysts to understand data requirements and deliver production-ready datasets.
- Implementing data quality checks, monitoring, and alerting mechanisms.
- Ensuring data security, governance, and compliance with industry standards.
- Mentoring junior engineers and promoting best practices in data engineering.
All about you
We value passion and attitude over experience. That's why we don't expect you to have every single skill. Instead, we've listed some that we think will help you succeed and grow in this role:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in data engineering or a similar role.
- Strong proficiency in Python and data processing libraries (e.g., Pandas, PySpark).
- Hands-on experience with Apache NiFi for data flow automation.
- Deep understanding of object storage systems and cloud data architectures.
- Proficiency in SQL and experience with both relational and NoSQL databases.
- Familiarity with cloud platforms (AWS, Azure, or GCP).
- Exposure to the Data Science ecosystem, including tools like Jupyter, scikit-learn, TensorFlow, or MLflow.
- Experience working in cross-functional teams with Data Scientists and ML Engineers.
- Cloud certifications or relevant technical certifications are a plus.
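To make the "Python scripts for data ingestion, transformation, and validation" bullet concrete, here is a minimal sketch that validates a file and lands it in object storage. The file path, bucket, and validation rules are invented, and it assumes boto3/AWS credentials and pyarrow are available; a NiFi flow or Airflow DAG would normally orchestrate a step like this.

import io
import boto3
import pandas as pd

RAW_PATH = "data/sensor_readings.csv"    # hypothetical landing file
BUCKET = "curated-data-bucket"           # placeholder S3 bucket

# Ingest
df = pd.read_csv(RAW_PATH)

# Validate: simple data-quality checks before publishing downstream.
assert df["reading_id"].is_unique, "duplicate reading_id values"
assert df["value"].notna().all(), "null readings present"

# Transform: normalise the timestamp column.
df["recorded_at"] = pd.to_datetime(df["recorded_at"], utc=True)

# Load: write curated Parquet to object storage (Amazon S3 here).
buffer = io.BytesIO()
df.to_parquet(buffer, index=False)
boto3.client("s3").put_object(
    Bucket=BUCKET, Key="curated/sensor_readings.parquet", Body=buffer.getvalue()
)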
Posted 3 weeks ago
5.0 - 7.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Required Skills:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Minimum 3 years of hands-on experience with SnapLogic or similar iPaaS tools (e.g., MuleSoft, Dell Boomi, Informatica).
- Strong integration skills with various databases (SQL, NoSQL) and enterprise systems.
- Proficiency in working with REST/SOAP APIs, JSON, XML, and data transformation techniques.
- Experience with cloud platforms (AWS, Azure, or GCP).
- Solid understanding of data flow, ETL/ELT processes, and integration patterns.
- Excellent analytical, problem-solving, and communication skills.
- Exposure to DevOps tools and CI/CD pipelines.
- Experience integrating with enterprise platforms (e.g., Salesforce).
Key Responsibilities:
- Design, develop, and maintain scalable integration pipelines using SnapLogic.
- Integrate diverse systems including relational databases, cloud platforms, SaaS applications, and on-premise systems.
- Collaborate with cross-functional teams to gather requirements and deliver robust integration solutions.
- Monitor, troubleshoot, and optimize SnapLogic pipelines for performance and reliability.
- Ensure data consistency, quality, and security across integrated systems.
- Maintain technical documentation and follow best practices in integration development.
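SnapLogic pipelines are assembled in its designer rather than hand-coded, but the underlying REST/JSON integration pattern the posting asks for looks roughly like this in plain Python; the endpoints and field mapping are invented for illustration.

import requests

SOURCE_API = "https://source.example.com/api/v1/customers"   # hypothetical source system
TARGET_API = "https://target.example.com/api/v1/contacts"    # hypothetical target system

# Extract: pull records from the source system's REST API.
records = requests.get(SOURCE_API, timeout=30).json()

# Transform: map source fields onto the target schema.
payload = [
    {"fullName": f"{r['first_name']} {r['last_name']}", "email": r["email"]}
    for r in records
]

# Load: push the transformed records to the target system and fail loudly on errors.
response = requests.post(TARGET_API, json=payload, timeout=30)
response.raise_for_status()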
Posted 3 weeks ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
In your role, you will be responsible for:
- Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer, etc.
- Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations.
- Ability to analyse data for functional business requirements and to interface directly with customers.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
- Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer.
- Ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
- End-to-end functional knowledge of the data pipeline/transformation implementation the candidate has done; should understand the purpose/KPIs for which the data transformation was done.
Preferred technical and professional experience:
- Experience with AEM core technologies: OSGi Services, Apache Sling, Granite Framework, Java Content Repository API, Java 8+, Localization.
- Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git; knowledge of patterns and good practices to design and develop quality, clean code; knowledge of HTML, CSS, JavaScript, and jQuery.
- Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
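As a small illustration of the Python-plus-BigQuery work this role centres on, here is a minimal query sketch using the official client library; the project, dataset, and SQL are placeholders, and credentials are assumed to be configured in the environment.

from google.cloud import bigquery

client = bigquery.Client()   # assumes GCP credentials are available in the environment

query = """
    SELECT order_date, SUM(amount) AS revenue
    FROM `my-project.sales.orders`      -- placeholder table
    WHERE order_date >= '2024-01-01'
    GROUP BY order_date
    ORDER BY order_date
"""
for row in client.query(query).result():
    print(row.order_date, row.revenue)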
Posted 3 weeks ago
7.0 - 9.0 years
32 - 35 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Description: We are looking for an experienced GCP Data Engineer to join our team. The ideal candidate will have strong expertise in designing and developing scalable data pipelines and real-time processing solutions on Google Cloud Platform.
Key Skills Required:
- Hands-on experience in GCP Dataflow and real-time data processing
- Strong programming skills in Java
- Expertise in Spanner and BigQuery (BQ)
- Solid understanding of data modeling, ETL processes, and performance optimization
- Ability to work in a fast-paced, collaborative environment
Preferred Qualifications:
- Strong problem-solving and analytical skills
- Excellent communication and teamwork abilities
- Prior experience in handling large-scale data systems
Posted 3 weeks ago
4.0 - 8.0 years
15 - 25 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Salary: 15 to 25 LPA
Exp: 4 to 7 years
Location: Gurgaon / Pune / Bengaluru
Notice: Immediate to 30 days
Job Profile:
Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. Collaborative team player with a track record of enabling data-driven decision-making across business units.
As a Data Engineer, the candidate will work on assignments for one of our Utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well-documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience in running end-to-end analytics for large-scale organizations.
- Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs.
- Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions.
- Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes.
- Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
- Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics.
- Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud.
- Use scalable Databricks clusters for handling large datasets and complex computations, optimizing performance and cost management.
Must have:
- Client engagement experience and collaboration with cross-functional teams
- Data engineering background in Databricks
- Capable of working effectively as an individual contributor or in collaborative team environments
- Effective communication and thought leadership with a proven record
Candidate Profile:
- Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas
- 3+ years of experience, which must be in data engineering
- Hands-on experience with SQL, Python, Databricks, and cloud platforms like Azure
- Prior experience in managing and delivering end-to-end projects
- Outstanding written and verbal communication skills
- Able to work in a fast-paced, continuously evolving environment and ready to take up uphill challenges
- Able to understand cross-cultural differences and work with clients across the globe
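For orientation, here is a minimal Databricks-style PySpark sketch of the pipeline work described above, using a bronze/silver/gold layering. The ADLS path, schema names, and columns are invented, and it assumes a Databricks (or any Spark plus Delta Lake) runtime with the target schemas already created.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # provided automatically on Databricks

# Bronze: raw utility meter readings landed in ADLS (hypothetical path).
bronze = spark.read.json("abfss://raw@utilitylake.dfs.core.windows.net/meter_readings/")

# Silver: de-duplicated, typed, and quality-filtered.
silver = (bronze.dropDuplicates(["reading_id"])
                .withColumn("reading_ts", F.to_timestamp("reading_ts"))
                .filter(F.col("consumption_kwh") >= 0))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.meter_readings")

# Gold: daily consumption per customer for BI and self-service analytics.
gold = (silver.groupBy("customer_id", F.to_date("reading_ts").alias("reading_date"))
              .agg(F.sum("consumption_kwh").alias("daily_kwh")))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_consumption")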
Posted 3 weeks ago
4.0 - 8.0 years
22 - 25 Lacs
Bengaluru
Work from Office
3+ years of experience as an engineer who has worked in a GCP environment and with its relevant tools/services (BigQuery, Dataproc, Dataflow, Cloud Storage, Terraform, Tekton, Cloud Run, Cloud Scheduler, Astronomer/Airflow, Pub/Sub, Kafka, Cloud Spanner streaming, etc.)
1-2+ years of strong experience in Python development (object-oriented/functional programming, Pandas, PySpark, etc.)
1-2+ years of strong experience in SQL (CTEs, window functions, aggregate functions, etc.)
Posted 3 weeks ago
4.0 - 8.0 years
22 - 25 Lacs
Chennai
Work from Office
3+ years of experience as an engineer who has worked in a GCP environment and with its relevant tools/services (BigQuery, Dataproc, Dataflow, Cloud Storage, Terraform, Tekton, Cloud Run, Cloud Scheduler, Astronomer/Airflow, Pub/Sub, Kafka, Cloud Spanner streaming, etc.)
1-2+ years of strong experience in Python development (object-oriented/functional programming, Pandas, PySpark, etc.)
1-2+ years of strong experience in SQL (CTEs, window functions, aggregate functions, etc.)
Posted 3 weeks ago
4.0 - 8.0 years
22 - 25 Lacs
Hyderabad
Work from Office
3+ years of experience as an engineer who has worked in a GCP environment and with its relevant tools/services (BigQuery, Dataproc, Dataflow, Cloud Storage, Terraform, Tekton, Cloud Run, Cloud Scheduler, Astronomer/Airflow, Pub/Sub, Kafka, Cloud Spanner streaming, etc.)
1-2+ years of strong experience in Python development (object-oriented/functional programming, Pandas, PySpark, etc.)
1-2+ years of strong experience in SQL (CTEs, window functions, aggregate functions, etc.)
Posted 3 weeks ago
5.0 - 8.0 years
17 - 25 Lacs
Hyderabad, Pune
Hybrid
Urgent Hiring: Java GCP Professionals (Top MNC)
Role: Java GCP Developer
Experience: 5 to 8 Years
Work Location: Pune / Hyderabad
Budget: Up to 24 LPA
Notice Period: Immediate to 30 Days
Must-Have Skills:
- Strong experience in Java, Spring Boot, Microservices
- Hands-on expertise in Google Cloud Platform (GCP)
- Experience with GCP Pub/Sub, Dataflow
Secondary Skills:
- Exposure to cloud-based application development
- Good knowledge of CI/CD pipelines, API development, and deployment
- Strong problem-solving and debugging skills
Responsibilities:
- Design, develop, and maintain scalable applications using Java and GCP services
- Implement microservices-based architecture with Spring Boot
- Work with GCP services such as Pub/Sub and Dataflow for data streaming and processing
- Collaborate with cross-functional teams to deliver high-quality solutions
- Ensure best practices in coding, performance, and security
Why Join Us?
- Opportunity to work on cutting-edge GCP cloud projects
- Attractive package up to 24 LPA
- Fast-track career growth with a leading MNC
Interview Mode: Virtual
Posted 3 weeks ago
6.0 - 11.0 years
6 - 11 Lacs
Hyderabad, Pune
Work from Office
Job Responsibilities:
- Configure/customize the NetSuite application to meet customers' business requirements.
- Conduct personalization sessions and document them with meeting-minute summaries.
- Demonstrated experience in participating in and translating customer business requirements into business solutions, either as a software solution or a re-engineering initiative.
- Collaborate with technical team member(s) to help guide the development of customized solutions or data extracts using SQL queries.
- Identify test scenarios, establish test cases, and support SIT and UAT with core client stakeholders to ensure system configuration objectives have been met.
- Create training/support documentation and drive end-user training to promote user adoption.
- Documentation of requirements, process, and user documentation.
- Design business processes and application configuration based on industry best practices.
- Support the go-live deployment processes, ensuring a seamless software launch and continuity of business operations during cutover.
- Responsible for owning and delivering complex solutions using the Oracle NetSuite platform.
- Conduct software testing of all kinds and prepare test cases for the modules implemented and developed.
- Suggest process improvements based on application capability and industry best practices.
Responsible for NetSuite Setups:
- Customer, Vendor, and Item
- Department, Class, Locations
NetSuite Processes:
- Order to Cash
- Procure to Pay
- Bank Reconciliation
- Accounting
- Advanced Revenue Management
- Fixed Asset
- Intercompany Management
- Call to Resolution (Case Management)
- Form Customizations & Fields Creation
- Custom Records
- CSV Imports
- Workflows setup
- Saved Searches & Report Customization
- Integration process mapping
Skills & Experience Required:
- 6+ years of hands-on experience in NetSuite implementation and enhancement projects
- Thorough knowledge of NetSuite functionalities and architecture
- Hands-on experience with NetSuite integration with 3rd-party applications
- Should have a minimum of 4 end-to-end implementations
- Strong communication skills; works closely with customers and partners to gather requirements and design solutions
- Strong NetSuite ERP knowledge and experience: setups and configurations, saved searches and reports
- Mandatory requirement: functional experience in Receivables, Order Management, Case Management, and billing operations within NetSuite
- Excellent command of flowcharts and data flow diagrams
- Strong analytical and problem-solving skills; good team player who collaborates with the rest of the team
- Ready to be on-call on a rotational basis
- Excellent command of Google Sheets, Google Apps, Word, Excel, and PowerPoint
Posted 3 weeks ago
1.0 - 4.0 years
3 - 6 Lacs
Noida
Work from Office
The ideal candidate will be responsible for conceptualizing, designing, and producing engaging video and visual content that aligns with our brand strategy and marketing goals. Key Responsibilities: Develop and execute high-quality video content for social media, marketing campaigns, websites, and internal communications. Design visually compelling graphics, layouts, and illustrations for digital and print media. Collaborate with cross-functional teams to understand project objectives and translate ideas into visual storytelling. Edit raw video footage into polished final products including sound, voiceover, special effects, and graphics. Maintain consistency in visual branding across all design and video outputs. Stay updated with the latest design trends, tools, and technologies. Manage multiple projects with tight deadlines in a fast-paced environment.
Posted 3 weeks ago
1.0 - 3.0 years
2 - 3 Lacs
Bengaluru
Work from Office
Google Cloud Platform: GCS, DataProc, BigQuery, Dataflow
Programming Languages: Java; scripting languages such as Python, Shell Script, SQL
5+ years of experience in IT application delivery with proven experience in agile development methodologies
1 to 2 years of experience in Google Cloud Platform (GCS, DataProc, BigQuery, Composer, data processing with Dataflow)
Posted 3 weeks ago
3.0 - 5.0 years
5 - 15 Lacs
Pune
Hybrid
Responsibilities:
- Design, implement, and manage ETL pipelines on Google Cloud Platform (BigQuery, Dataflow, Pub/Sub, Composer).
- Write complex SQL queries and optimize them for BigQuery performance.
- Work with structured/unstructured data from multiple sources (databases, APIs, streaming).
- Build reusable data frameworks for transformation, validation, and quality checks.
- Collaborate with stakeholders to understand business requirements and deliver analytics-ready datasets.
- Implement best practices in data governance, security, and cost optimization.
Requirements:
- Bachelor's in Computer Science, IT, or a related field.
- Experience in ETL/data engineering.
- Strong Python and SQL skills.
- Hands-on with GCP (BigQuery, Dataflow, Composer, Pub/Sub, Dataproc).
- Experience with orchestration tools (Airflow preferred).
- Knowledge of data modeling and data warehouse design.
- Exposure to CI/CD, Git, and DevOps practices is a plus.
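Since the role pairs Composer (Airflow) with BigQuery, here is a minimal DAG sketch of that orchestration; the DAG id, schedule, project, and SQL are placeholders, and it assumes an Airflow 2 / Composer 2 environment with the Google provider package installed.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_etl",               # hypothetical pipeline name
    schedule_interval="0 2 * * *",          # run daily at 02:00
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    load_daily_sales = BigQueryInsertJobOperator(
        task_id="load_daily_sales",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `my-project.analytics.daily_sales` AS
                    SELECT order_date, SUM(amount) AS revenue
                    FROM `my-project.raw.orders`
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )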
Posted 3 weeks ago
4.0 - 9.0 years
15 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities
- Design, build, and optimize real-time and batch data pipelines using Google Cloud Dataflow (Apache Beam).
- 3-6 years of experience as a Data Engineer with strong knowledge of Google Cloud Platform (GCP).
- Hands-on experience in Dataflow / Apache Beam for real-time and batch processing.
- Experience with Google Cloud Spanner (schema design, query optimization, replication).
- Proficiency in BigQuery for large-scale data analysis and optimization.
- Solid understanding of streaming data architectures (Pub/Sub, Kafka, or similar).
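The posting centres on Java for Dataflow, but the same streaming pattern in the Apache Beam Python SDK gives a compact sketch of what it describes; the subscription, table, and window size are placeholders, and the BigQuery table is assumed to already exist.

import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

options = PipelineOptions(streaming=True)   # on Dataflow you would also pass runner/project flags

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadEvents" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/events-sub")   # placeholder
        | "Parse" >> beam.Map(lambda raw: json.loads(raw.decode("utf-8")))
        | "Window" >> beam.WindowInto(FixedWindows(60))                    # 60-second windows
        | "WriteToBQ" >> beam.io.WriteToBigQuery(
            "my-project:analytics.events",                                 # placeholder table
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)   # table assumed to exist
    )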
Posted 3 weeks ago
13.0 - 18.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Skill required: Tech for Operations - Product Development Management
Designation: AI/ML Computational Science Manager
Qualifications: Any Graduation
Years of Experience: 13 to 18 years
About Accenture
Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com
What would you do?
You will be part of the Technology for Operations team that acts as a trusted advisor and partner to Accenture Operations. The team provides innovative and secure technologies to help clients build an intelligent operating model, driving exceptional results. We work closely with the sales, offering, and delivery teams to identify and build innovative solutions. The Tech for Operations (TFO) team provides innovative and secure technologies to help clients build an intelligent operating model, driving exceptional results, and works closely with the sales, offering, and delivery teams to identify and build innovative solutions. Major sub-deals include AHO (Application Hosting Operations), ISMT (Infrastructure Management), and Intelligent Automation.
You will manage the end-to-end product development process from conception to design and production start-up, including the product structure design, the engineering requirement process, multi-function resource collaboration, and engineering and supply chain integration.
What are we looking for?
- Results orientation
- Problem-solving skills
- Ability to perform under pressure
- Strong analytical skills
- Written and verbal communication
Roles and Responsibilities:
- In this role you are required to identify and assess complex problems for your area of responsibility.
- You will create solutions in situations in which analysis requires an in-depth evaluation of variable factors.
- Requires adherence to strategic direction set by senior management when establishing near-term goals.
- Interaction is with senior management at a client and/or within Accenture, involving matters that may require acceptance of an alternate approach.
- Some latitude in decision-making is involved; you will act independently to determine methods and procedures on new assignments.
- Decisions made in this role have a major day-to-day impact on the area of responsibility.
- The person manages large to medium-sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
- Please note that this role may require you to work in rotational shifts.
Qualification: Any Graduation
Posted 3 weeks ago
15.0 - 20.0 years
3 - 7 Lacs
Navi Mumbai
Work from Office
Project Role: Business and Integration Practitioner
Project Role Description: Assists in documenting the integration strategy endpoints and data flows. Is familiar with the entire project life-cycle, including requirements analysis, coding, testing, deployment, and operations, to ensure successful integration. Under the guidance of the Architect, ensures the integration strategy meets business goals.
Must have skills: ALIP Product Configuration
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As a Business and Integration Practitioner, you will play a crucial role in assisting with the documentation of integration strategies, endpoints, and data flows. Your typical day will involve collaborating with various teams to ensure that the integration processes align with business objectives. You will engage in the entire project life-cycle, from requirements analysis to deployment, ensuring that all aspects of integration are effectively managed and executed. Your contributions will be vital in facilitating seamless integration and supporting the overall success of the project.
Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate communication between stakeholders to ensure alignment on integration strategies.
- Monitor and evaluate integration processes to identify areas for improvement.
Professional & Technical Skills:
- Must-have skills: proficiency in ALIP Product Configuration.
- Strong understanding of integration strategies and data flow documentation.
- Experience with project life-cycle management, including requirements analysis and deployment.
- Ability to collaborate effectively with cross-functional teams.
- Familiarity with testing methodologies and operational processes.
Additional Information:
- The candidate should have a minimum of 5 years of experience in ALIP Product Configuration.
- This position is based in Mumbai.
- 15 years of full-time education is required.
Qualification: 15 years of full-time education
Posted 3 weeks ago
4.0 - 8.0 years
15 - 25 Lacs
Chennai
Work from Office
Role Description
Provides leadership for the overall architecture, design, development, and deployment of a full-stack, cloud-native data analytics platform.
- Designing and augmenting the solution architecture for data ingestion, data preparation, data transformation, data load, ML & simulation modelling, Java BE & FE, state machine, API management, and intelligence consumption using data products, on cloud.
- Understand business requirements and help develop high-level and low-level data engineering and data processing documentation for the cloud-native architecture.
- Developing conceptual, logical, and physical target-state architecture, engineering, and operational specs.
- Work with the customer, users, technical architects, and application designers to define the solution requirements and structure for the platform.
- Model and design the application data structure, storage, and integration.
- Lead the database analysis, design, and build effort.
- Work with the application architects and designers to design the integration solution.
- Ensure that the database designs fulfill the requirements, including data volume, frequency needs, and long-term data growth.
- Able to perform data engineering tasks using Spark.
- Knowledge of developing efficient frameworks for development and testing using Sqoop/NiFi/Kafka/Spark/Streaming/WebHDFS/Python to enable seamless data ingestion processes onto the Hadoop/BigQuery platforms.
- Enabling data governance and data discovery.
- Exposure to job monitoring frameworks along with validation automation.
- Exposure to handling structured, unstructured, and streaming data.
Technical Skills
- Experience with building data platforms on cloud (data lake, data warehouse environment, Databricks).
- Strong technical understanding of data modeling, design, and architecture principles and techniques across master data, transaction data, and derived/analytic data.
- Proven background of designing and implementing architectural solutions which solve strategic and tactical business needs.
- Deep knowledge of best practices through relevant experience across data-related disciplines and technologies, particularly for enterprise-wide data architectures, data management, data governance, and data warehousing.
- Highly competent with database design.
- Highly competent with data modeling.
- Strong data warehousing and business intelligence skills, including handling ELT and scalability issues for an enterprise-level data warehouse, and creating ETLs/ELTs to handle data from various data sources and various formats.
- Strong hands-on experience with programming languages like Python and Scala, with Spark and Beam.
- Solid hands-on and solution-architecting experience in cloud technologies: AWS, Azure, and GCP (GCP preferred).
- Hands-on working experience of data processing at scale with event-driven systems and message queues (Kafka / Flink / Spark Streaming).
- Hands-on working experience with GCP services like BigQuery, DataProc, Pub/Sub, Dataflow, Cloud Composer, API Gateway, Datalake, BigTable, Spark, and Apache Beam, plus feature engineering/data processing for model development.
- Experience gathering and processing raw data at scale (including writing scripts, web scraping, calling APIs, writing SQL queries, etc.).
- Experience building data pipelines for structured/unstructured, real-time/batch, events/synchronous/asynchronous data using MQ, Kafka, and stream processing.
- Hands-on working experience in analyzing source system data and data flows, working with structured and unstructured data.
- Must be very strong in writing Spark SQL queries.
- Strong organizational skills, with the ability to work autonomously as well as lead a team.
- Pleasant personality, strong communication and interpersonal skills.
Qualifications
- A bachelor's degree in computer science, computer engineering, or a related discipline is required to work as a technical lead.
- Certification in GCP would be a big plus.
- Individuals in this field can further display their leadership skills by completing the Project Management Professional certification offered by the Project Management Institute.
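As one concrete example of the event-driven ingestion this role calls for, here is a minimal PySpark Structured Streaming sketch reading from Kafka and landing Parquet; the broker, topic, and storage paths are placeholders, and the spark-sql-kafka connector is assumed to be available on the cluster.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Ingest events from Kafka (assumes the spark-sql-kafka package is on the classpath).
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
          .option("subscribe", "transactions")                # placeholder topic
          .load())

# Kafka values arrive as bytes; decode them and keep the ingestion timestamp.
parsed = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingested_at"),
)

# Land micro-batches as Parquet for downstream batch and ML consumption.
query = (parsed.writeStream.format("parquet")
         .option("path", "gs://data-lake/raw/transactions/")               # placeholder
         .option("checkpointLocation", "gs://data-lake/checkpoints/transactions/")
         .start())
query.awaitTermination()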
Posted 3 weeks ago
5.0 - 9.0 years
9 - 13 Lacs
Bengaluru
Work from Office
Role Description
The Integration Lead and SSIS Developer manages the redesign and migration of integration jobs, including SSIS packages and integrated systems.
Responsibilities:
- Lead the reconfiguration of all integrations pointing to the new D365 tenant / Power Apps.
- Modify SSIS packages to reflect new endpoints and validate data flow.
- Redesign and test ETL jobs with SQL Server and other data sources.
- Perform delta loads and full data sync for go-live.
- Collaborate with Power Platform and Azure teams for end-to-end data movement.
- Document mapping, transformation logic, and integration patterns.
- Support incident management during hypercare.
Mandatory Skills: 3rd-party system integrations (like SAP...), DB integrations (SQL Server), data validation for ETL jobs in SSIS
Additional Skills: QA testing and automation, SQL Server Integration Services (SSIS)
Working Hours: 12:30 PM to 9:30 PM IST
Posted 3 weeks ago