Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Responsibilities:
- Create Solution Outline and Macro Design describing end-to-end product implementation on Data Platforms, covering system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles
- Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation
- Contribute to reusable components / assets / accelerators to support capability development
- Participate in customer presentations as Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies
- Participate in customer PoCs to deliver the outcomes

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems
- 10 - 15 years of experience in data engineering and architecting data platforms
- 5 - 8 years of experience architecting and implementing data platforms on the Azure Cloud Platform
- 5 - 8 years of experience architecting and implementing data platforms on-premises (Hadoop or DW appliance)
- Experience on Azure cloud is mandatory: ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake, Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow
- Experience with the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) on Cloudera or Hortonworks

Preferred technical and professional experience:
- Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
- Experience delivering both business decision support systems (reporting, analytics) and data science domains / use cases
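For illustration only, the descriptive vs. predictive distinction mentioned in the requirements can be sketched in a few lines of plain Python; the data and function names are invented, not part of any platform named above.

```python
# Toy daily-sales series: describe() summarizes what happened (descriptive
# analytics); forecast_next() extrapolates a least-squares linear trend one
# step ahead (a minimal stand-in for predictive analytics).
from statistics import mean

def describe(series):
    """Descriptive analytics: summarize what already happened."""
    return {"count": len(series), "mean": mean(series), "max": max(series)}

def forecast_next(series):
    """Predictive analytics (toy): extrapolate a linear trend to the next point."""
    n = len(series)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(series)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, series)) / \
            sum((x - x_bar) ** 2 for x in xs)
    return y_bar + slope * (n - x_bar)  # fitted value at x = n

daily_sales = [100, 110, 120, 130]
print(describe(daily_sales))       # {'count': 4, 'mean': 115, 'max': 130}
print(forecast_next(daily_sales))  # 140.0 (the data is perfectly linear)
```

A real data product would compute these over the serving layer (Synapse, Databricks SQL, etc.); the split between "summarize the past" and "model the future" is the same.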
Posted 12 hours ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it
- Learn new technologies and apply them in feature development within the time frame provided
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- More than 6 years of overall experience, including 4+ years of strong hands-on experience in Python and Spark
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark
- Strong problem-solving skills

Preferred technical and professional experience:
- Hands-on experience with a cloud technology: AWS, GCP, or Azure
Posted 12 hours ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
As a Senior SAP Consultant, you will serve as a client-facing practitioner, working collaboratively with clients to deliver high-quality solutions and acting as a trusted business advisor with a deep understanding of SAP Accelerate delivery methodology or equivalent and the associated work products. You will work on projects that assist clients in integrating strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries.

Your primary responsibilities include:
- Strategic SAP solution focus: working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs
- Comprehensive solution delivery: involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total 5 - 7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills
- Minimum 4+ years of experience in Big Data technologies, with extensive data engineering experience in Spark / Python or Scala
- Minimum 3 years of experience on cloud data platforms on AWS; exposure to streaming solutions and message brokers such as Kafka
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB
- Good to excellent SQL skills

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Certified Spark Developer
Posted 12 hours ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad
Budget: 3.5x
Notice: Immediate joiners

Requirements:
• BS degree in computer science, computer engineering, or equivalent
• 5-9 years of experience delivering enterprise software solutions
• Familiarity with Spark, Scala, Python, and AWS cloud technologies
• 2+ years of experience across multiple Hadoop / Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, and Scala
• Flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle
• Experience with Agile development methodologies
• Experience with data ingestion and transformation
• Understanding of secure application development methodologies
• Experience with Airflow and Python is preferred
• Understanding of automated QA needs related to Big Data technology
• Strong object-oriented design and analysis skills
• Excellent written and verbal communication skills

Responsibilities:
• Utilize your software engineering skills, including Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
• Integrate new data sources and tools
• Implement scalable and reliable distributed data replication strategies
• Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
• Perform analysis of large data sets using components from the Hadoop ecosystem
• Own product features from development and testing through to production deployment
• Evaluate big data technologies and prototype solutions to improve our data processing architecture
• Automate different pipelines
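A minimal, purely illustrative sketch of the ingest → validate → transform steps listed above; the field names, records, and validation rule are all invented for this example.

```python
# Toy ingestion pipeline: parse raw JSON lines, drop records that fail a
# validation rule, then aggregate the survivors.
import json

RAW_EVENTS = [
    '{"user": "a1", "amount": "30.5", "country": "IN"}',
    '{"user": "b2", "amount": "bad", "country": "IN"}',   # fails validation
    '{"user": "c3", "amount": "12.0", "country": "US"}',
]

def ingest(lines):
    """Ingestion: turn raw text lines into structured records."""
    return [json.loads(line) for line in lines]

def validate(record):
    """Validation: keep only records whose amount parses as a number."""
    try:
        float(record["amount"])
        return True
    except (KeyError, ValueError):
        return False

def transform(records):
    """Transformation: filter bad rows and aggregate the rest."""
    good = [r for r in records if validate(r)]
    return {"rows": len(good),
            "total_amount": sum(float(r["amount"]) for r in good)}

print(transform(ingest(RAW_EVENTS)))  # {'rows': 2, 'total_amount': 42.5}
```

In a real stack the same three stages would run on Spark with a dead-letter path for the rejected rows; the shape of the pipeline is what this sketch shows.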
Posted 20 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.

Your Role And Responsibilities
The developer leads cloud application development and deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Preferred Education: Master's Degree

Required Technical And Professional Expertise:
- Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns
- Strong knowledge of ORM tools like Hibernate or JPA and of Java-based microservices frameworks; hands-on experience with Spring Boot microservices
- Strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL)
- Experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development
- Familiarity with Ant, Maven, or another build automation framework; good knowledge of basic UNIX commands

Preferred Technical And Professional Experience:
- Experience in concurrent design and multi-threading

Primary Skills:
- Core Java, Spring Boot, Java2/EE, Microservices
- Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.)
- Spark
- Good to have: Python
Posted 1 day ago
0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Primary skills: Domain->User Experience Design->Interaction design, Technology->Cloud Platform->Microservices, Technology->Java->Java - ALL, Technology->Java->Springboot, Technology->Reactive Programming->React JS

A day in the life of an Infoscion:
As part of the Infosys delivery team, your primary role would be to ensure effective design, development, validation, and support activities, to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in a detailed manner and translate them into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Responsibilities:
- Develop and deliver web-based applications on time and with very high code quality
- Work closely with the creative team to ensure high-fidelity implementations
- Analyze and recommend performance optimizations of the code and ensure code complies with security best practices
- Debug and troubleshoot problems in live applications
- Scope and draft a Technical Design Document and a Technical Requirements Document
- Create detailed technical requirements with architectural guidance on how to implement them; collaborate with the project manager to develop sound technical requirements that clearly outline the architecture and tasks required to develop a project
- Review code written by developers
- Define stories and tasks for the implementations, assigning them for closure

Requirements:
- Experience in building scalable and high-performance software systems
- Strong knowledge of HTTP, web performance, SEO, and web standards
- Strong knowledge of web services, REST, MapReduce, message queues, and caching strategies
- Strong knowledge of database management systems, including RDBMS, NoSQL, etc.
- Good knowledge of web security, cryptography, and security compliance such as PCI, PII, etc.
- Knowledge of architectural design patterns, performance tuning, and database and functional designs
- Hands-on experience in Service-Oriented Architecture
- Ability to lead solution development and delivery for the designed solutions
- Experience in designing high-level and low-level documents is a plus
- Good understanding of the SDLC is a prerequisite
- Awareness of the latest technologies and trends
- Logical thinking and problem-solving skills, along with an ability to collaborate
- Team management, communication, and decision-making skills
- Technical strength in mobile and web technologies, including strong front-end development
Posted 1 day ago
6.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Job Requirements
Role/Job Title: Senior Data Engineer
Business: New Age
Function/Department: Data & Analytics
Place of Work: Mumbai/Bangalore

Roles & Responsibilities:
- Minimum 6 years of data engineering experience, including 3 years in a large-scale Data Lake ecosystem
- Proven expertise in SQL, Spark, Python, Scala, and the Hadoop ecosystem; have worked on multiple TBs/PBs of data volume from ingestion to consumption
- Work with business stakeholders to identify and document high-impact business problems and potential solutions
- First-hand experience with the complete software development life cycle, including requirement analysis, design, development, deployment, and support
- Advanced understanding of Data Lake/Lakehouse architecture and experience/exposure to Hadoop (Cloudera, Hortonworks) and AWS
- Work on the end-to-end data lifecycle, from data ingestion and data transformation to the data consumption layer
- Versed in APIs and their usability
- A suitable candidate will also be proficient in Spark, Spark Streaming, AWS, and EMR
- A suitable candidate will also demonstrate machine learning experience and experience with big data infrastructure, including MapReduce, Hive, HDFS, YARN, HBase, Oozie, etc.
- The candidate will additionally demonstrate substantial experience and a deep knowledge of data mining techniques and relational and non-relational databases
- Advanced skills in technical debugging of the architecture in case of issues
- Creating technical design documentation (HLD/LLD) for projects/pipelines

Secondary Responsibilities:
- Ability to work independently and handle your own development effort
- Excellent oral and written communication skills
- Learn and use internally available analytic technologies
- Identify key performance indicators and establish strategies for how to deliver on these key points for analysis solutions
- Use your educational background in data engineering and perform data mining analysis
- Work with BI analysts/engineers to create prototypes, implementing traditional classifiers and determiners and predictive and regressive analysis points
- Engage in the delivery and presentation of solutions

Managerial & Leadership Responsibilities:
- Lead moderately complex initiatives within Technology and contribute to large-scale data processing framework initiatives related to enterprise strategy deliverables
- Build and maintain optimized and highly available data pipelines that facilitate deeper analysis and reporting
- Review and analyze moderately complex business, operational, or technical challenges that require an in-depth evaluation of variable factors
- Oversee the data integration work, including integrating a data model with the data lake, maintaining a data warehouse and analytics environment, and writing scripts for data integration and analysis
- Resolve moderately complex issues and lead teams to meet data engineering deliverables while leveraging a solid understanding of data information policies, procedures, and compliance requirements
- Collaborate and consult with colleagues and managers to resolve data engineering issues and achieve strategic goals

Key Success Metrics:
- Ensure timely deliverables
- Spot data fixes
- Lead technical aspects of the projects
- Error-free deliverables
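The end-to-end ingestion → transformation → consumption lifecycle mentioned above can be sketched, purely for illustration, as three composable stages; all stage names and data are invented.

```python
# Toy three-stage pipeline: each stage is a plain function, and the pipeline
# is just a left fold over the list of stages.
from functools import reduce

def ingestion(_):
    """Pull raw rows from a (simulated) source."""
    return [{"id": 1, "clicks": 3}, {"id": 2, "clicks": 7}]

def transformation(rows):
    """Derive a new column from the raw rows."""
    return [{**r, "clicks_x2": r["clicks"] * 2} for r in rows]

def consumption(rows):
    """Shape the data for the serving layer: id -> derived metric."""
    return {r["id"]: r["clicks_x2"] for r in rows}

pipeline = [ingestion, transformation, consumption]
result = reduce(lambda data, stage: stage(data), pipeline, None)
print(result)  # {1: 6, 2: 14}
```

The same fold-over-stages shape is what Spark jobs and orchestrated DAGs implement at scale; here it fits in a dozen lines.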
Posted 2 days ago
3.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
As a Data Engineer, you are required to:
- Design, build, and maintain data pipelines that efficiently process and transport data from various sources to storage systems or processing environments, while ensuring data integrity, consistency, and accuracy across the entire data pipeline
- Integrate data from different systems, often involving data cleaning, transformation (ETL), and validation
- Design the structure of databases and data storage systems, including the design of schemas, tables, and relationships between datasets, to enable efficient querying
- Work closely with data scientists, analysts, and other stakeholders to understand their data needs and ensure that the data is structured in a way that makes it accessible and usable
- Stay up-to-date with the latest trends and technologies in the data engineering space, such as new data storage solutions, processing frameworks, and cloud technologies
- Evaluate and implement new tools to improve data engineering processes

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Science or Engineering is desirable.
Experience level: At least 3 - 5 years of hands-on experience in Data Engineering and ETL.
Desired Knowledge & Experience:
- Spark: Spark 3.x, RDD/DataFrames/SQL, batch/Structured Streaming; knowledge of Spark internals: Catalyst/Tungsten/Photon
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Testing: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, continuous delivery, acceptance testing
- Big data design: Lakehouse/medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction
- Languages: Python/functional programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: data lake and big data storage design

Additionally, it is helpful to know the basics of:
- Data pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, stored procedures
- Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
- Data catalog: Azure Purview, Apache Atlas, Informatica

Required Soft Skills & Other Capabilities:
- Great attention to detail and good analytical abilities
- Good planning and organizational skills
- Collaborative approach to sharing ideas and finding solutions
- Ability to work independently and also in a global team environment
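As a small illustration of the lakehouse/medallion idea listed above, here is a toy bronze → silver → gold flow in plain Python; the records and cleaning rules are invented, and a real implementation would use Spark with Delta tables per layer.

```python
# Medallion sketch: bronze keeps raw records as ingested, silver cleans and
# deduplicates, gold aggregates for consumption.
bronze = [
    {"order_id": "o1", "amount": "10"},
    {"order_id": "o1", "amount": "10"},   # duplicate of o1
    {"order_id": "o2", "amount": None},   # bad record: missing amount
    {"order_id": "o3", "amount": "25"},
]

def to_silver(rows):
    """Silver: drop bad records, dedupe on the business key, fix types."""
    seen, out = set(), []
    for r in rows:
        if r["amount"] is not None and r["order_id"] not in seen:
            seen.add(r["order_id"])
            out.append({"order_id": r["order_id"], "amount": float(r["amount"])})
    return out

def to_gold(rows):
    """Gold: consumption-ready aggregate over the cleaned rows."""
    return {"orders": len(rows), "revenue": sum(r["amount"] for r in rows)}

silver = to_silver(bronze)
print(to_gold(silver))  # {'orders': 2, 'revenue': 35.0}
```

The design point the layers buy you: raw data is never lost (bronze), cleaning logic is replayable (silver), and consumers only ever read curated aggregates (gold).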
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Software Engineer
This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure, and robust solutions. It's a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders. We're offering this role at associate level.

What you'll do:
In your new role, you'll be working within a feature team to engineer software, scripts, and tools, as well as liaising with other engineers, architects, and business analysts across the platform.

You'll also be:
- Producing complex and critical software rapidly and of high quality, which adds value to the business
- Working in permanent teams who are responsible for the full life cycle, from initial development through enhancement and maintenance to replacement or decommissioning
- Collaborating to optimise our software engineering capability
- Designing, producing, testing, and implementing our working software solutions
- Working across the life cycle, from requirements analysis and design through coding to testing, deployment, and operations

The skills you'll need:
To take on this role, you'll need at least four years of experience in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers.

You'll also need:
- Experience of working with development and testing tools, bug tracking tools, and wikis
- Experience in AWS native services, particularly S3, Glue, Lambda, IAM, and Elastic MapReduce
- Strong proficiency in Terraform for the AWS cloud, and in Python for developing AWS Lambdas, Airflow DAGs, and shell scripting
- Experience with Apache Airflow for workflow orchestration
- Experience of DevOps and Agile methodology and the associated toolsets
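The workflow-orchestration idea behind an Airflow DAG can be sketched with the standard library alone; the task names below are invented, and a real DAG would use Airflow operators and scheduling rather than a plain topological sort.

```python
# A DAG is "tasks plus dependencies"; the scheduler's core job is to run
# tasks in a dependency-respecting order. graphlib (stdlib, Python 3.9+)
# gives exactly that ordering.
from graphlib import TopologicalSorter

# Each task maps to the set of tasks that must finish before it starts.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'notify']
```

Airflow adds retries, scheduling, and parallel execution of independent branches on top of this ordering, but the dependency model is the same.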
Posted 3 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location - Bengaluru

Sigmoid works with a variety of clients, from start-ups to Fortune 500 companies. We are looking for a detail-oriented self-starter to assist our engineering and analytics teams in various roles as a Software Development Engineer. This position will be part of a growing team working towards building world-class, large-scale Big Data architectures. This individual should have a sound understanding of programming principles and experience programming in Java, Python, or similar languages, and can expect to spend a majority of their time coding.

Responsibilities:
- Good development practices: hands-on coder with good experience in programming languages like Java, Python, or Scala
- Hands-on experience with the Big Data stack: Hadoop, MapReduce, Spark, HBase, and Elasticsearch
- Good understanding of programming principles and development practices like check-in policy, unit testing, and code deployment
- Self-starter able to grasp new concepts and technology and translate them into large-scale engineering developments
- Excellent experience in application development and support, integration development, and data management
- Align Sigmoid with key client initiatives: interface daily with customers across leading Fortune 500 companies to understand strategic requirements
- Stay up-to-date on the latest technology to ensure the greatest ROI for customers and Sigmoid
- Hands-on coder with a good understanding of enterprise-level code
- Design and implement APIs, abstractions, and integration patterns to solve challenging distributed computing problems
- Experience in defining technical requirements, data extraction, data transformation, automating jobs, productionizing jobs, and exploring new big data technologies within a parallel processing environment

Culture:
- Must be a strategic thinker with the ability to think unconventionally and out of the box
- Analytical and data-driven orientation; raw intellect, talent, and energy are critical
- Entrepreneurial and agile: understands the demands of a private, high-growth company
- Ability to be both a leader and a hands-on "doer"

Qualifications:
- A track record of relevant work experience and a Computer Science or related technical discipline is required
- Experience with functional and object-oriented programming; Java, Python, or Scala is a must
- Hands-on experience with the Big Data stack: Hadoop, MapReduce, Spark, HBase, and Elasticsearch
- Good understanding of AWS services and experience working with APIs and microservices
- Effective communication skills (both written and verbal)
- Ability to collaborate with a diverse set of engineers, data scientists, and product managers
- Comfort in a fast-paced start-up environment

Preferred Qualifications:
- Experience in Agile methodology
- Experience with database modeling and development, data mining, and warehousing
- Experience in the architecture and delivery of enterprise-scale applications, and capability in developing frameworks, design patterns, etc.
- Able to understand and tackle technical challenges, propose comprehensive solutions, and guide junior staff
- Experience working with large, complex data sets from a variety of sources
Posted 3 days ago
15.0 - 20.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Neo4j, Stardog
Good-to-have skills: Java
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Mentor junior team members to enhance their skills and knowledge in data engineering
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness

Professional & Technical Skills:
- Must-have skills: proficiency in Neo4j
- Good-to-have skills: experience with Java
- Strong understanding of data modeling and graph database concepts
- Experience with data integration tools and ETL processes
- Familiarity with data quality frameworks and best practices
- Proficient in programming languages such as Python or Scala for data manipulation

Additional Information:
- The candidate should have a minimum of 5 years of experience in Neo4j
- This position is based at our Bengaluru office
- A 15 years full-time education is required

Qualification: 15 years full time education
Posted 4 days ago
15.0 - 20.0 years
4 - 8 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Neo4j, Stardog
Good-to-have skills: Java
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME
- Collaborate with and manage the team to perform
- Responsible for team decisions
- Engage with multiple teams and contribute to key decisions
- Provide solutions to problems for their immediate team and across multiple teams
- Mentor junior team members to enhance their skills and knowledge in data engineering
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness

Professional & Technical Skills:
- Must-have skills: proficiency in Neo4j
- Good-to-have skills: experience with Java
- Strong understanding of data modeling and graph database concepts
- Experience with data integration tools and ETL processes
- Familiarity with data quality frameworks and best practices
- Proficient in programming languages such as Python or Scala for data manipulation

Additional Information:
- The candidate should have a minimum of 5 years of experience in Neo4j
- This position is based at our Pune office
- A 15 years full-time education is required

Qualification: 15 years full time education
Posted 4 days ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Apache Hadoop
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and enhance application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME
- Active participation/contribution in team discussions is required
- Contribute to providing solutions to work-related problems
- Develop and implement software solutions to meet business requirements
- Collaborate with cross-functional teams to enhance application functionality
- Conduct code reviews and provide technical guidance to team members
- Troubleshoot and debug applications to ensure optimal performance
- Stay updated on industry trends and technologies to drive continuous improvement

Professional & Technical Skills:
- Must-have skills: proficiency in Apache Hadoop
- Strong understanding of distributed computing principles
- Experience with data processing frameworks like MapReduce and Spark
- Hands-on experience in designing and implementing scalable data pipelines
- Knowledge of Hadoop ecosystem components such as HDFS, YARN, and Hive

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Hadoop
- This position is based at our Hyderabad office
- A 15 years full-time education is required

Qualification: 15 years full time education
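As a minimal illustration of the MapReduce model this role names, here is a toy word count in plain Python; a real job would run the map, shuffle, and reduce phases distributed across a Hadoop or Spark cluster.

```python
# MapReduce in miniature: map each input line to (key, value) pairs, shuffle
# pairs into per-key groups, then reduce each group to a single value.
from collections import defaultdict
from itertools import chain

def mapper(line):
    """Map phase: emit (word, 1) for every word in the line."""
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle phase: group values by key (done by the framework in Hadoop)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(key, values):
    """Reduce phase: collapse each key's values to a total."""
    return key, sum(values)

lines = ["big data big", "data pipelines"]
mapped = chain.from_iterable(mapper(l) for l in lines)
result = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(result)  # {'big': 2, 'data': 2, 'pipelines': 1}
```

The point of the split: map and reduce are stateless per key, so the framework can run them on thousands of nodes and only the shuffle needs cross-node coordination.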
Posted 4 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Data Solution Architect (Azure; Databricks). In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!

Responsibilities:
- Design data analytics solutions utilizing the big data technology stack
- Create and present solution architecture documents with technical details
- Collaborate with business stakeholders to identify solution requirements and key scenarios
- Conduct solution architecture reviews and audits while calculating and presenting ROI
- Lead implementation of solutions from establishing project requirements to go-live
- Engage in pre-sale activities, including customer communications and RFP processing
- Develop proposals and design solutions while presenting architecture to customers
- Create and follow a personal education plan in the technology stack and solution architecture
- Maintain knowledge of industry trends and best practices
- Engage new clients to drive business growth in the big data space

Requirements:
- Strong hands-on experience as a Big Data developer with a solid design background
- Experience delivering data analytics projects and architecture guidelines
- Experience in big data solutions on premises and in the cloud
- Production project experience in at least one big data technology
- Knowledge of batch processing frameworks like Hadoop, MapReduce, Spark, or Hive
- Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
- Understanding of Agile development methodology, with an emphasis on Scrum
- Experience in direct customer communications and pre-sales consulting
- Experience working within a consulting environment would be highly valuable
Posted 4 days ago
8.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Work from Office
What you’ll be doing: Assist in developing machine learning models based on project requirements Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality. Performing statistical analysis and fine-tuning using test results. Support training and retraining of ML systems as needed. Help build data pipelines for collecting and processing data efficiently. Follow coding and quality standards while developing AI/ML solutions Contribute to frameworks that help operationalize AI models What we seek in you: 8+ years of experience in IT Industry Strong on programming languages like Python One cloud hands-on experience (GCP preferred) Experience working with Dockers Environments managing (e.g venv, pip, poetry, etc.) Experience with orchestrators like Vertex AI pipelines, Airflow, etc Understanding of full ML Cycle end-to-end Data engineering, Feature Engineering techniques Experience with ML modelling and evaluation metrics Experience with Tensorflow, Pytorch or another framework Experience with Models monitoring Advance SQL knowledge Aware of Streaming concepts like Windowing, Late arrival, Triggers etc Storage: CloudSQL, Cloud Storage, Cloud Bigtable, Bigquery, Cloud Spanner, Cloud DataStore, Vector database Ingest: Pub/Sub, Cloud Functions, AppEngine, Kubernetes Engine, Kafka, Micro services Schedule: Cloud Composer, Airflow Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink CI/CD: Bitbucket+Jenkins / Gitlab, Infrastructure as a tool: Terraform Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. 
Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution
- Abundant opportunities for engagement with customers, product managers, and leadership
- Guidance along progressive career paths, with insightful input from managers through ongoing feedforward sessions
- Cultivate and leverage robust connections within diverse communities of interest
- Choose your mentor to navigate your current endeavors and steer your future trajectory
- Embrace continuous learning and upskilling opportunities through Nexversity
- Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies
- Embrace a hybrid work model promoting work-life balance
- Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones
- Embark on accelerated career paths to actualize your professional aspirations

Who we are: We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers. Join our passionate team and tailor your growth with us!
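The streaming concepts this role asks for (windowing, late arrival, triggers) can be illustrated with a minimal pure-Python sketch of tumbling-window counting with an allowed-lateness cutoff. The event format, window size, and lateness bound here are hypothetical; a production system would express this in Dataflow/Flink/Spark as listed above.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size=60, allowed_lateness=30):
    """Assign (timestamp, key) events to fixed windows, dropping events
    that arrive more than `allowed_lateness` behind the watermark."""
    counts = defaultdict(int)      # (window_start, key) -> count
    watermark = float("-inf")      # highest event time seen so far
    dropped = []
    for ts, key in events:
        watermark = max(watermark, ts)
        if ts < watermark - allowed_lateness:
            dropped.append((ts, key))  # too late: excluded from results
            continue
        window_start = (ts // window_size) * window_size
        counts[(window_start, key)] += 1
    return dict(counts), dropped

# Out-of-order stream: (10, "b") and (20, "a") arrive after the
# watermark has already advanced past their lateness allowance.
events = [(5, "a"), (70, "a"), (10, "b"), (130, "a"), (20, "a")]
counts, dropped = tumbling_window_counts(events)
```

Real engines add triggers (when to emit a window's partial result) on top of this assignment/lateness logic; the sketch only shows the window assignment and late-drop decisions.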
Posted 4 days ago
2.0 - 4.0 years
0 Lacs
India
On-site
Description: GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com.

We believe that innovative technology starts with the best talent and have been ranked one of Ad Age’s Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here.

You Will:
- Create and maintain various ingestion pipelines for the GroundTruth platform
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Work with stakeholders, including the Product, Analytics and Client Services teams, to assist with data-related technical issues and support their data infrastructure needs
- Prepare detailed specifications and low-level design
- Participate in code reviews
- Test the product in controlled, real situations before going live
- Maintain the application once it is live
- Contribute ideas to improve the data platform

You Have:
- B.Tech./B.E./M.Tech./MCA or equivalent in computer science
- 2-4 years of experience in Software Engineering
- Experience with the AWS stack used for data engineering: EC2, S3, EMR, ECS, Lambda, and Step Functions
- Hands-on experience with Python/Java for orchestration of data pipelines
- Experience in writing analytical queries using SQL
- Experience in Airflow
- Experience in Docker
- Proficiency in Git

How can you impress us?
- Knowledge of REST APIs
- Any experience with big data technologies like Hadoop, MapReduce, and Pig is a plus
- Knowledge of shell scripting
- Experience with BI tools like Looker
- Experience with DB maintenance
- Experience with Amazon Web Services and Docker
- Configuration management and QA practices

Benefits: At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave: Maternity and Paternity
- Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday Leave, Bereavement Leave & Company Holidays)
- In-office daily catered breakfast, lunch, snacks and beverages
- Health cover for any hospitalization; covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/Gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (Mobile Subsidy program)
- Internet reimbursement/postpaid cell phone bill/or both
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- National Pension System employer match
- Meal card for tax benefit
- Special benefits on salary account
Posted 4 days ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development/Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0525-0430
Employment Type: Full Time

Position Description:
Company Profile: At CGI, we’re a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please.

Job Title: Senior Software Engineer
Position: Senior Software Engineer - Node, AWS and Terraform
Experience: 5-8 Years
Category: Software Development/Engineering
Main location: Hyderabad/Chennai/Bangalore
Position ID: J0525-0430
Employment Type: Full Time

Responsibilities:
- Design, develop, and maintain robust and scalable server-side applications using Node.js and JavaScript/TypeScript
- Develop and consume RESTful APIs and integrate with third-party services
- In-depth knowledge of the AWS cloud, including familiarity with services such as S3, Lambda, DynamoDB, Glue, Apache Airflow, SQS, SNS, ECS, Step Functions, EMR (Elastic MapReduce), EKS (Elastic Kubernetes Service), and Key Management Service
- Hands-on experience with Terraform
- Specialization in designing and developing fully automated end-to-end data processing pipelines for large-scale data ingestion, curation, and transformation
- Experience in deploying Spark-based ingestion frameworks, testing automation tools, and CI/CD pipelines
- Knowledge of unit testing frameworks and best practices
- Working experience with SQL and NoSQL (preferred) databases, including joins, aggregations, window functions, date functions, partitions, indexing, and performance improvement ideas
- Experience with database systems such as Oracle, MySQL, PostgreSQL, MongoDB, or other NoSQL databases
- Familiarity with ORM/ODM libraries (e.g., Sequelize, Mongoose)
- Proficiency in using Git for version control
- Understanding of testing frameworks (e.g., Jest, Mocha, Chai) and writing unit and integration tests
- Collaborate with front-end developers to integrate user-facing elements with server-side logic
- Design and implement efficient database schemas and ensure data integrity
- Write clean, well-documented, and testable code
- Participate in code reviews to ensure code quality and adherence to coding standards
- Troubleshoot and debug issues in development and production environments
- Knowledge of security best practices for web applications (authentication, authorization, data validation)
- Strong communication and collaboration skills
- Effective communication skills to interact with technical and non-technical stakeholders

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation.
Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process, and we will work with you to address your needs.

Skills: Node.js, RESTful APIs, Terraform

What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because…
- You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction.
- Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
- You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.

Come join our team—one of the largest IT and business consulting services firms in the world.
Posted 4 days ago
0.0 - 3.0 years
0 Lacs
India
On-site
Description: GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com.

We believe that innovative technology starts with the best talent and have been ranked one of Ad Age’s Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here.

About Team: GroundTruth seeks an Associate Software Engineer to join our Reporting team. The Reporting Team at GroundTruth is responsible for designing, building, and maintaining data pipelines and dashboards that deliver actionable insights. We ensure accurate and timely reporting to drive data-driven decisions for advertisers and publishers. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As an Associate Software Engineer (ASE) on our Integration Team, you will build solutions that add new capabilities to our platform.

You Will:
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Lead engineering efforts across multiple software components
- Write excellent production code and tests, and help others improve in code reviews
- Analyse high-level requirements to design, document, estimate, and build systems
- Continuously improve the team's practices in code quality, reliability, performance, testing, automation, logging, monitoring, alerting, and build processes

You Have:
- B.Tech./B.E./M.Tech./MCA or equivalent in computer science
- 0-3 years of experience in Data Engineering
- Experience with the AWS stack used for data engineering: EC2, S3, Athena, Redshift, EMR, ECS, Lambda, and Step Functions
- Experience in MapReduce, Spark, and Glue
- Hands-on experience with Java/Python for the orchestration of data pipelines and data engineering tasks
- Experience in writing analytical queries using SQL
- Experience in Airflow
- Experience in Docker
- Proficiency in Git

How can you impress us?
- Knowledge of REST APIs
- The following skills/certifications: Python, SQL/MySQL, AWS, Git
- Additional nice-to-have skills/certifications: Flask, FastAPI
- Knowledge of shell scripting
- Experience with BI tools like Looker
- Experience with DB maintenance
- Experience with Amazon Web Services and Docker
- Configuration management and QA practices

Benefits: At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave: Maternity and Paternity
- Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday Leave, Bereavement Leave & Company Holidays)
- In-office daily catered breakfast, lunch, snacks and beverages
- Health cover for any hospitalization; covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/Gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (Mobile Subsidy program)
- Internet reimbursement/postpaid cell phone bill/or both
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- National Pension System employer match
- Meal card for tax benefit
- Special benefits on salary account
Posted 5 days ago
12.0 - 15.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Skill: Java, Spark, Kafka
Experience: 10 to 16 years
Location: Hyderabad

As a Data Engineer, you will:
- Support in designing and rolling out the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
- Identify data sources, design and implement data schemas/models, and integrate data to meet the requirements of business stakeholders
- Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization
- Work with business, IT and data stakeholders to support data-related technical issues and data infrastructure needs, and to build the most flexible and scalable data platform
- With a strong focus on DataOps, design, develop and deploy scalable batch and/or real-time data pipelines
- Design, document, test and deploy ETL/ELT processes
- Find the right tradeoffs between the performance, reliability, scalability, and cost of the data pipelines you implement
- Monitor data processing efficiency and propose solutions for improvements
- Have the discipline to create and maintain comprehensive project documentation
- Build and share knowledge with colleagues and coach junior profiles
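The extract-transform-load shape described in this role can be sketched in a few lines of pure Python. The record format, field names, and validation rule below are hypothetical; a production pipeline would express the same stages in Spark/Kafka as the skills list indicates, but the structure (extract, validate-and-derive, load, with bad rows counted rather than fatal) carries over.

```python
def extract(raw_lines):
    """Parse CSV-like lines into records (hypothetical source format)."""
    for line in raw_lines:
        user_id, amount = line.strip().split(",")
        yield {"user_id": user_id, "amount": float(amount)}

def transform(records, min_amount=0.0):
    """Drop invalid rows and derive a field; rejects are counted, not fatal."""
    rejected = 0
    for rec in records:
        if rec["amount"] < min_amount:
            rejected += 1          # a real pipeline would emit this as a metric
            continue
        rec["amount_cents"] = int(round(rec["amount"] * 100))
        yield rec

def load(records, sink):
    """Append transformed records to the target (here, a plain list)."""
    sink.extend(records)

sink = []
load(transform(extract(["u1,10.5", "u2,-3.0", "u3,2.25"])), sink)
```

Because each stage is a generator, records stream through one at a time rather than materializing intermediate datasets, which is the same tradeoff (throughput vs. memory) the posting's "performance, reliability, scalability, and cost" bullet refers to.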
Posted 5 days ago
5.0 - 10.0 years
10 - 15 Lacs
Chennai, Bengaluru
Work from Office
Job requisition ID: JR1027452

Overall Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations

Category-wise Technical Skills:
- PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques
- Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala)
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks
- Scripting and Automation: Strong scripting skills in Linux

Experience:
- 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform
- Proven track record of implementing data engineering best practices
- Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform

Day-to-Day Activities:
- Design, develop, and maintain ETL pipelines using PySpark on CDP
- Implement and manage data ingestion processes from various sources
- Process, cleanse, and transform large datasets using PySpark
- Conduct performance tuning and optimization of ETL processes
- Implement data quality checks and validation routines
- Automate data workflows using orchestration tools
- Monitor pipeline performance and troubleshoot issues
- Collaborate with team members to understand data requirements
- Maintain documentation of data engineering processes and configurations

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field
- Relevant certifications in PySpark and Cloudera technologies are a plus

Soft Skills:
- Strong analytical and problem-solving skills
- Excellent verbal and written communication abilities
- Ability to work independently and collaboratively in a team environment
- Attention to detail and commitment to data quality
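The "data quality checks and validation routines" in this role boil down to applying named predicates over every row and reporting failure counts. Here is a pure-Python sketch; the rule names and dataset are hypothetical, and a CDP pipeline would express the same checks as PySpark DataFrame filters.

```python
def run_quality_checks(rows, checks):
    """Apply named predicate checks to every row; return failure counts per check."""
    failures = {name: 0 for name in checks}
    for row in rows:
        for name, predicate in checks.items():
            if not predicate(row):
                failures[name] += 1
    return failures

rows = [
    {"id": 1, "amount": 10.0, "country": "IN"},
    {"id": 2, "amount": -5.0, "country": "IN"},   # fails the amount rule
    {"id": 3, "amount": 7.5,  "country": ""},     # fails the country rule
]
checks = {
    "amount_non_negative": lambda r: r["amount"] >= 0,
    "country_present":     lambda r: bool(r["country"]),
}
failures = run_quality_checks(rows, checks)
```

In practice the failure counts feed a monitoring/alerting threshold (e.g., fail the pipeline run when a check exceeds some rate), which is the "monitoring and validation routines" half of the responsibility.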
Posted 5 days ago
0.0 - 3.0 years
0 Lacs
India
On-site
Description: GroundTruth is an advertising platform that turns real-world behavior into marketing that drives in-store visits and other real business results. We use observed real-world consumer behavior, including location and purchase data, to create targeted advertising campaigns across all screens, measure how consumers respond, and uncover unique insights to help optimize ongoing and future marketing efforts. With this focus on media, measurement, and insights, we provide marketers with tools to deliver media campaigns that drive measurable impact, such as in-store visits, sales, and more. Learn more at groundtruth.com.

We believe that innovative technology starts with the best talent and have been ranked one of Ad Age’s Best Places to Work in 2021, 2022, 2023 & 2025! Learn more about the perks of joining our team here.

A Bit About Team: GroundTruth seeks a Data Engineering Associate Software Engineer to join our Integration team. The Integration Team connects and consolidates data pipelines across Avails & Inventory Forecast, Identity Graph, and POS Integration systems to ensure accurate, timely insights. We engineer seamless data flows that fuel reliable analytics and decision-making using big data technologies such as MapReduce, Spark, and Glue. We take pride in building an Engineering Team composed of strong communicators who collaborate with multiple business and engineering stakeholders to find compromises and solutions. Our engineers are organised and detail-oriented team players who are problem solvers with a maker mindset. As an Associate Software Engineer (ASE) on our Integration Team, you will build solutions that add new capabilities to our platform.

You Will:
- Create and maintain various data pipelines for the GroundTruth platform
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Work with stakeholders, including the Product, Analytics and Client Services teams, to assist with data-related technical issues and support their data infrastructure needs
- Prepare detailed specifications and low-level design
- Participate in code reviews
- Test the product in controlled, real situations before going live
- Maintain the application once it is live
- Contribute ideas to improve the location platform

You Have:
- B.Tech./B.E./M.Tech./MCA or equivalent in computer science
- 0-3 years of experience in Data Engineering
- Experience with the AWS stack used for data engineering: EC2, S3, Athena, Redshift, EMR, ECS, Lambda, and Step Functions
- Experience in Hadoop, MapReduce, Pig, Spark, and Glue
- Hands-on experience with Java/Python for the orchestration of data pipelines and data engineering tasks
- Experience in writing analytical queries using SQL
- Experience in Airflow
- Experience in Docker
- Proficiency in Git

How can you impress us?
- Knowledge of REST APIs
- Any experience with big data technologies like Hadoop, MapReduce, and Pig is a plus
- Knowledge of shell scripting
- Experience with BI tools like Looker
- Experience with DB maintenance
- Experience with Amazon Web Services and Docker
- Configuration management and QA practices

Benefits: At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave: Maternity and Paternity
- Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday Leave, Bereavement Leave & Company Holidays)
- In-office daily catered breakfast, lunch, snacks and beverages
- Health cover for any hospitalization; covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/Gym reimbursement
- Pet expense reimbursement
- Childcare expenses and reimbursements
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (Mobile Subsidy program)
- Internet reimbursement/postpaid cell phone bill/or both
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax-saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% of basic
- Creche reimbursement
- Co-working space reimbursement
- National Pension System employer match
- Meal card for tax benefit
- Special benefits on salary account
Posted 5 days ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary

Strategy & Analytics - AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
- Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients, and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate through this journey. Our clients’ journeys span from cloud strategy to implementation, migration of legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte’s Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed.
You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Services (GCP), programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies and develop skills that will improve your brand and career as a well-rounded cutting-edge technologist.

Work you’ll do: As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more. The key responsibilities may involve some or all of the areas listed below:
- Act as a trusted technical advisor to customers and solve complex Big Data challenges.
- Create and deliver best-practices recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
- Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications - Technical Requirements:
- BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience
- Experience in Cloud SQL and Cloud Bigtable
- Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics
- Experience in Google Transfer Appliance, Cloud Storage Transfer Service, BigQuery Data Transfer
- Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume)
- Experience working with technical customers
- Experience in writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript

Consulting Requirements:
- 6-9 years of relevant consulting, industry or technology experience
- Strong problem-solving and troubleshooting skills
- Strong communicator
- Willingness to travel in case of project requirements

Preferred Qualifications:
- Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments
- Experience in technical consulting
- Experience architecting, developing software, or building internet-scale production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have)
- Experience working with big data, information retrieval, data mining or machine learning, as well as experience in building multi-tier high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow)
- Working knowledge of ITIL and/or agile methodologies

Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300079
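The MapReduce model named in the technical requirements above can be sketched in a few lines of pure Python: a map phase emitting key-value pairs, a shuffle that groups values by key, and a reduce phase that folds each group. Frameworks like Hadoop distribute exactly these three phases across machines; the word-count example below is the classic illustration.

```python
from collections import defaultdict
from functools import reduce

def map_phase(docs):
    """Map: emit (word, 1) for every word in every document."""
    for doc in docs:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: fold each key's values down to a single count."""
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

docs = ["the quick fox", "the lazy dog", "fox and dog"]
counts = reduce_phase(shuffle(map_phase(docs)))
```

Because the reduce function is associative, each phase can run in parallel over partitions of the data, which is what makes the model scale on a cluster.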
Posted 5 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary

Strategy & Analytics - AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
- Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
- Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients, and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate through this journey. Our clients’ journeys span from cloud strategy to implementation, migration of legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte’s Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed.
You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in a hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you’ll do
As a GCP Data Engineer you will have multiple responsibilities depending on the project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migration and transformation projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic cloud customers. Together with the team, you will support customer implementations of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.

The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements
BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics
Experience in Google Transfer Appliance, Cloud Storage Transfer Service and BigQuery Data Transfer
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing frameworks (MapReduce, Flume).
Experience working with technical customers.
Experience writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
3-6 years of relevant consulting, industry or technology experience
Strong problem-solving and troubleshooting skills
Strong communicator
Willingness to travel as per project requirements

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytics tools and environments.
Experience in technical consulting.
Experience architecting, developing software, or building internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have)
Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
Working knowledge of ITIL and/or agile methodologies

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 300075
Posted 5 days ago
3.0 years
0 Lacs
Greater Kolkata Area
On-site
Summary

Position Summary

Strategy & Analytics: AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing as-a-service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer
Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to the cloud, they are looking for trusted partners who can help them navigate it. Our clients’ journeys span from cloud strategy to implementation, from migrating legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte’s Cloud Delivery Center supports our client project teams on this journey by delivering the new solutions through which IT services are obtained, used, and managed.
You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in a hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you’ll do
As a GCP Data Engineer you will have multiple responsibilities depending on the project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migration and transformation projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic cloud customers. Together with the team, you will support customer implementations of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.

The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements
BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics
Experience in Google Transfer Appliance, Cloud Storage Transfer Service and BigQuery Data Transfer
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing frameworks (MapReduce, Flume).
Experience working with technical customers.
Experience writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
3-6 years of relevant consulting, industry or technology experience
Strong problem-solving and troubleshooting skills
Strong communicator
Willingness to travel as per project requirements

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytics tools and environments.
Experience in technical consulting.
Experience architecting, developing software, or building internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have)
Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
Working knowledge of ITIL and/or agile methodologies

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 300075
Posted 5 days ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Summary

Position Summary

Strategy & Analytics: AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing as-a-service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer
Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to the cloud, they are looking for trusted partners who can help them navigate it. Our clients’ journeys span from cloud strategy to implementation, from migrating legacy applications to supporting operations of a cloud ecosystem, and everything in between. Deloitte’s Cloud Delivery Center supports our client project teams on this journey by delivering the new solutions through which IT services are obtained, used, and managed.
You will be working with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in a hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you’ll do
As a GCP Data Engineer you will have multiple responsibilities depending on the project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migration and transformation projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic cloud customers. Together with the team, you will support customer implementations of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.

The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications

Technical Requirements
BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics
Experience in Google Transfer Appliance, Cloud Storage Transfer Service and BigQuery Data Transfer
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and data processing frameworks (MapReduce, Flume).
Experience working with technical customers.
Experience writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
6-9 years of relevant consulting, industry or technology experience
Strong problem-solving and troubleshooting skills
Strong communicator
Willingness to travel as per project requirements

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytics tools and environments.
Experience in technical consulting.
Experience architecting, developing software, or building internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have)
Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
Working knowledge of ITIL and/or agile methodologies

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Requisition code: 300079
Posted 5 days ago