3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Flutter Developer
Experience: 1–3 Years
Work Location: CHN/CBE/MDU | Hybrid mode

Job Summary: We are looking for a Flutter Developer (L1) to join our team and contribute to building high-quality mobile applications for Android and iOS. The ideal candidate should have basic knowledge of Flutter, Dart, and mobile development concepts, and should be eager to learn, adapt, and collaborate with cross-functional teams.

Key Responsibilities:
- Develop and maintain mobile applications using Flutter and Dart.
- Write clean, maintainable, and efficient code following best practices.
- Assist in UI/UX implementation based on provided designs (Figma, etc.).
- Work with REST APIs and third-party libraries for app functionality.
- Debug and troubleshoot issues reported by testers or end users.
- Collaborate with senior developers to improve application performance.
- Participate in code reviews, documentation, and learning sessions.
- Ensure applications are optimized for low network conditions and offline usage (if required).

Required Skills:
- Basic understanding of Flutter and Dart.
- Familiarity with state management (Provider, Riverpod, Bloc, etc.).
- Experience with Firebase, REST APIs, and JSON parsing.
- Basic knowledge of Git/GitHub version control.
- Understanding of mobile app lifecycles and debugging tools.
- Familiarity with basic UI/UX principles and responsive design.
- Strong problem-solving skills and eagerness to learn new technologies.
- Knowledge of Android (Kotlin/Java) or iOS (Swift).
- Experience with database management (SQLite, Hive, SharedPreferences, etc.).
- Exposure to Flutter Web or desktop development.
- Experience with CI/CD pipelines for Flutter apps.
Posted 6 days ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
The Data Architect is responsible for defining and leading Data Architecture, Data Quality, and Data Governance, and for ingesting, processing, and storing millions of rows of data per day. This hands-on role helps solve real big data problems. You will work with our product, business, and engineering stakeholders, understand our current ecosystems, and then build consensus to design solutions, write code and automation, define standards, establish best practices across the company, and build world-class data solutions and applications that power crucial business decisions throughout the organization. We are looking for an open-minded, structured thinker passionate about building systems at scale.

Role:
- Design, implement, and lead Data Architecture, Data Quality, and Data Governance.
- Define data modeling standards and foundational best practices.
- Develop and evangelize data quality standards and practices.
- Establish data governance processes, procedures, policies, and guidelines to maintain the integrity and security of the data.
- Drive the successful adoption of organizational data utilization and self-serviced data platforms.
- Create and maintain critical data standards and metadata that allow data to be understood and leveraged as a shared asset.
- Develop standards and write template code for sourcing, collecting, and transforming data for streaming or batch processing.
- Design data schemas, object models, and flow diagrams to structure, store, process, and integrate data.
- Provide architectural assessments, strategies, and roadmaps for data management.
- Apply hands-on subject matter expertise in the architecture and administration of big data platforms and data lake technologies (AWS S3/Hive), with experience in ML and data science platforms.
- Implement and manage industry best-practice tools and processes such as Data Lake, Databricks, Delta Lake, S3, Spark ETL, Airflow, Hive Catalog, Redshift, Kafka, Kubernetes, Docker, and CI/CD (see the Airflow sketch below).
- Translate big data and analytics requirements into data models that operate at large scale and high performance, and guide the data analytics engineers on these models.
- Define templates and processes for the design and analysis of data models, data flows, and integration.
- Lead and mentor Data Analytics team members in best practices, processes, and technologies in data platforms.

Qualifications:
- B.S. or M.S. in Computer Science, or an equivalent degree.
- 10+ years of hands-on experience in data warehousing, ETL, data modeling, and reporting.
- 7+ years of hands-on experience productionizing and deploying big data platforms and applications.
- Hands-on experience with relational/SQL, distributed columnar data stores/NoSQL databases, time-series databases, Spark Streaming, Kafka, Hive, Delta, Parquet, Avro, and more.
- Extensive experience understanding a variety of complex business use cases and modeling the data in the data warehouse.
- Highly skilled in SQL, Python, Spark, AWS S3, Hive Data Catalog, Parquet, Redshift, Airflow, and Tableau or similar tools.
- Proven experience building a custom enterprise data warehouse or implementing tools like data catalogs, Spark, Tableau, Kubernetes, and Docker.
- Knowledge of infrastructure requirements such as networking, storage, and hardware optimization, with hands-on Amazon Web Services (AWS) experience.
- Strong verbal and written communication skills; must work effectively across internal and external organizations and virtual teams.
- Demonstrated industry leadership in data warehousing, data science, and big data technologies.
- Strong understanding of distributed systems and container-based development using the Docker and Kubernetes ecosystem.
- Deep knowledge of data structures and algorithms.
- Experience working in large teams using CI/CD and agile methodologies.
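As a rough illustration of the Airflow-orchestrated Spark ETL work this posting describes, a minimal DAG might look like the sketch below. The DAG ID, job script path, and connection ID are hypothetical, not taken from the posting.

```python
# Minimal Airflow DAG sketch: schedules a daily Spark ETL job. All names and
# paths are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_sales_etl",                 # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="transform_raw_to_curated",
        application="/opt/jobs/transform_sales.py",  # assumed job script
        conn_id="spark_default",
        conf={"spark.sql.catalogImplementation": "hive"},
    )
```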
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Marsh McLennan is seeking candidates for the following position based in the Pune office.

Senior Engineer/Principal Engineer

What can you expect?
We are seeking a skilled Data Engineer with 3 to 5 years of hands-on experience in building and optimizing data pipelines and architectures. The ideal candidate will have expertise in Spark, AWS Glue, AWS S3, Python, complex SQL, and AWS EMR.

What is in it for you?
- Holidays (as per the location)
- Medical and insurance benefits (as per the location)
- Shared transport (provided the address falls in the service zone)
- Hybrid way of working
- Diversify your experience and learn new skills
- Opportunity to work with stakeholders globally to learn and grow

We will count on you to:
- Design and implement scalable data solutions that support our data-driven decision-making processes (see the AWS Glue sketch below).

What you need to have:
- SQL and RDBMS knowledge (5/5); Postgres. Extensive hands-on experience with database systems: tables, schemas, views, materialized views.
- AWS knowledge: core and data engineering services, with Glue, Lambda, EMR, DMS, and S3 in focus.
- ETL knowledge: any ETL tool, preferably Informatica.
- Data warehousing.
- Big data: Hadoop concepts; Spark (3/5); Hive (5/5).
- Python/Java.
- Interpersonal skills: excellent communication skills and team-lead capabilities.
- Understanding of data systems in large organizational setups.
- Passion for deep diving into data and delivering value out of it.

What makes you stand out?
- Databricks knowledge.
- Any reporting tool experience, preferably MicroStrategy.

Marsh McLennan (NYSE: MMC) is the world’s leading professional services firm in the areas of risk, strategy and people. The Company’s more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one “anchor day” per week on which their full team will be together in person.

R_299578
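Since the posting centers on Spark, AWS Glue, and S3, here is a hedged sketch of a minimal Glue job in PySpark; the database, table, and bucket names are placeholders, not details from the posting.

```python
# AWS Glue job sketch: read a cataloged table, filter it, write Parquet to S3.
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())

# Read from the Glue Data Catalog (assumed database/table names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Convert to a Spark DataFrame for SQL-style transforms.
df = dyf.toDF().filter("order_status = 'COMPLETE'")

df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
```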
Posted 6 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
You Lead the Way. We’ve Got Your Back. At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible — and we’re proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture.

We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.

How will you make an impact in this role? There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:
- As part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities.
- Develop software in our technology stack, which is constantly evolving but currently includes big data, Spark, Python, Scala, GCP, and the Adobe suite (such as Customer Journey Analytics).
- Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps.
- Create technical solution designs to meet business requirements.
- Define best practices to be followed by the team.
- Take your place as a core member of an Agile team driving the latest development practices.
- Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods.
- Suggest and recommend solution architecture to resolve business problems.
- Perform peer code reviews and participate in technical discussions with the team on the best possible solutions.

As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology at #TeamAmex.

Minimum Qualifications:
- BS or MS degree in computer science, computer engineering, or another technical discipline, or equivalent work experience.
- 5+ years of hands-on software development experience with big data and analytics solutions: Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP (BigQuery, Bigtable), Airflow.
- Working knowledge of the Adobe suite, such as Adobe Experience Platform, Adobe Customer Journey Analytics, and CDP.
- Proficiency in SQL and database systems, with experience designing and optimizing data models for performance and scalability.
- Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable (see the streaming sketch below).
- Experience designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using big data and GCP technologies.
- Certification on a cloud platform (GCP Professional Data Engineer) is a plus.
- Understanding of distributed (multi-tiered) systems, data structures, algorithms, and design patterns.
- Strong object-oriented programming skills and design patterns.
- Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven).
- Good knowledge of and experience with configuration management tools like GitHub.
- Ability to analyze complex data engineering problems, propose effective solutions, and implement them.
- Looks proactively beyond the obvious for continuous improvement opportunities.
- Communicates effectively with product and cross-functional teams.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
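For the Kafka and real-time ETL experience the qualifications mention, a minimal Spark Structured Streaming pipeline might look like this sketch; the broker, topic, schema, and paths are assumptions for illustration.

```python
# Structured Streaming sketch: Kafka -> parsed JSON -> Parquet sink.
# Requires the spark-sql-kafka connector on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-etl-sketch").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("customer_id", StringType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "marketing-events")           # assumed topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/curated/events")            # assumed sink path
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
```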
Posted 6 days ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 5+ years | Notice Period: Immediate to 15 days | Rounds: 3 (virtual)
Mandatory Skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role
- Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
- Constructing infrastructure for efficient ETL processes from various sources and storage systems.
- Leading the implementation of algorithms and prototypes to transform raw data into useful information.
- Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
- Creating innovative data validation methods and data analysis tools (see the validation sketch below).
- Ensuring compliance with data governance and security policies.
- Interpreting data trends and patterns to establish operational alerts.
- Developing analytical tools, programs, and reporting mechanisms.
- Conducting complex data analysis and presenting results effectively.
- Preparing data for prescriptive and predictive modeling.
- Continuously exploring opportunities to enhance data quality and reliability.
- Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements
- Experience with big data technologies (Hadoop, Spark, NiFi, Impala).
- 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
- High proficiency in Scala/Java and Spark for applied large-scale data processing.
- Expertise with big data technologies, including Spark, Data Lake, and Hive.
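As a sketch of the data-validation responsibility above, a Spark pipeline step might gate publication on simple quality checks; the table and column names are hypothetical. (The posting lists Scala as a mandate skill; PySpark is used here purely for illustration.)

```python
# Validation sketch: null and row-count checks before publishing a Hive table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

df = spark.table("staging.orders")  # hypothetical staging table

row_count = df.count()
null_keys = df.filter(col("order_id").isNull()).count()

if row_count == 0 or null_keys > 0:
    raise ValueError(f"Validation failed: rows={row_count}, null keys={null_keys}")

df.write.mode("overwrite").saveAsTable("curated.orders")  # publish on success
```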
Posted 6 days ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us: Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm’s mission is to serve half a billion Indians and bring them into the mainstream economy with the help of technology.

About the role: Evangelize and demonstrate the value and impact of analytics for informed business decision-making by developing and deploying analytical solutions and providing data-driven insights to business stakeholders to understand and solve various business nuances.

Responsibilities:
1. Work closely with Product and Business stakeholders to empower data-driven decision-making and generate insights that will help grow the key metrics.
2. Write SQL/HIVE queries for data mining (see the sketch below).
3. Perform deep data analysis in MS Excel and share regular actionable insights.
4. Perform data-driven analytics to generate business insights.
5. Automate regular reports/MIS using tools like HIVE and Google Data Studio, coordinating with different teams.
6. Follow up closely with concerned teams to make sure our business and financial metrics are met.
7. Look at data from various cuts/cohorts to suggest insights - analysis based on multiple cohorts: transactions, GMV, revenue, gross margin, users, etc., for both offline and online payments.

Mandatory Technical Skills:
1. Distinctive problem-solving and analysis skills, combined with impeccable business judgment.
2. Proficiency in SQL/HIVE/data mining and business analytics; proficiency in Microsoft Excel.
3. Ability to derive business insights from data with a focus on driving business-level metrics.

Eligibility Criteria:
1. Minimum 2 years of experience as a Data Analyst / Business Analyst.
2. Ability to interact with and convince business stakeholders.
3. Hands-on with SQL (sub-queries and complex queries), Excel / Google Sheets, and data visualization tools (Looker Studio, Power BI).
4. Ability to combine structured and unstructured data.
5. Has worked on large datasets of the order of 5 million.
6. Experimentative mindset with attention to detail.

Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it. India's largest digital lending story is brewing here. It’s your opportunity to be a part of the story!
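The kind of SQL/HIVE data-mining query described in the responsibilities might, as a rough sketch, look like the cohort rollup below, run through PySpark; the table and column names (payments.transactions, txn_date, gmv, user_id) are invented for illustration.

```python
# Cohort sketch: monthly active users, transaction counts, and GMV from a
# hypothetical Hive table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

monthly = spark.sql("""
    SELECT date_format(txn_date, 'yyyy-MM') AS txn_month,
           COUNT(DISTINCT user_id)          AS active_users,
           COUNT(*)                         AS transactions,
           SUM(gmv)                         AS gmv
    FROM payments.transactions
    GROUP BY date_format(txn_date, 'yyyy-MM')
    ORDER BY txn_month
""")
monthly.show()
```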
Posted 6 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description: Coders Brain is a global leader in IT services, digital and business solutions that partners with clients to simplify, strengthen, and transform their businesses. The company ensures high levels of certainty and satisfaction through deep industry expertise and a global network of innovation and delivery centers.

Job Title: Senior Data Engineer
Location: Hyderabad
Experience: 6+ Years
Employment Type: Full-Time

Job Summary: We are looking for a highly skilled Senior Data Engineer to join our Data Engineering team. You will play a key role in designing, implementing, and optimizing robust, scalable data solutions that drive business decisions for our clients. This position involves hands-on development of data pipelines, cloud data platforms, and analytics tools using cutting-edge technologies.

Key Responsibilities:
- Design and build reliable, scalable, and high-performance data pipelines to ingest, transform, and store data from various sources.
- Develop cloud-based data infrastructure using platforms such as AWS, Azure, or Google Cloud Platform (GCP).
- Optimize data processing and storage frameworks for cost efficiency and performance.
- Ensure high standards for data quality, integrity, and governance across all systems.
- Collaborate with cross-functional teams including data scientists, analysts, and product managers to translate requirements into technical solutions.
- Troubleshoot and resolve issues with data pipelines and workflows, ensuring system reliability and availability.
- Stay current with emerging trends and technologies in big data and cloud ecosystems and recommend improvements accordingly.

Required Qualifications:
- Bachelor’s degree in Computer Science, Software Engineering, or a related field.
- Minimum 6 years of professional experience in data engineering or a related discipline.
- Proficiency in Python, Java, or Scala for data engineering tasks.
- Strong expertise in SQL and hands-on experience with modern data warehouses (e.g., Snowflake, Redshift, BigQuery).
- In-depth knowledge of big data technologies such as Hadoop, Spark, or Hive.
- Practical experience with cloud-based data platforms such as AWS (e.g., Glue, EMR), Azure (e.g., Data Factory, Synapse), or GCP (e.g., Dataflow, BigQuery).
- Excellent analytical, problem-solving, and communication skills.

Nice to Have:
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with CI/CD pipelines for data workflows.
- Knowledge of data governance, security, and compliance best practices.
Posted 6 days ago
0.0 - 5.0 years
0 Lacs
Panchkula, Haryana
On-site
Job Title: Software Developer (Experience: 5 Years)
Company Website: https://elitewebtechnologies.com/
Location: MDC, Panchkula
Job Type: Full-Time
Experience Required: 5 Years

Job Description: We are seeking a highly skilled and experienced Software Developer with over 5 years of hands-on experience in React JS, React Native, TypeScript, and Flutter. The ideal candidate should have a strong background in front-end and mobile technologies, leadership experience, and a passion for modern, scalable, and performance-driven development. Experience with blockchain and offline-first mobile development is a strong plus.

Core Skills & Requirements
- 5+ years of experience with React JS, React Native, and Flutter
- Strong command of TypeScript, JavaScript (ES6+), and Dart
- Proficient in Redux, Tailwind CSS, and component-based architecture
- Understanding of native Android/iOS development basics
- Ability to lead, mentor, and collaborate in agile environments
- Good to have: CodePush, React Navigation, Firebase Messaging, App Store deployment, Play Store optimization

Mobile-First & Offline-Capable Development
- Expertise in building offline-first mobile applications
- Familiar with Realm, Couchbase Lite, SQLite, Firebase
- Good to have: data synchronization, conflict resolution, encrypted local storage, mobile-first UX design

Architecture & State Management
- Strong knowledge of modern app architecture and design patterns
- Experience with state management systems such as Flux, Redux, MobX, MVVM, BLoC, Cubit, Provider, Riverpod
- Good to have: multi-layered architecture, modular design, clean architecture practices

Professional Traits & Leadership
- Self-motivated, innovative, and result-oriented
- Demonstrated ability to lead teams, manage code quality, and oversee project delivery
- Skilled at mediation and conflict resolution
- Strong communication and mentorship abilities
- Good to have: Scrum, Agile methodologies, sprint planning, peer reviews, technical documentation

Domain Expertise
- Proven experience building applications for e-commerce, social networking, fitness and healthcare, and blockchain-based platforms (preferred)
- Good to have: DApps, smart contracts (Solidity), NFTs, crypto wallets, HIPAA-compliant applications

Databases & Storage
- Practical experience with SQLite, Realm, Hive, Couchbase Lite, MongoDB
- Good to have: database indexing, performance tuning, real-time sync

Tools & Platforms
- Comfortable using Jenkins, JIRA, VS Code, Confluence, Git, Xcode, Android Studio, Slack, Fastlane, CircleCI, MS App Center
- Good to have: CI/CD pipelines, build automation, crash reporting tools, test automation tools

External SDKs & Integrations
- Experience integrating Facebook, Twitter, LinkedIn, Google, Stripe, PayPal, RazorPay, Quickblox, OpenTok, Agora, SendBird, App Annie, Amplitude Analytics, Google Maps, HERE Maps, SignalR, Pub-Sub, Socket.IO, AWS Amplify, WebRTC, Dialogflow, IBM Watson, Google Vision APIs, AppsFlyer, GrowthRx
- Good to have: custom SDKs, real-time communication, chat/video APIs, analytics, third-party service orchestration

Ready to accelerate your software development journey with us?
Apply now by sending your CV to: hr@elitewebtechnologies.com
For queries, contact: +91 91151 52125

Job Type: Full-time
Pay: Up to ₹1,200,000.00 per year
Schedule: Day shift, fixed shift
Ability to commute/relocate: Panchkula, Haryana: Reliably commute or planning to relocate before starting work (Required)
Experience: React Native: 5 years (Required); Software development: 5 years (Required)
Location: Panchkula, Haryana (Required)
Work Location: In person
Posted 6 days ago
2.0 - 6.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY Consulting - Cloud Testing: Staff

The opportunity
As a Cloud Test Engineer, you will be responsible for testing cloud solutions on a cloud platform and ensuring the quality of deliverables. You will work closely with the Test Lead for the projects under test. Testing proficiency in cloud and knowledge of at least one cloud platform (AWS/Azure/GCP) are required for this position. Experience with CI/CD platforms, cloud foundations, and cloud data platforms is an added advantage.

Skills and attributes for success
- Delivery of testing needs for cloud projects.
- Ability to communicate effectively with team members across geographies.
- Experience in cloud infrastructure testing.
- Sound cloud concepts and the ability to suggest options.
- Knowledge of any of the cloud platforms (AWS/Azure/GCP).
- Knowledge of Azure DevOps / Jenkins / pipelines.
- Thorough understanding of requirements and ability to provide feedback on them.
- Develop test strategies for cloud projects covering aspects such as platform testing, application testing, integration testing, and UAT as needed.
- Provide inputs for test planning aligned with the test strategy.
- Perform test case design and identify opportunities for test automation.
- Develop test cases, both manual and automation scripts, as required (see the pytest sketch below).
- Ensure test readiness (test environment, test data, tool licenses, etc.).
- Perform test execution and report progress.
- Report defects and liaise with development and other relevant teams for defect resolution.
- Prepare test reports and provide inputs to the Test Lead for test sign-off/closure.
- Provide support in project meetings/calls with clients for status reporting.
- Provide inputs on test metrics to the Test Lead; support analysis of metric trends and implement improvement actions as necessary.
- Handle changes and conduct regression testing.
- Generate test summary reports.
- Coordinate test team members and the development team.
- Interact with client-side contacts to solve issues and update status.
- Actively take part in providing Analytics and Advanced Analytics testing trainings in the company.

To qualify for the role, you must have
- BE/BTech/MCA/M.Sc.
- Overall 2 to 6 years of experience in testing cloud solutions, with a minimum of 2 years of experience in any of the cloud solutions built on Azure/AWS/GCP.
- Certifications in the cloud area are desirable.
- Exposure to Spark SQL / HiveQL testing is desirable.
- Exposure to data migration projects from on-premise to cloud platforms is desirable.
- Understanding of business intelligence concepts, architecture, and building blocks in areas such as ETL processing, data warehouses, dashboards, and analytics.
- Working experience in scripting languages such as Python, JavaScript, and Java.
- Testing experience in more than one of these areas: cloud foundation, DevOps, data quality, ETL, OLAP, reports.
- Exposure to SQL Server or Oracle databases and proficiency with SQL scripting.
- Exposure to backend testing of enterprise applications/systems built on different platforms, including Microsoft .NET and SharePoint technologies.
- Exposure to ETL testing using commercial ETL tools is desirable.
- Knowledge of or experience in SSRS (SQL Server Reporting Services), Spotfire, and SSIS is desirable.
- Exposure to data transformation projects, database design concepts, and white-box testing is desirable.

Ideally, you’ll also have
- Experience or exposure to test automation; scripting experience in Perl and shell is desirable.
- Experience with test management and defect management tools, preferably HP ALM or JIRA.
- Ability to contribute as an individual contributor and, when required, lead a small team.
- Ability to create test strategies and test plans for testing cloud applications/solutions that range from moderately complex to high-risk systems.
- Ability to design test cases and test data, and perform test execution and reporting.
- Ability to perform test management for small projects as and when required.
- Participation in defect triaging and tracking defects to resolution/conclusion.
- Good communication skills (both written and verbal).
- Good understanding of SDLC, and of the test process in particular.
- Good analytical, problem-solving, and troubleshooting skills.
- Good understanding of the project life cycle and test life cycle.
- Exposure to CMMi and process improvement frameworks is a plus.
- Excellent communication skills; able to articulate concisely and clearly.
- Readiness to play an individual contributor as well as a team leader role.

What working at EY offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
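As a hedged example of the ETL and data-migration testing this role covers, a pytest check might compare a migrated cloud table against its on-premise source; the table names and key column are assumptions.

```python
# ETL test sketch: row-count parity and primary-key uniqueness after migration.
import pytest
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

@pytest.fixture
def tables():
    source = spark.table("onprem_mirror.customers")  # hypothetical source copy
    target = spark.table("cloud_dw.customers")       # hypothetical migrated table
    return source, target

def test_row_counts_match(tables):
    source, target = tables
    assert source.count() == target.count()

def test_primary_key_unique(tables):
    _, target = tables
    assert target.count() == target.select("customer_id").distinct().count()
```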
Posted 6 days ago
4.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly skilled, hands-on Data Engineer to join Controls Technology and support the design, development, and implementation of our next-generation Data Mesh and hybrid cloud architecture. This role is critical in building scalable, resilient, and future-proof data pipelines and infrastructure that enable the seamless integration of Controls Technology data within a unified platform. The Data Engineer will work closely with the Data Mesh and Cloud Architect Lead to implement data products, ETL/ELT pipelines, hybrid cloud integrations, and governance frameworks that support data-driven decision-making across the enterprise.

Key Responsibilities:
- Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines for structured and unstructured data. Develop real-time and batch data ingestion pipelines using distributed data processing frameworks. Ensure pipelines are highly performant, cost-efficient, and secure.
- Apache Iceberg & Starburst Integration: Work extensively with Apache Iceberg for data lake storage optimization and schema evolution (see the Iceberg sketch below). Manage Iceberg catalogs and ensure seamless integration with query engines. Configure and maintain Hive Metastore (HMS) for Iceberg-backed tables and ensure proper metadata management. Utilize Starburst and Stargate to enable distributed SQL-based analytics and seamless data federation. Optimize performance tuning for large-scale querying and federated access to structured and semi-structured data.
- Data Mesh Implementation: Implement Data Mesh principles by developing domain-specific data products that are discoverable, interoperable, and governed. Collaborate with data domain owners to enable self-service data access while ensuring consistency and quality.
- Hybrid Cloud Data Integration: Develop and manage data storage, processing, and retrieval solutions across AWS and on-premise environments. Work with cloud-native tools such as AWS S3, RDS, Lambda, Glue, Redshift, and Athena to support scalable data architectures. Ensure hybrid cloud data flows are optimized, secure, and compliant with organizational standards.
- Data Governance & Security: Implement data governance, lineage tracking, and metadata management solutions. Enforce security best practices for data encryption, role-based access control (RBAC), and compliance with policies such as GDPR and CCPA.
- Performance Optimization & Monitoring: Monitor and optimize data workflows, query performance, and resource utilization. Implement logging, alerting, and monitoring solutions using CloudWatch, Prometheus, or Grafana to ensure system health.
- Collaboration & Documentation: Work closely with data architects, application teams, and business units to ensure seamless integration of data solutions. Maintain clear documentation of data models, transformations, and architecture for internal reference and governance.

Required Technical Skills:
- Programming & Scripting: Strong proficiency in Python, SQL, and shell scripting. Experience with Scala or Java is a plus.
- Data Processing & Storage: Hands-on experience with Apache Spark, Kafka, Flink, or similar distributed processing frameworks. Strong knowledge of relational (PostgreSQL, MySQL, Oracle) and NoSQL databases (DynamoDB, MongoDB). Expertise in Apache Iceberg for managing large-scale data lakes, schema evolution, and ACID transactions. Experience working with Iceberg catalogs, Hive Metastore (HMS), and integrating Iceberg-backed tables with query engines. Familiarity with Starburst and Stargate for federated querying and cross-platform data access.
- Cloud & Hybrid Architecture: Experience working with AWS data services (S3, Redshift, Glue, Athena, EMR, RDS). Understanding of hybrid data storage and integration between on-premise and cloud environments.
- Infrastructure as Code (IaC) & DevOps: Experience with Terraform, AWS CloudFormation, or Kubernetes for provisioning infrastructure. CI/CD pipeline experience using GitHub Actions, Jenkins, or GitLab CI/CD.
- Data Governance & Security: Familiarity with data cataloging, lineage tracking, and metadata management. Understanding of RBAC, IAM roles, encryption, and compliance frameworks (GDPR, SOC 2, etc.).

Required Soft Skills:
- Problem-Solving & Analytical Thinking: Ability to troubleshoot complex data issues and optimize workflows.
- Collaboration & Communication: Comfortable working with cross-functional teams and articulating technical concepts to non-technical stakeholders.
- Ownership & Proactiveness: Self-driven, detail-oriented, and able to take ownership of tasks with minimal supervision.
- Continuous Learning: Eager to explore new technologies, improve skill sets, and stay ahead of industry trends.

Qualifications:
- 4-6 years of experience in data engineering, cloud infrastructure, or distributed data processing.
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Technology, or a related field.
- Hands-on experience with data pipelines, cloud services, and large-scale data platforms.
- Strong foundation in SQL, Python, Apache Iceberg, Starburst, cloud-based data solutions (AWS preferred), and Apache Airflow orchestration.

Job Family Group: Technology
Job Family: Data Architecture
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
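To make the Iceberg schema-evolution point concrete, here is a minimal sketch from Spark SQL, assuming an Iceberg catalog named "lake" is already configured on the session; the table and column names are illustrative.

```python
# Iceberg sketch: create a table, evolve its schema in place, insert a row.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.controls.positions (
        position_id BIGINT,
        book        STRING,
        notional    DOUBLE
    ) USING iceberg
""")

# Iceberg supports in-place schema evolution: adding a column is a metadata
# change and does not rewrite existing data files.
spark.sql("ALTER TABLE lake.controls.positions ADD COLUMN currency STRING")

spark.sql("""
    INSERT INTO lake.controls.positions
    VALUES (1, 'rates', 1000000.0, 'USD')
""")
```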
Posted 6 days ago
12.0 - 20.0 years
35 - 60 Lacs
Mumbai
Work from Office
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Join the innovative team at Kyndryl as a Client Technical Solutioner and unlock your potential to shape the future of technology solutions. As a key player in our organization, you will embark on an exciting journey where you get to work closely with customers, understand their unique challenges, and provide them with cutting-edge technical solutions and services. Picture yourself as a trusted advisor – collaborating directly with customers to unravel their business needs, pain points, and technical requirements. Your expertise and deep understanding of our solutions will empower you to craft tailored solutions that address their specific challenges and drive their success.

Your role as a Client Technical Solutioner is pivotal in developing domain-specific solutions for our cutting-edge services and offerings. You will be at the forefront of crafting tailored domain solutions and cost cases for both simple and complex, long-term opportunities, demonstrating that we meet our customers' requirements while helping them overcome their business challenges. At Kyndryl, we believe in the power of collaboration, and your expertise will be essential in supporting our Technical Solutioning and Solutioning Managers during customer technology and business discussions, even at the highest levels of Business/IT Director/LOB. You will have the chance to demonstrate the value of our solutions and products, effectively communicating their business and technical benefits to decision makers and customers.

In this role, you will thrive as you create innovative technical solutions that align with industry trends and exceed customer expectations. Your ability to collaborate seamlessly with internal stakeholders will enable you to gather the necessary documents and technical insights to deliver compelling bid submissions. Not only will you define winning cost models for deals, but you will also lead these deals to profitability, ensuring the ultimate success of both our customers and Kyndryl. You will play an essential role in contract negotiations, up to the point of signature, and facilitate a smooth engagement hand-over process. As the primary source of engagement management and solution design within your technical domain, you will compile, refine, and take ownership of final solution documents. Your technical expertise will shine through as you present these documents in a professional and concise manner, showcasing your mastery of the subject matter.

You’ll have the opportunity to contribute to the growth and success of Kyndryl by standardizing our go-to-market pitches across various industries. By creating differentiated propositions that align with market requirements, you will position Kyndryl as a leader in the industry, opening new avenues of success for our customers and our organization. Join us as a Client Technical Solutioner at Kyndryl and unleash your potential to shape the future of technical solutions while enjoying a stimulating and rewarding career journey filled with innovation, collaboration, and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
- 10 – 15 years of experience (Specialist Seller / Consultant) is a must, with 3 – 4 years of relevant experience in Gen AI / Agentic AI.
- Proven past experience in analytics.
- Real-world experience in the design and implementation of scalable, fault-tolerant, and secure architectures for analytics on at least one of the major hyperscalers (AWS / Azure / GCP).
- Excellent communication skills to engage with clients and influence decisions.
- High level of competence in preparing architectural documentation and presentations.
- Must be organized, self-sufficient, and able to manage multiple initiatives simultaneously.
- Must have the ability to coordinate with other teams and vendors independently.
- Deep knowledge of services offerings and technical solutions in a practice.
- Demonstrated experience translating distinctive technical knowledge into actionable customer insights and solutions.
- Prior consultative selling experience.
- Externally recognized as an expert in the technology and/or solutioning areas, including technical certifications supporting subdomain focus area(s).
- Responsible for prospecting and qualifying leads, and doing the relevant product/market research independently in response to a customer's requirement or pain point.
- Advising and shaping client requirements to produce high-level designs and technical solutions in response to opportunities and requirements from customers and partners.
- Work with both internal and external stakeholders to identify business requirements and develop solutions to meet those requirements / build the opportunity.
- Understand and analyze the application requirements in client RFPs.
- Design software applications based on the requirements, within specified architectural guidelines and constraints.
- Lead, design, and implement proofs of concept and pilots to demonstrate the solution to clients and prospects.

Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.
Posted 6 days ago
5.0 years
0 Lacs
Karnataka, India
On-site
Who You’ll Work With
The Senior Data Analyst will work with the Data and Artificial Intelligence team at Nike, which drives the enterprise-wide data needs that fuel Nike's innovation. This role is crucial in translating the business needs of Nike into data requirements, and thereby has a significant impact on the growth of Nike's business. This role will fuel the foundational data layers that will power Nike's advanced data analytics.

Who We Are Looking For
We are looking for individuals who are highly driven and have the ability to understand and translate business requirements into data needs. Candidates should be good at problem solving and have in-depth technical knowledge of SQL and big data, with optional expertise in PySpark. They need to have excellent verbal and written communication and should be willing to work with business consumers to understand their needs and requirements.

Role requirements include:
- A minimum of a bachelor’s degree in computer science or information science engineering.
- 5+ years of hands-on experience in the data and analytics space.
- Very high expertise in SQL, with the ability to work on platforms like Databricks, Hive, and Snowflake.
- Ability to integrate and communicate moderately complex information, sometimes to audiences who are not familiar with the subject matter; acts as a resource to teammates.
- Ability to integrate complex datasets and derive business value out of data.
- Independently utilizes knowledge, skills, and abilities to identify areas of opportunity, resolve complex problems, and navigate solutions.

What You’ll Work On
In this role you'll be working with a team of talented data engineers, product managers, and data consumers who focus on the enterprise-wide data needs of Nike. You'll have a direct impact on the deliverables of the team, and you'll be guiding the team on solving complex business problems. Some of your day-to-day activities will include:
- Collaborating with engineers, product managers, and business users for optimal usage of data
- Understanding business use cases using data
- Analysing data to inform business decisions
- Troubleshooting complex data integration problems at a business level
- Writing and enhancing complex queries in Databricks, Hive, and Snowflake
- Providing inputs to product management on growing the foundational data layers
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description
Please note: even though the GPP mentions Remote, this is a hybrid role.

Key Responsibilities
- Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured).
- Continuously monitor and troubleshoot data quality and integrity issues.
- Implement data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Develop reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms using ETL/ELT tools or scripting languages.
- Develop physical data models and implement data storage architectures as per design guidelines.
- Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participate in testing and troubleshooting of data pipelines.
- Develop and operate large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., data lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB).
- Use agile development technologies, such as DevOps, Scrum, Kanban, and continuous improvement cycles, for data-driven applications.

Qualifications
- College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience required.
- This position may require licensing for compliance with export controls or sanctions regulations.

Competencies
- System Requirements Engineering: Translate stakeholder needs into verifiable requirements and establish acceptance criteria.
- Collaborates: Build partnerships and work collaboratively with others to meet shared objectives.
- Communicates Effectively: Develop and deliver multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer Focus: Build strong customer relationships and deliver customer-centric solutions.
- Decision Quality: Make good and timely decisions that keep the organization moving forward.
- Data Extraction: Perform ETL activities from various sources and transform them for consumption by downstream applications and users.
- Programming: Create, write, and test computer code, test scripts, and build scripts using industry standards and tools.
- Quality Assurance Metrics: Apply measurement science to assess whether a solution meets its intended outcomes.
- Solution Documentation: Document information and solutions based on knowledge gained during product development activities.
- Solution Validation Testing: Validate configuration item changes or solutions using best practices.
- Data Quality: Identify, understand, and correct flaws in data to support effective information governance.
- Problem Solving: Solve problems using systematic analysis processes and industry-standard methodologies.
- Values Differences: Recognize the value that different perspectives and cultures bring to an organization.

Skills and Experience Needed
Must-Have:
- 3-5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python.
- Hands-on experience with Spark (Scala/PySpark) and SQL.
- Experience with Spark Streaming, Spark internals, and query optimization.
- Proficiency in Azure cloud services.
- Agile development experience.
- Unit testing of ETL.
- Experience creating ETL pipelines with ML model integration.
- Knowledge of big data storage strategies (optimization and performance).
- Critical problem-solving skills.
- Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse (see the Delta merge sketch below).
- Quick learner.

Nice-to-Have:
- Understanding of the ML lifecycle.
- Exposure to big data open-source technologies.
- Experience with Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka.
- SQL query language proficiency.
- Experience with clustered compute cloud-based implementations.
- Familiarity with developing applications requiring large file movement in a cloud-based environment.
- Exposure to agile software development.
- Experience building analytical solutions.
- Exposure to IoT technology.

Work Schedule: Most of the work will be with stakeholders in the US, with an overlap of 2-3 hours during EST hours on a need basis.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2409179
Relocation Package: Yes
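For the Delta Lake item in the must-have list, an incremental upsert on Databricks might look like this sketch; the paths and join keys are placeholders, not project details.

```python
# Delta merge sketch: upsert a new batch into a curated Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/raw/device_readings/")  # assumed landing zone

target = DeltaTable.forPath(spark, "/mnt/curated/device_readings")

(
    target.alias("t")
    .merge(updates.alias("u"), "t.device_id = u.device_id AND t.ts = u.ts")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```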
Posted 6 days ago
0.0 - 4.0 years
9 - 13 Lacs
Pune
Work from Office
Project description
You will be working in a global team that manages and performs a global technical control. You'll be joining the Assets Management team, which looks after the asset management data foundation and operates a set of in-house developed tooling. As an IT engineer you'll play an important role in ensuring the development methodology is followed, and lead technical design discussions with the architects. Our culture centers around partnership with our businesses, transparency, accountability and empowerment, and passion for the future.

Responsibilities
- Design, develop, and maintain scalable data solutions using Starburst (see the client sketch below).
- Collaborate with cross-functional teams to integrate Starburst with existing data sources and tools.
- Optimize query performance and ensure data security and compliance.
- Implement monitoring and alerting systems for data platform health.
- Stay updated with the latest developments in data engineering and analytics.

Skills
Must have:
- Bachelor's degree or Master's in a related technical field, or equivalent related professional experience.
- Prior experience as a Software Engineer applying new engineering principles to improve existing systems, including leading complex, well-defined projects.
- Strong knowledge of big data languages, including SQL, Hive, Spark/PySpark, Presto, and Python.
- Strong knowledge of big data platforms, such as the Apache Hadoop ecosystem, AWS EMR, Qubole, or Trino/Starburst.
- Good knowledge of and experience in cloud platforms such as AWS, GCP, or Azure.
- Continuous learner with the ability to apply previous experience and knowledge to quickly master new technologies.
- Demonstrates the ability to select among available technologies to implement and solve for need.
- Able to understand and design moderately complex systems.
- Understanding of testing and monitoring tools.
- Ability to test, debug, and fix issues within established SLAs.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Understanding of data governance and compliance standards.

Nice to have:
- Data Architecture & Engineering: Design and implement efficient and scalable data warehousing solutions using Azure Databricks and Microsoft Fabric.
- Business Intelligence & Data Visualization: Create insightful Power BI dashboards to help drive business decisions.

Other Languages: English C1 Advanced
Seniority: Senior
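As a rough sketch of working with Starburst from Python, the open-source trino client can run federated SQL against a cluster; the host, catalog, schema, and table here are assumptions.

```python
# Starburst/Trino client sketch: run a query and print the results.
import trino

conn = trino.dbapi.connect(
    host="starburst.example.com",  # assumed cluster host
    port=443,
    user="analyst",
    http_scheme="https",
    catalog="hive",
    schema="assets",
)

cur = conn.cursor()
cur.execute("SELECT asset_class, COUNT(*) FROM positions GROUP BY asset_class")
for asset_class, n in cur.fetchall():
    print(asset_class, n)
```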
Posted 6 days ago
6.0 - 9.0 years
4 - 8 Lacs
Gurugram
Work from Office
Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
Work with data scientists and analysts to understand data needs and create effective data workflows.
Create and maintain data storage solutions, including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data (a sketch follows this posting).
Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
Monitor and resolve data pipeline problems to guarantee consistency and availability of the data.
Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds the skills and expertise of their software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.
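The validation and cleansing responsibility above lends itself to a short illustration. Below is a minimal PySpark sketch under assumed column names and ADLS paths; it shows one possible shape of such a step, not the employer's actual pipeline.

```python
# Minimal sketch of a validation/cleansing step in PySpark;
# column names and ADLS paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cleanse").getOrCreate()

# Hypothetical landing zone on Azure Data Lake Storage.
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/events/")

cleaned = (raw
    .dropDuplicates(["event_id"])                        # remove duplicate keys
    .filter(F.col("event_ts").isNotNull())               # reject rows missing a timestamp
    .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

# Quarantine rows that fail a business rule instead of silently dropping them.
valid = cleaned.filter(F.col("amount") >= 0)
rejected = cleaned.filter((F.col("amount") < 0) | F.col("amount").isNull())

valid.write.mode("append").parquet("abfss://curated@account.dfs.core.windows.net/events/")
rejected.write.mode("append").parquet("abfss://quarantine@account.dfs.core.windows.net/events/")
```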
Posted 6 days ago
6.0 - 9.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
Work with data scientists and analysts to understand data needs and create effective data workflows.
Create and maintain data storage solutions, including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
Monitor and resolve data pipeline problems to guarantee consistency and availability of the data.
Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds the skills and expertise of their software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.
Posted 6 days ago
5.0 - 10.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that address clients' needs.
Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
Developed Python and PySpark programs for data analysis.
Good working experience using Python to develop custom frameworks for generating rules (much like a rules engine).
Developed Python code to gather data from HBase and designed solutions implemented with PySpark.
Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive context objects to perform read/write operations (see the sketch below).
Preferred technical and professional experience:
Understanding of DevOps.
Experience building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
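The last expertise item describes transformations on Spark DataFrames combined with Hive reads and writes. A minimal sketch of that pattern follows, using the modern SparkSession entry point (which subsumed HiveContext in Spark 2.x); the database and table names are invented for illustration.

```python
# Minimal sketch of DataFrame transformations over Hive tables.
# Table names are placeholders; SparkSession replaces the older HiveContext.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("hive-transform")
         .enableHiveSupport()      # gives spark.sql/spark.table access to the Hive metastore
         .getOrCreate())

orders = spark.table("sales_db.orders")          # read a Hive-managed table

# Business transformation expressed on DataFrames rather than raw HiveQL:
daily = (orders
         .filter(F.col("status") == "COMPLETE")
         .groupBy(F.to_date("order_ts").alias("order_date"))
         .agg(F.sum("amount").alias("revenue")))

# Write the result back through the metastore for downstream consumers.
daily.write.mode("overwrite").saveAsTable("sales_db.daily_revenue")
```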
Posted 6 days ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
In your role, you will be responsible for:
Working with multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer, etc.
Python and SQL work experience is a must; you should be proactive, collaborative, and able to respond to critical situations.
Analysing data for functional business requirements and interfacing directly with the customer.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform (see the sketch below).
Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer.
An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
End-to-end functional knowledge of the data pipeline/transformation implementations you have delivered, including the purpose/KPIs the data transformation served.
Preferred technical and professional experience:
Experience with AEM core technologies: OSGi services, Apache Sling, the Granite framework, the Java Content Repository API, Java 8+, and localization.
Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git.
Knowledge of patterns and good practices for designing and developing quality, clean code.
Knowledge of HTML, CSS, JavaScript, and jQuery.
Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
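For a role centered on BigQuery, a small example of running a parameterized query through the official Python client may be useful context. This is a hedged sketch: the project, dataset, table, and columns below are placeholders.

```python
# Minimal sketch with the official google-cloud-bigquery client
# (pip install google-cloud-bigquery); names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

query = """
    SELECT device_type, COUNT(*) AS sessions
    FROM `my-gcp-project.analytics.events`
    WHERE event_date = @day
    GROUP BY device_type
"""
# Query parameters avoid string concatenation and SQL injection.
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("day", "DATE", "2024-06-01")]
)
for row in client.query(query, job_config=job_config).result():
    print(row.device_type, row.sessions)
```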
Posted 6 days ago
3.0 - 7.0 years
11 - 15 Lacs
Mumbai
Work from Office
A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently.
Grade Specific:
An expert on the principles and practices associated with data platform engineering, particularly within cloud environments, who demonstrates proficiency in specific technical areas related to cloud-based data infrastructure, automation, and scalability. Key responsibilities encompass:
Team Leadership and Management: Supervising a team of platform engineers, with a focus on team dynamics and the efficient delivery of cloud platform solutions.
Technical Guidance and Decision-Making: Providing technical leadership and making pivotal decisions concerning platform architecture, tools, and processes; balancing hands-on involvement with strategic oversight.
Mentorship and Skill Development: Guiding team members through mentorship, enhancing their technical proficiencies, and nurturing a culture of continual learning and innovation in platform engineering practices.
In-Depth Technical Proficiency: Possessing a comprehensive understanding of platform engineering principles and practices, and demonstrating expertise in crucial technical areas such as cloud services, automation, and system architecture.
Community Contribution: Making significant contributions to the development of the platform engineering community, staying informed about emerging trends, and applying this knowledge to drive enhancements in capability.
Posted 6 days ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
Experience with Scala (object-oriented and functional programming).
Strong SQL background.
Experience in Spark SQL, Hive, and data engineering.
SQL experience with data pipelines and Data Lakes.
Strong background in distributed computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
SQL experience with data pipelines and Data Lakes.
Strong background in distributed computing.
Experience with Scala (object-oriented and functional programming).
Strong SQL background.
Preferred technical and professional experience:
Core Scala development experience.
Posted 6 days ago
5.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Work with the broader team to build, analyze, and improve AI solutions. You will also work with our software developers on consuming different enterprise applications.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
5-7 years of experience.
Sound knowledge of Python and how to use ML-related services.
Proficiency in Python with a focus on data analytics packages (see the sketch below).
Strategy: Analyse large, complex data sets and provide actionable insights to inform business decisions.
Strategy: Design and implement data models that help in identifying patterns and trends.
Collaboration: Work with data engineers to optimize and maintain data pipelines.
Perform quantitative analyses that translate data into actionable insights and support analytical, data-driven decision-making.
Identify and recommend process improvements to enhance the efficiency of the data platform.
Develop and maintain data models, algorithms, and statistical models.
Preferred technical and professional experience:
Experience with conversation analytics.
Experience with cloud technologies.
Experience with data exploration tools such as Tableau.
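As a rough illustration of "Python with a focus on data analytics packages", here is a minimal pandas sketch of the kind of exploratory cut described above. The CSV file, columns, and conversation-analytics framing are assumptions made for the example.

```python
# Minimal sketch of exploratory analysis with pandas;
# the file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("conversations.csv", parse_dates=["started_at"])

# Trend: weekly conversation volume and resolution rate.
weekly = (df.assign(week=df["started_at"].dt.to_period("W"))
            .groupby("week")
            .agg(conversations=("conversation_id", "count"),
                 resolution_rate=("resolved", "mean")))
print(weekly.tail())

# A simple actionable cut: which intents drive the most unresolved volume?
unresolved = (df[~df["resolved"]]
              .groupby("intent")["conversation_id"].count()
              .sort_values(ascending=False)
              .head(5))
print(unresolved)
```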
Posted 6 days ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer, etc.
Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations.
Ability to analyse data for functional business requirements and interface directly with the customer.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform.
Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Logs Explorer.
You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies.
An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work.
Preferred technical and professional experience:
A positive individual with an ability to manage change and proven time management.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Up-to-date technical knowledge through attending educational workshops and reviewing publications.
Posted 6 days ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure cloud data platform.
Responsibilities:
Build data pipelines to ingest, process, and transform data from files, streams, and databases.
Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure cloud data platform or HDFS.
Develop efficient software code for multiple use cases, leveraging the Spark framework with Python or Scala and Big Data technologies, for various use cases built on the platform.
Develop streaming pipelines (a sketch follows this posting).
Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on Azure.
Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers such as Kafka.
Preferred technical and professional experience:
Certification in Azure and Databricks, or Cloudera Spark Certified Developer.
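The streaming-pipelines item above can be made concrete with a short sketch. This assumes Spark Structured Streaming reading JSON events from Kafka; the brokers, topic, schema, and paths are placeholders, and the Kafka connector package must be on the Spark classpath.

```python
# Minimal sketch of a streaming ingestion pipeline with Spark Structured
# Streaming and Kafka; requires the spark-sql-kafka connector package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Assumed shape of the JSON payload carried in the Kafka message value.
schema = StructType([
    StructField("sensor_id", StringType()),
    StructField("reading", DoubleType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "sensor-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Checkpointing gives the stream exactly-once sink semantics on restart.
query = (events.writeStream
         .format("delta")                      # or "parquet" on plain HDFS/ADLS
         .option("checkpointLocation", "/chk/sensor-events")
         .start("/lake/bronze/sensor_events"))
query.awaitTermination()
```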
Posted 6 days ago
5.0 - 7.0 years
5 - 9 Lacs
Kochi
Work from Office
Job Title: Senior Data Engineer
Location: Kochi, Coimbatore, Trivandrum
Must have skills: Databricks, including Spark-based ETL and Delta Lake
Good to have skills: PySpark
Job Summary:
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset.
Roles and Responsibilities:
Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools (see the sketch after this posting).
Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse architecture).
Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases.
Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery.
Optimize data pipelines for performance and cost efficiency.
Implement and enforce best practices for data governance, access control, security, and compliance in the cloud.
Monitor and troubleshoot data pipelines to ensure reliability and accuracy.
Lead and mentor junior engineers, fostering a culture of continuous learning and innovation.
Excellent communication skills; able to work independently and alongside clients based in Western Europe.
Professional and Technical Skills:
3.5-5 years of experience in data engineering roles with a focus on cloud platforms.
Proficiency in Databricks, including Spark-based ETL, Delta Lake, and SQL.
Strong experience with one or more cloud platforms (AWS preferred).
Hands-on experience with Delta Lake, Unity Catalog, and Lakehouse architecture concepts.
Strong programming skills in Python and SQL; experience with PySpark a plus.
Solid understanding of data modeling concepts and practices (e.g., star schema, dimensional modeling).
Knowledge of CI/CD practices and version control systems (e.g., Git).
Familiarity with data governance and security practices, including GDPR and CCPA compliance.
Additional Information:
Experience with Airflow or similar workflow orchestration tools.
Exposure to machine learning workflows and MLOps.
Certification in Databricks or AWS.
Familiarity with data visualization tools such as Power BI.
Qualification:
Experience: 3.5-5 years of experience is required.
Educational Qualification: Graduation.
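Since this posting emphasizes Databricks with Spark-based ETL and Delta Lake, here is a minimal sketch of an incremental upsert (MERGE) into a Delta table using the delta-spark API; the source path, table name, and key column are illustrative assumptions.

```python
# Minimal sketch of an incremental upsert into a Delta table on Databricks;
# table, path, and column names are illustrative.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

updates = spark.read.parquet("/mnt/landing/customers/")   # hypothetical batch of changes

target = DeltaTable.forName(spark, "curated.customers")
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()       # update existing customers in place
 .whenNotMatchedInsertAll()    # insert customers seen for the first time
 .execute())
```

MERGE is the usual alternative to full-table rewrites in a Lakehouse: only changed rows are touched, and Delta's transaction log keeps readers consistent while the upsert runs.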
Posted 6 days ago
15.0 - 20.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Microsoft Fabric
Good to have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education
Summary:
As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning to align application development with organizational goals, ensuring that the solutions provided are effective and efficient.
Roles & Responsibilities:
Expected to be an SME.
Collaborate with and manage the team to perform.
Responsible for team decisions.
Engage with multiple teams and contribute to key decisions.
Provide solutions to problems for the immediate team and across multiple teams.
Mentor junior team members to enhance their skills and knowledge.
Facilitate regular team meetings to discuss progress and address any roadblocks.
Professional & Technical Skills:
Lead and manage a team of data engineers, providing guidance, mentorship, and support; foster a collaborative and innovative team culture.
Work closely with stakeholders to understand data requirements and business objectives; translate business requirements into technical specifications for the data warehouse.
Lead the design of data models, ensuring they meet business needs and adhere to best practices; collaborate with the Technical Architect to design dimensional models for optimal performance.
Design and implement data pipelines for ingestion, transformation, and loading (ETL/ELT) using Fabric Data Factory pipelines and Dataflows Gen2.
Develop scalable and reliable solutions for batch data integration across various structured and unstructured data sources.
Oversee the development of data pipelines for smooth data flow into the Fabric Data Warehouse.
Implement and maintain data solutions in Fabric Lakehouse and Fabric Warehouse.
Monitor and optimize pipeline performance, ensuring minimal latency and resource efficiency; tune data processing workloads for large datasets in Fabric Warehouse and Lakehouse.
Exposure to ADF and Databricks.
Additional Information:
The candidate should have a minimum of 5 years of experience in Microsoft Fabric.
This position is based in Hyderabad.
15 years of full-time education is required.
Posted 6 days ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
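As a small, hedged illustration of what day-to-day Hive work looks like, the sketch below queries a partitioned Hive table from Python via PyHive (`pip install pyhive`); the server, database, table, and columns are placeholders.

```python
# Minimal sketch of querying HiveServer2 from Python with PyHive;
# host, database, and table names are placeholders.
from pyhive import hive

conn = hive.Connection(host="hive-server.example.com", port=10000, username="analyst")
cur = conn.cursor()

# A typical warehouse pattern: filtering on the partition column (dt here)
# lets Hive prune partitions instead of scanning the whole dataset.
cur.execute("""
    SELECT product_category, SUM(amount) AS revenue
    FROM sales.transactions
    WHERE dt = '2024-06-01'
    GROUP BY product_category
""")
for category, revenue in cur.fetchall():
    print(category, revenue)
```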
Major Indian tech hubs such as Bengaluru, Hyderabad, Pune, Chennai, and Mumbai are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!