3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Summary

AI & Data

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
• Implement large-scale data ecosystems, including data management, governance and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
• Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
• Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer

Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to cloud, they are looking for trusted partners who can help them navigate it. Our clients' journeys span cloud strategy, implementation, migration of legacy applications, support of cloud ecosystem operations, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed. You will work with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming and automation tools for some of our Fortune 1000 clients. You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in the hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do

As a GCP Data Engineer you will have multiple responsibilities depending on project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic Cloud customers. Together with the team you will support customer implementation of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring and much more.

The key responsibilities may involve some or all of the areas listed below (a minimal pipeline sketch follows the list):
• Act as a trusted technical advisor to customers and solve complex Big Data challenges.
• Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
• Identify new tools and processes to improve the cloud platform and automate processes.
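As an illustration of the kind of Dataflow pipeline work described above, here is a minimal Apache Beam sketch that streams Pub/Sub messages into BigQuery. The project, subscription, bucket, and table names are placeholders, and a production pipeline would add schema management, dead-lettering, and windowing:

```python
# Minimal sketch: stream JSON events from Pub/Sub into BigQuery via Dataflow.
# All names (my-project, events-sub, analytics.events) are placeholders.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run():
    options = PipelineOptions(
        streaming=True,
        project="my-project",
        runner="DataflowRunner",   # use "DirectRunner" to test locally
        region="us-central1",
        temp_location="gs://my-bucket/tmp",
    )
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                subscription="projects/my-project/subscriptions/events-sub")
            | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my-project:analytics.events",
                schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```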
Qualifications

Technical Requirements
• BA/BS degree in Computer Science, Mathematics or a related technical field, or equivalent practical experience.
• Experience in Cloud SQL and Cloud Bigtable.
• Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub and Genomics.
• Experience in Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer.
• Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume).
• Experience working with technical customers.
• Experience writing software in one or more languages such as Java, C++, Python, Go and/or JavaScript.

Consulting Requirements
• 3-6 years of relevant consulting, industry or technology experience.
• Strong problem-solving and troubleshooting skills.
• Strong communicator.
• Willingness to travel as project requirements demand.

Preferred Qualifications
• Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments.
• Experience in technical consulting.
• Experience architecting and developing software or internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have).
• Experience working with big data, information retrieval, data mining or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NLP, MongoDB, SparkML, TensorFlow).
• Working knowledge of ITIL and/or agile methodologies.

Our purpose

Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development

At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits To Help You Thrive

At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose.
To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips

From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300075
Posted 1 week ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
This Mumbai listing is the same Google Cloud Platform - Data Engineer role described in full in the Hyderabad posting above; the position summary, responsibilities, and qualifications are identical. Requisition code: 300075
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Where Data Does More. Join the Snowflake team.

Snowflake's Support team is expanding! We are looking for a Senior Cloud Support Engineer who likes working with data and solving a wide variety of issues, drawing on technical experience across operating systems, database technologies, big data, data integration, connectors, and networking. Snowflake Support is committed to providing high-quality resolutions to help deliver data-driven business insights and results. We are a team of subject matter experts collectively working toward our customers' success. We form partnerships with customers by listening, learning, and building connections. Snowflake's values are key to our approach and success in delivering world-class Support. Putting customers first, acting with integrity, owning initiative and accountability, and getting it done are Snowflake's core values, which are reflected in everything we do.

As a Senior Cloud Support Engineer, your role is to delight our customers with your passion and knowledge of Snowflake Data Warehouse. Customers will look to you for technical guidance and expert advice with regard to their effective and optimal use of Snowflake. You will be the voice of the customer regarding product feedback and improvements for Snowflake's product and engineering teams. You will play an integral role in building knowledge within the team and be part of strategic initiatives for organizational and process improvements. Based on business needs, you may be assigned to work with one or more Snowflake Priority Support customers. You will develop a strong understanding of the customer's use case and how they leverage the Snowflake platform. You will deliver exceptional service, enabling them to achieve the highest levels of continuity and performance from their Snowflake implementation. Ideally, you have worked in a 24x7 environment, handled technical case escalations and incident management, worked in technical support for an RDBMS, been on-call during weekends, and are familiar with database release management.

AS A SENIOR CLOUD SUPPORT ENGINEER AT SNOWFLAKE, YOU WILL:
• Drive technical solutions to complex problems, providing in-depth analysis and guidance to Snowflake customers and partners via email, web, and phone
• Adhere to response and resolution SLAs and escalation processes to ensure fast resolution of customer issues, exceeding expectations
• Demonstrate good problem-solving skills and be process-oriented
• Utilize the Snowflake environment, connectors, third-party partner software, and tools to investigate issues
• Document known solutions to the internal and external knowledge base
• Report well-documented bugs and feature requests arising from customer-submitted requests
• Partner with engineering teams in prioritizing and resolving customer requests
• Participate in a variety of Support initiatives
• Provide support coverage during holidays and weekends based on business needs

OUR IDEAL SENIOR CLOUD SUPPORT ENGINEER WILL HAVE THE FOLLOWING:
• Bachelor's or Master's degree in Computer Science or an equivalent discipline.
• 5+ years of experience in a Technical Support environment or a similar technical function in a customer-facing role.
• Excellent written and verbal communication skills in English, with attention to detail.
• Ability to reproduce and troubleshoot complex technical issues.
• In-depth knowledge of one of the major cloud service providers' ecosystems.
• Knowledge of ETL/ELT tools such as AWS Glue, EMR, Azure Data Factory, and Informatica.
• Expert working knowledge of internet protocols such as TCP/IP, HTTP/S, SFTP, and DNS, and the ability to use diagnostic tools to troubleshoot connectivity issues.
• In-depth understanding of the SSL/TLS handshake and of troubleshooting SSL negotiation.
• Advanced knowledge of driver configuration and troubleshooting for ODBC, JDBC, Go, and .NET.
• High proficiency with system troubleshooting on a variety of operating systems (Windows, Mac, *nix), including many of the following tools: tcpdump, lsof, Wireshark, netstat, sar, perfmon, and Process Explorer.
• Debugging experience in Python, Java, or Scala.
• Experience with software development principles, including object-oriented programming and version control systems (e.g., Git, GitHub, GitLab).
• Familiarity with Kafka and Spark technologies.

NICE TO HAVE:
• Understanding of the data loading/unloading process in Snowflake.
• Understanding of Snowflake streams and tasks.
• Expertise in database migration processes.
• SQL skills, including JOINs, Common Table Expressions (CTEs), and Window Functions (a minimal sketch follows this list).
• Experience supporting applications hosted on Amazon AWS or Microsoft Azure.
• Familiarity with containerization technologies like Docker and Kubernetes.
• Working experience with data visualization tools such as Tableau, Power BI, matplotlib, seaborn, and Plotly.
• Experience developing CI/CD components for production-ready data pipelines.
• Experience working with big data and/or MPP (massively parallel processing) databases.
• Experience with data warehousing fundamentals and concepts.
• Database migration and ETL experience.
• Familiarity with data manipulation and analysis libraries such as pandas, NumPy, and SciPy.
• Knowledge of authentication and authorization protocols (OAuth, JWT, etc.).
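As a small illustration of the SQL skills called out above (CTEs and window functions), combined with the connector knowledge the role touches, here is a minimal sketch using Snowflake's Python connector. The account, credentials, and orders table are hypothetical:

```python
# Sketch: run a CTE + window-function query through the Snowflake Python
# connector (pip install snowflake-connector-python). All identifiers below
# (account, warehouse, database, orders table) are hypothetical.
import snowflake.connector

QUERY = """
WITH daily AS (                          -- CTE: aggregate orders per day
    SELECT order_date, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
)
SELECT order_date,
       revenue,
       SUM(revenue) OVER (ORDER BY order_date) AS running_total  -- window fn
FROM daily
ORDER BY order_date;
"""

conn = snowflake.connector.connect(
    account="my_account",      # e.g. xy12345.ap-south-1
    user="support_engineer",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES",
    schema="PUBLIC",
)
try:
    for row in conn.cursor().execute(QUERY):
        print(row)
finally:
    conn.close()
```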
SPECIAL REQUIREMENTS:
• Participate in pager-duty rotations during nights, weekends, and holidays.
• Ability to work the 4th/night shift, which typically starts at 10 pm IST.
• Applicants should be flexible with schedule changes to meet business needs.

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact?

For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
• Design, implement and maintain Java applications within all phases of the Software Development Life Cycle (SDLC)
• Develop, test, implement and maintain application software, working with established processes
• Communicate effectively with other engineers and QA
• Establish, refine and integrate development and test environment tools and software as needed
• Identify production and non-production application issues
• Identify opportunities to fine-tune and optimize applications of Java-developed projects
• Provide technical support and consultation for Java application and infrastructure questions
• Serve as a mentor to less experienced developers
• Envision the overall solution for defined functional and non-functional requirements, and define the technologies, patterns and frameworks to materialize it
• Design and develop the framework of the system and be able to explain the choices made; write and review design documents explaining the overall architecture, framework and high-level design of the application
• Create, understand and validate the design and estimated effort for a given module or task, and be able to justify them
• Define what is in scope and out of scope, and state the assumptions taken, while creating effort estimates
• Identify and integrate all integration points in the context of a project as well as other applications in the environment

Coding
Positions in this function deliver professional-level technical work in support of the development of company products, tools, platforms and services, typically for an external customer or end user. They operate within established methodologies, procedures, and guidelines; apply knowledge of principles and techniques to solve technical problems; and work closely with other functions to understand customer needs and to develop product roadmaps.
• Define guidelines and benchmarks for non-functional requirement (NFR) considerations during project implementation
• Carry out required proofs of concept (POCs) to make sure that the suggested design or technologies meet the requirements
• Work is generally self-directed rather than prescribed
• Work with less structured, more complex issues
• Serve as a resource to others
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
• Graduate or postgraduate degree in Computer Science, Engineering, Science, Mathematics or a related field, with around 3+ years of experience executing Java projects
• Cloud certification, preferably Azure
• Senior Java technical position with about 3+ years of hands-on technical experience in Java-related technologies
• Working knowledge of executing projects using Agile methodologies
• Technical skills: Java, J2EE, Spring Boot, PostgreSQL
• Hands-on experience building, maintaining, optimizing or modernizing applications on (or migrating them to) public cloud, preferably Azure, showcasing event-driven, elastically scalable, fault-tolerant and other cloud-native architecture patterns
• Solid experience in Core Java, Spring, Spring Boot, and Hibernate or Spring Data JPA
• Experience in SOA-based architecture and Web Services (SOAP or REST)
• Experience in Kafka or Pulsar
• Experience in continuous integration (e.g., Jenkins, Sonar)
• Hands-on experience with PostgreSQL and Oracle
• Experience using profiler tools (JProfiler or JMeter)
• Good understanding of UML and design patterns
• Good understanding of performance tuning

Preferred Qualifications
• Technical experience: Angular
• Health care industry experience

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
This Gurugram listing is the same Google Cloud Platform - Data Engineer role described in full in the Hyderabad posting above; the position summary, responsibilities, and qualifications are identical. Requisition code: 300075
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are seeking a talented and motivated Data Engineer to join our growing data team. You will play a key role in building scalable data pipelines, optimizing data infrastructure, and enabling data-driven solutions.

Primary Responsibilities
• Design, develop, and maintain scalable ETL/ELT pipelines for batch and real-time data processing (a minimal sketch follows this list)
• Build and optimize data models and data warehouses to support analytics and reporting
• Collaborate with analysts and software engineers to deliver high-quality data solutions
• Ensure data quality, integrity, and security across all systems
• Monitor and troubleshoot data pipelines and infrastructure for performance and reliability
• Contribute to internal tools and frameworks to improve data engineering workflows
• Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
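As an illustration of the batch side of such ETL/ELT pipelines, here is a minimal PySpark sketch. The storage paths and column names are hypothetical, and a production job would add data-quality checks, monitoring, and incremental loads:

```python
# Minimal batch ETL sketch in PySpark: read raw CSV, clean and aggregate,
# write partitioned Parquet. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-daily-etl").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("abfss://raw@mylake.dfs.core.windows.net/claims/2024-06-01/")
)

daily = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())          # basic data-quality gate
       .groupBy("member_id", "service_date")
       .agg(F.sum("amount").alias("total_amount"))
)

(
    daily.write.mode("overwrite")
    .partitionBy("service_date")
    .parquet("abfss://curated@mylake.dfs.core.windows.net/claims_daily/")
)
spark.stop()
```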
Required Qualifications
• 5+ years of experience as a Data Engineer working on commercially available software and/or healthcare platforms
• 3+ years of solid experience designing and building enterprise data solutions on cloud
• 1+ years of experience developing solutions hosted on public cloud providers such as Azure or AWS, or on private cloud/container-based systems using Kubernetes/OpenShift
• Experience with modern relational databases
• Experience with data warehousing services, preferably Snowflake
• Experience using modern software engineering and product development tools, including Agile/SAFe, continuous integration, continuous delivery, DevOps, etc.
• Solid experience operating in a quickly changing environment and driving technological innovation to meet business requirements
• Skilled at optimizing SQL statements
• Subject matter expertise in cloud technologies, preferably Azure, and the Big Data ecosystem

Preferred Qualifications
• Experience with real-time data streaming and event-driven architectures
• Experience building Big Data solutions on public cloud (Azure)
• Experience building data pipelines on Azure with Databricks (Spark, Scala), Azure Data Factory, Kafka and Kafka Streams, App Services, and Azure Functions
• Experience developing RESTful services in .NET, Java or any other language
• Experience with DevOps in data engineering
• Experience with microservices architecture
• Exposure to DevOps practices and infrastructure-as-code (e.g., Terraform, Docker)
• Knowledge of data governance and data lineage tools
• Ability to establish repeatable processes and best practices, and to implement version control software in a cloud team environment

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 1 week ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
This Pune listing is the same Google Cloud Platform - Data Engineer role described in full in the Hyderabad posting above; the position summary, responsibilities, and qualifications are identical. Requisition code: 300075
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Full Stack Engineer
Location: Bengaluru

L&T Technology Services is seeking a Full Stack Engineer (experience range: 5+ years) proficient in:
• Strong hands-on experience with Spring Boot and microservices architecture for scalable application development.
• Apache Kafka for real-time data streaming and event-driven systems.
• Solid working knowledge of AWS services for deploying and managing cloud-native applications.
• Experience with at least one modern JavaScript framework (React.js or Angular), or Node.js, for building responsive UIs or APIs.
• Ability to work in an agile environment, contribute to system design, and collaborate across DevOps, QA, and frontend/backend teams.

Required Skills: Spring Boot, microservices, Kafka, AWS, React.js or Angular or Node.js
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Scope

We are a leading SaaS and AI-driven global supply chain solutions software product company and one of Glassdoor's “Best Places to Work”. We are the only company recognized as a Leader in three 2021 Gartner Magic Quadrant reports covering supply chain planning solutions, transportation management systems, and warehouse management systems.

Our Current Technical Environment
• Software: Unix, any scripting language, WMS application (any), PL/SQL, API, MOCA
• Future software: Kafka, Stratosphere, Microservices, Java
• Application Architecture: Native SaaS, Cognitive
• Cloud Architecture: Private cloud, MS Azure (ARM templates, AKS, HDInsight, Application Gateway, Virtual Networks, Event Hubs, Azure AD)

What Will You Do
• Support Engagements: Work with global technical and functional teams to support various customer engagements.
• Customer Interaction: Understand customer requests, support designed products/solutions to meet business requirements, and ensure high customer satisfaction.
• Issue Resolution: Address and resolve technical issues while adhering to SLAs; document learnings and create knowledge articles.
• Environment Management: Replicate and maintain customer environments and knowledge of customer solution architecture and integration points.
• Customer Satisfaction: Provide quality and timely solutions to improve customer satisfaction, and follow up until closure.
• Stakeholder Interaction: Interact with internal and external stakeholders and report to management.
• Process Improvement: Identify areas for improvement and automation in routine tasks.
• Continuous Learning: Stay updated on new technologies and products, demonstrate quick learning ability, and maintain good interpersonal and communication skills.
• Architecture Simplification: Drive simpler, more robust, and efficient architecture and designs.
• Product Representation: Confidently represent the product and portfolio, including vision and technical roadmaps, within the company and to strategic customers when necessary.

Detailed Responsibilities
• Customer Issue Resolution: Understand customer-raised issues, especially in Cloud/SaaS environments, and take appropriate actions to resolve them.
• Code Review: Review product source code or design documents as necessary.
• Case Management: Own and resolve all cases for global customers, adhering to defined SLAs.
• Knowledge Sharing: Document learnings and create knowledge articles for repeated cases.
• Environment Replication: Replicate and maintain customer environments.
• Solution Knowledge: Maintain knowledge of customer solutions and customizations.
• Urgency in Interaction: Demonstrate a sense of urgency and swiftness in all customer interactions.
• Techno-Functional Point of Contact: Act as the techno-functional POC for all cases, ensuring timely triage and assignment.
• Global Collaboration: Use instant messenger and other tools to collaborate globally.
• Shift Work: Work in rotational shifts and be flexible with timings.
• Goal Achievement: Meet organizational and team-level goals.
• Customer Satisfaction: Improve customer satisfaction by providing quality and timely solutions and following up until case closure.
• Process Automation: Identify areas for improvement and scope for automation in routine tasks or activities.
• Team Player: Help meet team-level goals and be a team player.

What We Are Looking For
• Educational Background: Bachelor's degree (STEM preferred) with a minimum of 8 to 11 years of experience.
• Team Experience: Experience working as part of a team.
• Skills: Good communication and strong analytical skills.
• Technical Proficiency: Experience writing complex SQL queries on Oracle DB.
• Domain Knowledge: Fair understanding of the supply chain domain.
• Support Engineering Experience: Experience in support engineering roles.
• Techno-Functional Expertise: Strong techno-functional expertise.
• Tech Savviness: Ability to adapt to any technology quickly.
• Critical Issue Support: Provide technical and solution support during critical/major issues.
• Tool Experience: Experience with varied tools such as AppDynamics, Splunk, and ServiceNow.
• Shift Flexibility: Flexible to work in shift timings:
  Shift 1: 6 am to 3 pm
  Shift 2: 2 pm to 11 pm
  Shift 3: 10 pm to 7 am

Our Values

If you want to know the heart of a company, take a look at their values. Ours unite us. They are what drive our success – and the success of our customers. Does your heart beat like ours? Find out here: Core Values

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
Posted 1 week ago
11.0 years
0 Lacs
India
On-site
Company Description

👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums — and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description

REQUIREMENTS:
• Total experience of 11+ years.
• Strong working experience with architecture and development in Java 8 or higher.
• Experience with front-end frameworks such as React (with Redux) or Vue.
• Familiarity with Node.js and modern backend stacks.
• Deep knowledge of AWS, Azure, or GCP platforms and services.
• Strong experience with Azure DevOps, Git, Jenkins, and CI/CD pipelines.
• Deep understanding of design patterns, data structures, and microservices architecture.
• Strong knowledge of object-oriented programming, data structures, and algorithms.
• Experience with scalable system design, performance tuning, and application security.
• Familiarity with data integration patterns, middleware, and message brokers (e.g., Kafka, RabbitMQ).
• A good understanding of UML and design patterns.
• Strong experience with IBM Integration Composer and IBM ODM.
• Hands-on experience with container orchestration using Kubernetes or OpenShift.
• Working knowledge of security protocols like OAuth 2.0 and SAML (a minimal OAuth sketch follows this posting).
• Excellent communication skills and the ability to collaborate effectively with cross-functional teams.

RESPONSIBILITIES:
• Writing and reviewing great-quality code.
• Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets the requirements.
• Mapping decisions with requirements and translating them for developers.
• Identifying different solutions and narrowing down the best option that meets the client's requirements.
• Defining guidelines and benchmarks for NFR considerations during project implementation.
• Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
• Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed.
• Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
• Understanding and relating technology integration scenarios and applying these learnings in projects.
• Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken.
• Carrying out POCs to make sure that the suggested design/technologies meet the requirements.

Qualifications

Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
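As a small illustration of the OAuth 2.0 working knowledge listed in the requirements, here is a minimal client-credentials token request in Python; the endpoint, client credentials, and scope are placeholders:

```python
# Sketch: OAuth 2.0 client-credentials grant (RFC 6749, section 4.4).
# The token URL, credentials, and scope below are placeholders.
import requests

resp = requests.post(
    "https://auth.example.com/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": "my-service",
        "client_secret": "***",
        "scope": "orders:read",
    },
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["access_token"]

# Use the bearer token on subsequent API calls.
api = requests.get(
    "https://api.example.com/v1/orders",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(api.status_code)
```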
Posted 1 week ago
2.0 years
0 Lacs
India
On-site
We're building the next-generation communications analytics and automation platform—one that fuses deep telemetry, enterprise-scale voice/calling data, and AI-driven remediation. As a Senior Backend Engineer, you'll play a core role in designing the resilient, scalable backend of a high-visibility platform that already drives action across global Microsoft Teams deployments.

This isn't a maintenance gig. This is architecture, orchestration, and ownership. You'll help design microservices, implement scalable APIs, and ensure data flows seamlessly from complex real-time systems (like call quality diagnostics and device telemetry) into actionable intelligence and automation pipelines. If you're excited by backend systems with real-world impact, and want to transition into intelligent agentic systems powered by GenAI, this role is built for you.

What You'll Work On

Platform Engineering (Core Backend)
• Design and implement robust, cloud-native services using modern backend stacks (Node.js, Python, .NET Core, or similar).
• Build scalable APIs to surface data and actions across TeamsCoreIQ modules (call analytics, device insights, policy management, AI-based RCA).
• Integrate with Microsoft Graph APIs and Teams Calling infrastructure (Auto Attendants, Call Queues, Call Quality, Presence, Policies).
• Develop event-driven workflows using queues (Service Bus, Kafka, RabbitMQ) for high-throughput ingestion and action pipelines (a minimal consumer sketch follows this section).
• Work with real-time data stores, telemetry ingestion, and time-series analytics backends (PostgreSQL, MongoDB, InfluxDB, or equivalent).

Infrastructure & DevOps Support
• Help scale and secure workloads using Azure, Kubernetes, and CI/CD pipelines (GitHub Actions, Azure DevOps).
• Implement observability practices (logging, metrics, alerting) for zero-downtime insights and RCA.

Future-Forward (Agentic Track)
Support the evolution of the backend toward intelligent agent orchestration:
• Build services that allow modular "agents" to retrieve, infer, and act (e.g., provisioning, remediation, escalation).
• Explore interfaces for integrating OpenAI, Azure AI, or RAG pipelines to make automation contextual and proactive.
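As one possible shape for the event-driven ingestion described above, here is a minimal Azure Service Bus consumer sketch in Python; the connection string and queue name are placeholders, and the same pattern applies to Kafka or RabbitMQ:

```python
# Sketch: consume telemetry events from an Azure Service Bus queue
# (pip install azure-servicebus). The connection string and queue name are
# placeholders; a production worker would add retries, dead-lettering,
# and metrics.
import json
from azure.servicebus import ServiceBusClient

CONN_STR = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=..."
QUEUE = "call-quality-events"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE) as receiver:
        for msg in receiver:                  # blocks, yielding messages
            event = json.loads(str(msg))      # message body as JSON
            # ...route to the analytics / remediation pipeline here...
            receiver.complete_message(msg)    # ack so it is not redelivered
```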
• Deliver API endpoints for at least one major module (e.g., RCA, Call Analytics, DeviceIQ).
• Draft and refine at least one reusable internal service that improves time-to-market for future agents.
• Collaborate with frontend, DevOps, and AI teams to support rapid iteration and experimentation.
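For illustration, a minimal sketch of the kind of event-driven telemetry consumer this listing describes, using the kafka-python client; the topic name, event fields, and remediation rule are hypothetical, not the platform's actual API:

```python
# Minimal sketch: consume telemetry events from a queue and route poor-quality
# calls into a remediation pipeline. Topic and field names are hypothetical.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "call-quality-telemetry",                      # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="rca-workers",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    enable_auto_commit=False,                      # commit only after handling
)

for message in consumer:
    event = message.value
    # Hypothetical routing rule: low MOS scores trigger remediation.
    if event.get("mos_score", 5.0) < 3.5:
        print(f"remediate device {event.get('device_id')}")
    consumer.commit()  # at-least-once: commit offsets after the event is handled
```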
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Head of Architecture and Technology (Hands-On, High-Ownership)
Company: Elysium PTE. LTD.
Location: Chennai, Tamil Nadu — at office
Employment Type: Full-time, permanent
Compensation: ₹15 L fixed CTC + up to 5% ESOP (performance-linked vesting, 4-year schedule with 1-year cliff)
Reports to: Founding Team
________________________________________
About Elysium
Elysium is a founder-led studio headquartered in Singapore with its delivery hub in Chennai. We are currently building a global gaming-based mar-tech platform while running a premium digital-services practice (branding, immersive web, SaaS MVPs, AI-powered solutions). We thrive on speed, experimentation and shared ownership.
________________________________________
The opportunity
We’re looking for a hungry technologist who can work in an early-stage start-up alongside the founders to build ambitious global products and services. You’ll code hands-on every week, shape product architecture, and grow a lean engineering pod—owning both our flagship product and client deliveries.
________________________________________
What you will achieve in your first 12 months
• Coordinate and develop the in-house products with internal and external teams.
• Build and mentor a six-to-eight-person engineering/design squad that hits ≥85% on-time delivery for IT-service clients.
• Cut mean time-to-deployment to under 30 minutes through automated CI/CD and Infrastructure-as-Code.
• Implement GDPR-ready data flows and a zero-trust security baseline across all projects.
• Publish quarterly tech radars and internal playbooks that keep the team learning and shipping fast.
________________________________________
Day-to-day responsibilities
• Resource management and planning across internal and external teams for our products and client deliveries.
• Pair-program and review pull requests to enforce clean, testable code.
• Translate product/user stories into domain models, sprint plans and staffing forecasts.
• Design cloud architecture (AWS/GCP) that balances cost and scale; own IaC, monitoring and on-call until an SRE is hired.
• Evaluate and manage specialist vendors for parts of the flagship app; hold them accountable on quality and deadlines.
• Scope and pitch technical solutions in client calls; draft SoWs and high-level estimates with founders.
• Coach developers and designers, set engineering KPIs, run retrospectives and post-mortems.
• Prepare technical artefacts for future fundraising and participate in VC diligence.
________________________________________
Must-have Requirements
• 5–8 years of modern full-stack development, with at least one product shipped to >10k MAU or comparable B2B scale.
• Expert knowledge of modern full-stack ecosystems: Node.js, Python or Go; React/Next.js; distributed data stores (PostgreSQL, DynamoDB, Redis, Kafka or similar).
• Deep familiarity with AWS, GCP or Azure, including cost-optimized design, autoscaling, serverless patterns, container orchestration and IaC tools such as Terraform or CDK (a short CDK sketch follows this listing).
• Demonstrated ownership of DevSecOps practices: CI/CD, automated testing matrices, vulnerability scanning, SRE dashboards and incident post-mortems.
• Excellent communication skills, able to explain complex trade-offs to founders, designers, marketers and non-technical investors.
• Hunger to learn, ship fast, and own meaningful equity in lieu of a senior-corporate paycheck.
________________________________________
Nice-to-have extras
• Prior work in fintech, ad-tech or loyalty.
• Experience with WebGL/Three.js, real-time event streaming (Kafka, Kinesis), LLM pipelines & Blockchain. • Exposure to seed- or Series-A fundraising, investor tech diligence or small-team leadership. ________________________________________ What we offer • ESOP of up to 5 % on a 4-year vest (1-year cliff) with performance accelerators tied to product milestones. • Direct influence on tech stack, culture and product direction—your code and decisions will shape the company’s valuation. • A team that values curiosity, transparency and shipping beautiful work at start-up speed. ________________________________________
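As a rough sketch of the IaC style the requirements name, here is a minimal AWS CDK v2 stack in Python; the stack and resource names are hypothetical, not Elysium's actual infrastructure:

```python
# Minimal IaC sketch with AWS CDK v2: a versioned bucket plus a small
# serverless function — the kind of cost-conscious building block described
# above. Stack and resource names are hypothetical.
from aws_cdk import App, Stack, aws_lambda as _lambda, aws_s3 as s3
from constructs import Construct

class FlagshipStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        s3.Bucket(self, "AssetsBucket", versioned=True)
        _lambda.Function(
            self, "ApiHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_inline("def handler(event, ctx): return 'ok'"),
        )

app = App()
FlagshipStack(app, "flagship-dev")  # deployed via `cdk deploy`
app.synth()
```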
Posted 1 week ago
10.0 years
0 Lacs
Delhi, India
On-site
Company Size: Mid-Sized
Experience Required: 10 - 15 years
Working Days: 5 days/week
Office Location: Delhi
Role & Responsibilities
• Lead and mentor a team of data engineers, ensuring high performance and career growth.
• Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
• Drive the development and implementation of data governance frameworks and best practices.
• Work closely with cross-functional teams to define and execute a data roadmap.
• Optimize data processing workflows for performance and cost efficiency.
• Ensure data security, compliance, and quality across all data platforms.
• Foster a culture of innovation and technical excellence within the data team.
Ideal Candidate
• 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
• Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, GoLang, JavaScript, HTML, and CSS.
• Proficiency in SQL, Python, and Scala for data processing and analytics.
• Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
• Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
• Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
• Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
• Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
• Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
• Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
• Proven ability to drive technical strategy and align it with business objectives.
• Strong leadership, communication, and stakeholder management skills.
Preferred Qualifications
• Experience in machine learning infrastructure or MLOps is a plus.
• Exposure to real-time data processing and analytics.
• Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
• Prior experience in a SaaS or high-growth tech company.
Perks, Benefits and Work Culture
Testimonial from a designer: 'One of the things I love about the design team at Wingify is the fact that every designer has a style which is unique to them. The second best thing is non-compliance to pre-existing rules for new products. So I just don't follow guidelines, I help create them.'
Skills: infrastructure, SOC2, Ansible, data governance, Redshift, GDPR, JavaScript, Cassandra, design, Spring Boot, Jenkins, Docker, MongoDB, Java, TiDB, ELK, Python, PHP, AWS, Snowflake, LLD, Chef, BigQuery, GCP, GoLang, HTML, Kafka, Grafana, Kubernetes, Scala, CSS, Hadoop, Azure, Redis, SQL, data processing, Spark, HLD, Node.js, Google Guice, compliance
Posted 1 week ago
6.0 - 8.0 years
0 - 0 Lacs
Bangalore, Noida, Chennai
Remote
Sr IT Data Analyst
We are currently seeking a Sr IT Data Analyst to perform data analysis for a data warehouse/operational data store, data marts, and other data stores in support of the Optum business. The new hire will define and maintain business intelligence/data warehouse methodologies, standards, and industry best practices. You will work with the Development and QA teams to develop data delivery/processing solutions and to create a Data Dictionary with full descriptions of data elements and their usage.
Responsibilities Include:
• Gather business requirements for analytical applications in an iterative/agile development model, partnering with Business and IT stakeholders
• Create source-to-target mappings based on requirements
• Create rules definitions, data profiling and transformation logic
• Gather and prepare analysis based on requirements from internal and external sources to evaluate and demonstrate program effectiveness, efficiency, and problem solving
• Support Data Governance activities and be responsible for data integrity
• Develop scalable reporting processes and query data sources to conduct ad hoc analyses/detailed data profiling
• Research complex functional data/analytical issues
• Assume responsibility for data integrity and data quality among various internal groups and/or between internal and external sources
• Provide source system analysis and perform gap analysis between source and target systems
Requirements:
• 5+ years of healthcare business and data analysis experience
• Proficient in SQL; understands data modeling and storage platforms like Snowflake
• Must have an aptitude for learning new data flows quickly and participate in data quality and automation discussions
• Comfortable working as an SME educating data consumers on data profiles and issues
• Must be able to take end-to-end responsibility for quickly solving data issues in a production setting
• Knowledge of data platforms, the Data-as-a-Service model and DataOps practices
Preferred Qualifications:
• Working knowledge of Kafka, Databricks, GitHub, Airflow, Azure (highly preferred)
• Healthcare industry claims and eligibility experience
• Experience with Python scripts
• Knowledge of AI models
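For illustration, a minimal sketch of the kind of ad hoc data profiling this role performs, using pandas; the claims-feed column names and sample rows are hypothetical:

```python
# Minimal data-profiling sketch for a source-to-target mapping exercise:
# null rates, distinct counts, and dtypes per column. Column names are
# hypothetical claims-feed fields.
import pandas as pd

source = pd.DataFrame({
    "claim_id": ["C1", "C2", "C3", None],
    "member_id": ["M1", "M1", "M2", "M3"],
    "paid_amount": [120.0, None, 87.5, 42.0],
})

profile = pd.DataFrame({
    "null_rate": source.isna().mean(),        # fraction of missing values
    "distinct_values": source.nunique(),      # cardinality per column
    "dtype": source.dtypes.astype(str),
})
print(profile)  # feeds the data dictionary / data-quality discussion
```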
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Java Full Stack Developer
Exp: 5+ Years
Mandatory Skills: Spring Boot for backend development and proficiency in ReactJS for front-end development
Required Skills
• Backend: Java, Spring Boot, Microservices, REST APIs, JPA/Hibernate
• Frontend: ReactJS, JavaScript, TypeScript, Redux
• Database: PostgreSQL, MySQL, MongoDB
• Cloud & DevOps: Docker, Kubernetes, CI/CD, GitHub Actions or Jenkins
• Messaging & Caching: Kafka, Redis
• Agile Practices: Jira, Confluence, Scrum
Salary: Max ₹20,00,000 per annum (20 LPA)
We are looking for a mid-level full stack developer with a strong backend focus to join our team. The ideal candidate should have hands-on experience in Spring Boot for backend development and be proficient in ReactJS for front-end development. The candidate will be responsible for developing, enhancing, and maintaining enterprise applications while working in an Agile environment.
Key Responsibilities
Backend Development:
• Design, develop, and maintain RESTful APIs using Spring Boot and Java.
• Implement microservices architecture and ensure high-performance applications.
• Work with relational and NoSQL databases, optimizing queries and performance.
• Integrate with third-party APIs and messaging queues (Kafka, RabbitMQ).
Frontend Development:
• Build and maintain user interfaces using ReactJS and modern UI frameworks.
• Ensure seamless API integration between front-end and back-end systems.
• Implement reusable components and optimize front-end performance.
DevOps & Deployment:
• Work with Docker and Kubernetes for application deployment.
• Ensure CI/CD pipeline integration and automation.
Collaboration & Agile Process:
• Work closely with onshore and offshore teams in a POD-based delivery model.
• Participate in daily stand-ups, sprint planning, and retrospectives.
• Write clean, maintainable, and well-documented code following best practices.
Preferred Qualifications
• Prior experience working on Albertsons projects is a huge plus.
• Familiarity with Google Cloud Platform (GCP) or any cloud platform.
• Exposure to monitoring tools like Prometheus and Grafana.
• Strong problem-solving skills and ability to work independently.
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: Software Engineer - Backend (Python)
Experience: 7+ Years
Location: Hyderabad
About the Role:
Our team is responsible for building the backend components of the GenAI Platform. The platform offers:
• Safe, compliant and cost-efficient access to LLMs, including open-source and commercial ones, adhering to Experian standards and policies
• Reusable tools, frameworks and coding patterns for the various functions involved in fine-tuning an LLM or developing a RAG-based application
What you'll do here
• Design and build backend components of our GenAI platform on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in the on-call rotation with the rest of the team to handle production incidents.
What you'll need to succeed
Must Have Skills
• At least 7+ years of professional backend web development experience with Python.
• Experience with AI and RAG.
• Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
• Experience with web development frameworks such as Flask, Django or FastAPI.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.
Nice To Have Skills
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with unit and functional testing frameworks.
• Experience with various Python packaging options such as Wheel, PEX or Conda.
• Experience with metaprogramming techniques in Python.
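As a rough sketch of the async Python backend work this role describes, here is a minimal FastAPI endpoint that awaits a stubbed LLM call; the route, model name, and helper are hypothetical, not the actual platform API:

```python
# Minimal sketch: an AsyncIO-based FastAPI endpoint fronting a (stubbed) LLM
# call. Route and model names are hypothetical.
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CompletionRequest(BaseModel):
    prompt: str
    model: str = "example-llm"  # hypothetical default model id

async def call_llm(prompt: str, model: str) -> str:
    await asyncio.sleep(0.05)  # stand-in for a real async client call
    return f"[{model}] response to: {prompt}"

@app.post("/v1/completions")
async def create_completion(req: CompletionRequest) -> dict:
    text = await call_llm(req.prompt, req.model)
    return {"model": req.model, "text": text}

# Run locally with: uvicorn app:app --reload
```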
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Title: Software Engineer - Backend (Python)
About The Role
Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental blocks for feature engineering, feature serving, model deployment and model inference in both batch and online modes.
What You'll Do Here
• Design and build backend components of our MLOps platform on AWS.
• Collaborate with geographically distributed cross-functional teams.
• Participate in the on-call rotation with the rest of the team to handle production incidents.
What you'll need to succeed
Must Have Skills
• Experience with web development frameworks such as Flask, Django or FastAPI.
• Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
• Experience with concurrent programming designs such as AsyncIO.
• Experience with unit and functional testing frameworks.
• Experience with any of the public cloud platforms like AWS, Azure, GCP, preferably AWS.
• Experience with CI/CD practices, tools, and frameworks.
Nice To Have Skills
• Experience with Apache Kafka and developing Kafka client applications in Python.
• Experience with MLOps platforms such as AWS SageMaker, Kubeflow or MLflow.
• Experience with big data processing frameworks, preferably Apache Spark.
• Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
• Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
• Experience with various Python packaging options such as Wheel, PEX or Conda.
• Experience with metaprogramming techniques in Python.
Primary Skills
• Python development (Flask, Django or FastAPI)
• WSGI & ASGI web servers (Gunicorn, Uvicorn, etc.)
• AWS
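For illustration, a minimal sketch of the ASGI serving pattern named above: a FastAPI app run under Gunicorn with Uvicorn worker processes. The endpoints and scoring logic are hypothetical:

```python
# Minimal sketch of an ASGI app served by Gunicorn + Uvicorn workers.
# Endpoints are hypothetical stand-ins for feature lookup / model inference.
from fastapi import FastAPI

app = FastAPI()

@app.get("/healthz")
async def healthz() -> dict:
    return {"status": "ok"}

@app.post("/v1/predict")
async def predict(features: dict) -> dict:
    # Stand-in for feature-store lookup + model inference.
    score = sum(v for v in features.values() if isinstance(v, (int, float)))
    return {"score": score}

# ASGI under Gunicorn with Uvicorn workers:
#   gunicorn -w 4 -k uvicorn.workers.UvicornWorker app:app
# A plain WSGI app (e.g., Flask) would instead run as: gunicorn -w 4 wsgi:app
```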
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
💼 Job Title: Kafka Developer
👨‍💻 Job Type: Full-time
📍 Location: Pune
💼 Work regime: Hybrid
🔥 Keywords: Kafka, Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry
Position Overview:
We are looking for a Kafka Developer to design and implement real-time data ingestion pipelines using Apache Kafka. The role involves integrating with upstream flow record sources, transforming and validating data, and streaming it into a centralized data lake for analytics and operational intelligence. (A minimal PySpark sketch of such a pipeline follows this listing.)
Key Responsibilities:
• Develop Kafka producers to ingest flow records from upstream systems such as flow record exporters (e.g., IPFIX-compatible probes).
• Build Kafka consumers to stream data into Spark Structured Streaming jobs and downstream data lakes.
• Define and manage Kafka topic schemas using Avro and Schema Registry for schema evolution.
• Implement message serialization, transformation, enrichment, and validation logic within the streaming pipeline.
• Ensure exactly-once processing, checkpointing, and fault tolerance in streaming jobs.
• Integrate with downstream systems such as HDFS or Parquet-based data lakes, ensuring compatibility with ingestion standards.
• Collaborate with Kafka administrators to align topic configurations, retention policies, and security protocols.
• Participate in code reviews, unit testing, and performance tuning to ensure high-quality deliverables.
• Document pipeline architecture, data flow logic, and operational procedures for handover and support.
Required Skills & Qualifications:
• Proven experience in developing Kafka producers and consumers for real-time data ingestion pipelines.
• Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
• Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
• Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
• Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
• Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
• Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
• Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
• Knowledge of Kafka security protocols including TLS/SSL, Kerberos, and access control via Apache Ranger.
• Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
• Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).
A little about us:
Innova Solutions is a diverse and award-winning global technology services partner. We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field.
• Founded in 1998, headquartered in Atlanta (Duluth), Georgia.
• Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B.
• Delivers strategic technology and business transformation solutions globally.
• Operates through global delivery centers across North America, Asia, and Europe.
• Provides services for data center migration and workload development for cloud service providers.
Awardee of prestigious recognitions including:
• Women’s Choice Awards – Best Companies to Work for Women & Millennials, 2024
• Forbes – America’s Best Temporary Staffing and Best Professional Recruiting Firms, 2023
• American Best in Business, Globee Awards – Healthcare Vulnerability Technology Solutions, 2023
• Global Health & Pharma – Best Full Service Workforce Lifecycle Management Enterprise, 2023
• Received 3 SBU Leadership in Business Awards
• Stevie International Business Awards – Denials Remediation Healthcare Technology Solutions, 2023
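As referenced in the position overview, a minimal sketch of the pipeline this listing describes — Kafka into Spark Structured Streaming with checkpointing, landing Parquet in a data lake. The topic, schema, and paths are hypothetical (real IPFIX/NetFlow records carry many more fields), and submitting the job requires the spark-sql-kafka connector package:

```python
# Minimal sketch: Kafka -> Spark Structured Streaming -> Parquet data lake,
# with a checkpoint location for fault-tolerant restarts.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import LongType, StringType, StructField, StructType

spark = SparkSession.builder.appName("flow-ingest").getOrCreate()

# Hypothetical, heavily simplified flow-record schema.
flow_schema = StructType([
    StructField("src_ip", StringType()),
    StructField("dst_ip", StringType()),
    StructField("bytes", LongType()),
])

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "flow-records")          # hypothetical topic
       .load())

flows = (raw.selectExpr("CAST(value AS STRING) AS json")
         .select(from_json(col("json"), flow_schema).alias("f"))
         .select("f.*"))

query = (flows.writeStream
         .format("parquet")
         .option("path", "/datalake/flows")
         .option("checkpointLocation", "/checkpoints/flows")  # replay-safe restarts
         .outputMode("append")
         .start())
query.awaitTermination()
```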
Posted 1 week ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role: Senior Software Engineer
Experience Required: 4-6 years
Skills: Java, Spring Boot
Location: Sector 16, Noida
Work Mode: 5 days (Work from Office)
Interview Mode: Face-to-face
Notice Period: Immediate/Serving only
About Times Internet
At Times Internet, we create premium digital products that simplify and enhance the lives of millions. As India’s largest digital products company, we have a significant presence across a wide range of categories, including News, Sports, Fintech, and Enterprise solutions. Our portfolio features market-leading and iconic brands such as TOI, ET, NBT, Cricbuzz, Times Prime, Times Card, Indiatimes, Whatshot, Abound, Willow TV, Techgig and Times Mobile, among many more. Each of these products is crafted to enrich your experiences and bring you closer to your interests and aspirations.
As an equal opportunity employer, Times Internet strongly promotes inclusivity and diversity. We are proud to have achieved overall gender pay parity in 2018, verified by an independent audit conducted by Aon Hewitt. We are driven by the excitement of new possibilities and are committed to bringing innovative products, ideas, and technologies to help people make the most of every day. Join us and take us to the next level!
About the Business Unit: Architecture and Group Initiatives (AGI)
AGI owns the world-class enterprise CMS solutions that empower all digital newsrooms within Times Internet and beyond. The solutions include state-of-the-art authoring tools with AI-enabled generative and assistive features, plus analytics and reporting tools and services that easily scale to millions of requests per minute. This unique scaling need and the engineering of state-of-the-art products make AGI a place of constant evolution and innovation across product, design and engineering in the ever-growing digital and print media industry landscape.
About the role:
We seek a highly skilled and experienced Java Senior Software Engineer to join our dynamic team and play a key role in designing, developing, and maintaining our Internet-based applications. As a Senior Engineer, you will actively participate in designing and implementing projects with high technical complexity, scalability, and performance implications. You will collaborate with cross-functional teams to deliver high-quality software solutions that meet customer needs and business objectives.
Roles and Responsibilities
• Design, develop, and test large-scale, high-performance web applications and frameworks.
• Create reusable frameworks through hands-on development and unit testing.
• Write clean, efficient, and maintainable code following best practices and coding standards.
• Troubleshoot and debug issues, and implement solutions on time.
• Participate in architectural discussions and contribute to the overall technical roadmap.
• Stay updated on emerging technologies and trends in Java development, and make recommendations for adoption where appropriate.
Skills Required:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 4+ years of hands-on experience in Java development, with a strong understanding of core Java concepts and object-oriented programming principles.
• Proficiency in the Spring framework, including Spring Boot, Spring MVC, and Spring Data.
• Experience with Kafka for building distributed, real-time streaming applications.
• Strong understanding of relational databases such as MySQL, including schema design and optimization. Proficiency in writing SQL queries is a must.
• Experience with NoSQL databases such as MongoDB and Redis.
• Experience with microservices architecture and containerization technologies such as Docker and Kubernetes.
• Excellent problem-solving skills and attention to detail.
• Knowledge of software development lifecycle methodologies such as Agile or Scrum.
• Strong communication and collaboration skills.
• Ability to work effectively in a fast-paced environment and manage multiple priorities.
• Self-motivation and the ability to work under minimal supervision.
Posted 1 week ago
1.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
About Oracle Analytics & Big Data Service:
Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights.
Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service work scope encompasses not just good integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching the Linux kernels to address a security vulnerability). Developing systems for monitoring, gathering telemetry on the service’s runtime characteristics, and acting on that telemetry data is also part of the charter.
We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
• Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
• US passport holders (required by the position to access US Gov regions).
• Expertise in coding in Java and Python, with an emphasis on tuning/optimization.
• Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
• Experience with open-source software in the Big Data ecosystem.
• Experience at an organization with an operational/dev-ops culture.
• Solid understanding of networking, storage, and security components related to cloud infrastructure.
• Solid foundation in data structures, algorithms, and software design with strong analytical and debugging skills.
Preferred Qualifications:
• Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink and other big data technologies.
• Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
• In-depth understanding of Java and JVM mechanics.
• Good problem-solving skills and the ability to work in a fast-paced, agile environment.
Responsibilities
Key Responsibilities:
• Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
• Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings.
• Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
• Become an active member of the Apache open source community when working on open source components.
• Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications
Career Level - IC2
About Us
As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
0 years
0 Lacs
Delhi, India
On-site
Description
Skills Required:
• Bash/Shell scripting
• GitHub
• ETL
• Apache Spark
• Data validation strategies (see the sketch after this listing)
• Docker & Kubernetes (for containerized deployments)
• Monitoring tools: Prometheus, Grafana
• Strong in Python
• Grafana/Prometheus, PowerBI/Tableau (important)
Requirements
• Extensive hands-on experience implementing data migration and data processing
• Strong experience implementing ETL/ELT processes and building data pipelines, including workflow management, job scheduling and monitoring
• Experience with building and implementing Big Data platforms on-prem or on cloud, covering ingestion (batch and real-time), processing (batch and real-time), polyglot storage, and data access
• Good understanding of Data Warehouse, Data Governance, Data Security, Data Compliance, Data Quality, Metadata Management, Master Data Management, and Data Catalog
• Proven understanding and demonstrable implementation experience of big data platform technologies on the cloud (AWS and Azure), including surrounding services like IAM, SSO, cluster monitoring, Log Analytics, etc.
• Experience with source code management tools such as TFS or Git
• Knowledge of DevOps with CI/CD pipeline setup and automation
• Building and integrating systems to meet business needs
• Defining features, phases, and solution requirements, and providing specifications accordingly
• Experience building stream-processing systems using solutions such as Azure Event Hub, Kafka, etc.
• Strong experience with data modeling and schema design
• Strong knowledge of SQL and NoSQL databases and/or BI/DW
• Excellent interpersonal and teamwork skills
• Experience with leading and mentoring other team members
• Good knowledge of Agile Scrum
• Good communication skills
• Strong analytical, logical and quantitative ability
• Takes ownership of a task; values accountability and responsibility
• Quick learner
Job responsibilities
ETL/ELT processes, data pipelines, Big Data platforms (on-prem/cloud), data ingestion (batch/real-time), data processing, polyglot storage, data governance, cloud (AWS/Azure), IAM, SSO, cluster monitoring, Log Analytics, source code management (Git/TFS), DevOps, CI/CD automation, stream processing (Kafka, Azure Event Hub), data modeling, schema design, SQL/NoSQL, BI/DW, Agile Scrum, team leadership, communication, analytical skills, ownership, quick learner
What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market.
In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
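For illustration, a minimal sketch of one data validation strategy named in the skills list above: a rule-based gate that accepts or rejects an ETL batch before load. The rules, field names, and sample batch are hypothetical:

```python
# Minimal pre-load validation gate for an ETL batch: rule checks that either
# pass the batch or reject it with reasons. Rules and fields are hypothetical.
import pandas as pd

def validate(batch: pd.DataFrame) -> list[str]:
    errors = []
    if batch["order_id"].isna().any():
        errors.append("null order_id values")
    if batch["order_id"].duplicated().any():
        errors.append("duplicate order_id values")
    if (batch["amount"] < 0).any():
        errors.append("negative amounts")
    return errors

batch = pd.DataFrame({"order_id": [1, 2, 2], "amount": [10.0, -5.0, 7.5]})
problems = validate(batch)
if problems:
    print("batch rejected:", problems)   # route to quarantine / alerting
else:
    print("batch accepted for load")
```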
Posted 1 week ago
14.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Technology Lead Analyst is a senior-level position responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to lead applications systems analysis and programming activities.
Responsibilities:
• Partner with multiple management teams to ensure appropriate integration of functions to meet goals, and identify and define necessary system enhancements to deploy new products and process improvements
• Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
• Provide expertise in area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
• Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
• Develop comprehensive knowledge of how areas of business, such as architecture and infrastructure, integrate to accomplish business goals
• Provide in-depth analysis with interpretive thinking to define issues and develop innovative solutions
• Serve as advisor or coach to mid-level developers and analysts, allocating work as necessary
• Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Qualifications:
• 14+ years of relevant experience in an Apps Development or systems analysis role
• Extensive experience in systems analysis and programming of software applications
• Experience in managing and implementing successful projects
• Subject Matter Expert (SME) in at least one area of Applications Development
• Ability to adjust priorities quickly as circumstances dictate
• Demonstrated leadership and project management skills
• Consistently demonstrates clear and concise written and verbal communication
Education:
• Bachelor’s degree/University degree or equivalent experience
• Master’s degree preferred
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Knowledge/Experience:
• 14+ years of industry experience
• Experience with Agile development and scrums
• Strong knowledge of Core Java and Spring (Core, Boot, etc.); expertise in Web API implementations (web services, RESTful services, etc.)
• Good understanding of Linux or Unix operating systems
• Strong knowledge of build (Ant/Maven), continuous integration (Jenkins), code quality analysis (SonarQube), and unit and integration testing (JUnit)
• Exposure to an SCM tool like Bitbucket
• Strong knowledge of Docker/Kubernetes/OpenShift
• Strong knowledge of distributed messaging platforms like Apache Kafka and RabbitMQ
• Good understanding of NoSQL databases like MongoDB
Skills:
• Hands-on coding experience in Core Java and Spring
• Hands-on coding experience in Python is a plus
• Strong analysis and design skills, including OO design patterns
• Solid understanding of SOA concepts and RESTful API design
• Ability to produce professional, technically sound, and visually appealing presentations and architecture designs
• Experience creating high-level technical/process documentation and presentations for audiences at various levels
• Experience writing/editing technical, business, and process documentation in an Information Technology/Engineering environment
• Must be able to understand requirements and convert them into technical design and code
• Knowledge of source code control systems, unit test frameworks, build and deployment tools
• Experienced with large-scale program rollouts; able to create and maintain detailed WBS and project plans
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi.
View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organization, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.
VOIS India
In 2009, VOIS started operating in India and now has established global delivery centers in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.
Role Purpose
Mode: Hybrid
Location: Pune
Experience: 5 to 8 years
Core Competencies, Knowledge And Experience
• 5-7 years’ experience in managing large data sets, simulation/optimization and distributed computing tools.
• Excellent communication and presentation skills, with a track record of engaging with business project leads.
Role Purpose
• Primary responsibility is to define the data lifecycle, including data models and data sources for the analytics platform, gathering data from the business and cleaning it in order to provide ready-to-work inputs for Data Scientists.
• Apply strong expertise in automating end-to-end data science and big data pipelines (collect, ingest, store, transform and optimize at scale).
• The incumbent will work on the assigned projects and their stakeholders alongside Data Scientists to understand the business challenges they face. The work involves working with large data sets, simulation/optimization and distributed computing tools.
• The candidate works with the assigned business stakeholder(s) to agree scope, deliverables, process and expected outcomes for the products and services developed.
Must Have Technical / Professional Qualifications
• Experience working with large data sets, simulation/optimization and distributed computing tools
• Experience transforming data with Apache Spark for data science activities
• Experience working with distributed storage on cloud (AWS/GCP) or HDFS
• Experience building data pipelines with Airflow (a minimal sketch follows this listing)
• Experience ingesting data from different sources using Kafka/Sqoop/Flume/NiFi
• Experience solving simple to complex big data platform/framework issues
• Experience building real-time analytics systems with Apache Spark, Flink and Kafka
• Experience in Scala, Python, Java and R
• Experience working with NoSQL databases (Cassandra, MongoDB, HBase, Redis)
Key Accountabilities And Decision Ownership
• Understand the data science problems and design and schedule end-to-end pipelines
• For a given problem, identify the right big data technologies to solve it in an optimized way
• Automate the data science pipelines, deploy ML algorithms and track their performance
• Build customer 360 views and a feature store for different machine learning problems
• Build the data model for a machine learning feature store on high-velocity, flexible-schema databases
VOIS Equal Opportunity Employer Commitment
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do.
By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills!
Apply now, and we’ll be in touch!
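As referenced above, a minimal sketch of an Airflow pipeline of the kind this role automates; the DAG id and task bodies are hypothetical stubs, not Vodafone's actual pipeline:

```python
# Minimal Airflow 2.x sketch: ingest -> transform -> refresh feature store.
# DAG id and task bodies are hypothetical stand-ins.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(**_):
    print("pull raw events from Kafka / object storage")

def transform(**_):
    print("run a Spark job to build features")

def refresh_feature_store(**_):
    print("write features to the online store")

with DAG(
    dag_id="feature_store_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_refresh = PythonOperator(task_id="refresh", python_callable=refresh_feature_store)

    t_ingest >> t_transform >> t_refresh  # task dependency chain
```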
Posted 1 week ago