
8236 Hadoop Jobs - Page 3

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description
Want to participate in building the next generation of an online payment system that supports multiple countries and payment methods? Amazon Payment Services (APS) is a leading payment service provider in the MENA region, with operations spanning eight countries and online payment services offered to thousands of merchants. The APS team is building a robust payment solution to drive the best payment experience on and off Amazon. Over 100 million customers send tens of billions of dollars through our systems annually. We build systems that process payments at unprecedented scale with accuracy, speed, and mission-critical availability. We innovate to improve the customer experience, with support for currency of choice, in-store payments, pay on delivery, credit and debit card payments, seller disbursements, and gift cards. Many exciting and challenging ideas are in the works.

Key job responsibilities
Data Engineers focus on managing data requests, maintaining operational excellence, and enhancing core infrastructure. You will collaborate closely with both technical and non-technical teams to design and execute roadmaps.

Basic Qualifications
1+ years of data engineering experience
Experience with SQL
Experience with data modeling, warehousing, and building ETL pipelines
Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
Experience with big data technologies such as Hadoop, Hive, Spark, EMR
Experience with any ETL tool, such as Informatica, ODI, SSIS, BODI, or DataStage

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A3049756
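The posting above centers on SQL and building ETL pipelines. As a rough, self-contained sketch of the extract/transform/load pattern (toy inlined data and made-up table and column names, using only Python's standard library and an in-memory SQLite database):

```python
import csv, io, sqlite3

# Hypothetical payments extract: in a real pipeline this would come from an
# upstream system; it is inlined here purely for illustration.
RAW = """order_id,country,amount
1001,AE,250.00
1002,SA,99.50
1003,AE,10.00
"""

def extract(text):
    # Extract: parse the raw feed into dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: normalize types and drop zero/negative amounts.
    out = []
    for r in rows:
        amount = float(r["amount"])
        if amount > 0:
            out.append((int(r["order_id"]), r["country"], amount))
    return out

def load(rows, conn):
    # Load: write the cleaned rows into a warehouse-style table.
    conn.execute("CREATE TABLE IF NOT EXISTS payments "
                 "(order_id INTEGER, country TEXT, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total_by_country = dict(conn.execute(
    "SELECT country, SUM(amount) FROM payments GROUP BY country"))
print(total_by_country)  # {'AE': 260.0, 'SA': 99.5}
```

Real pipelines would swap the inlined string for S3/HDFS sources and SQLite for Redshift or Hive, but the three-stage shape is the same.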

Posted 1 day ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Summary
Position Summary: AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. The offering portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions
Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise, and providing as-a-service offerings for continuous insights and improvements

Google Cloud Platform - Data Engineer
Cloud is shifting business models at our clients and transforming the way technology enables business. As our clients embark on this transformational journey to the cloud, they are looking for trusted partners who can help them navigate it. Our clients' journeys span cloud strategy, implementation, migration of legacy applications, supporting operations of a cloud ecosystem, and everything in between. Deloitte's Cloud Delivery Center supports our client project teams in this journey by delivering the new solutions by which IT services are obtained, used, and managed. You will work with other technologists to deliver cutting-edge solutions using Google Cloud Platform (GCP) services, programming, and automation tools for some of our Fortune 1000 clients.
You will have the opportunity to contribute to work that may involve building new cloud solutions, migrating an application to co-exist in a hybrid cloud, deploying a global cloud application across multiple countries, or supporting a set of cloud managed services. Our teams of technologists have a diverse range of skills, and we are always looking for new ways to innovate and help our clients succeed. You will have an opportunity to leverage the skills you already have, try new technologies, and develop skills that will improve your brand and career as a well-rounded, cutting-edge technologist.

Work you'll do
As a GCP Data Engineer you will have multiple responsibilities depending on the project type. As a Cloud Data Engineer, you will guide customers on how to ingest, store, process, analyze, and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, and with customers to design large-scale data processing systems, develop data pipelines optimized for scaling, and troubleshoot potential platform issues. In this role you are the Data Engineer working with Deloitte's most strategic cloud customers. Together with the team, you will support customer implementations of Google Cloud products through architecture guidance, best practices, data migration, capacity planning, implementation, troubleshooting, monitoring, and much more. The key responsibilities may involve some or all of the areas listed below:
Act as a trusted technical advisor to customers and solve complex Big Data challenges.
Create and deliver best-practice recommendations, tutorials, blog articles, sample code, and technical presentations, adapting to different levels of key business and technical stakeholders.
Identify new tools and processes to improve the cloud platform and automate processes.

Qualifications
Technical Requirements
BA/BS degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.
Experience in Cloud SQL and Cloud Bigtable
Experience in Dataflow, BigQuery, Dataproc, Datalab, Dataprep, Pub/Sub, and Genomics
Experience in Google Transfer Appliance, Cloud Storage Transfer Service, and BigQuery Data Transfer
Experience with data processing software (such as Hadoop, Kafka, Spark, Pig, Hive) and with data processing algorithms (MapReduce, Flume)
Experience working with technical customers
Experience writing software in one or more languages such as Java, C++, Python, Go, and/or JavaScript

Consulting Requirements
3-6 years of relevant consulting, industry, or technology experience
Strong problem-solving and troubleshooting skills
Strong communicator
Willingness to travel as project requirements demand

Preferred Qualifications
Experience working with data warehouses, including data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools and environments
Experience in technical consulting
Experience architecting, developing software, or building internet-scale, production-grade Big Data solutions in virtualized environments such as Google Cloud Platform (mandatory) and AWS/Azure (good to have)
Experience working with big data, information retrieval, data mining, or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (such as NoSQL, Kafka, NPL, MongoDB, SparkML, TensorFlow)
Working knowledge of ITIL and/or agile methodologies

Our purpose
Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Professional development
At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India.

Benefits to help you thrive
At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment, and/or other criteria. Learn more about what working at Deloitte can mean for you.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Requisition code: 300075
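The Deloitte posting above asks for experience with data-processing algorithms such as MapReduce. As a rough illustration of the map/shuffle/reduce phases (toy data, pure Python, no distributed runtime), a word count looks like:

```python
from collections import defaultdict
from itertools import chain

# Toy corpus standing in for lines from two input splits of a distributed job.
SPLITS = [
    ["big data on google cloud", "data pipelines at scale"],
    ["cloud data warehouses", "big queries on big data"],
]

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in a line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Shuffle: group intermediate pairs by key, as the framework would
    # before handing each key's values to a reducer.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

pairs = chain.from_iterable(map_phase(line) for split in SPLITS for line in split)
counts = reduce_phase(shuffle(pairs))
print(counts["data"], counts["big"], counts["cloud"])  # 4 3 2
```

Hadoop or Spark distribute the same three phases across machines; the per-record logic is what the mapper and reducer above capture.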

Posted 1 day ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Description Want to participate in building the next generation of online payment system that supports multiple countries and payment methods? Amazon Payment Services (APS) is a leading payment service provider in MENA region with operations spanning across 8 countries and offers online payment services to thousands of merchants. APS team is building robust payment solution for driving the best payment experience on & off Amazon. Over 100 million customers send tens of billions of dollars moving at light-speed through our systems annually. We build systems that process payments at an unprecedented scale with accuracy, speed and mission-critical availability. We innovate to improve customer experience, with support for currency of choice, in-store payments, pay on delivery, credit and debit card payments, seller disbursements and gift cards. Many new exciting & challenging ideas are in the works. Key job responsibilities Data Engineers focus on managing data requests, maintaining operational excellence, and enhancing core infrastructure. You will be collaborating closely with both technical and non-technical teams to design and execute roadmaps Basic Qualifications 1+ years of data engineering experience Experience with SQL Experience with data modeling, warehousing and building ETL pipelines Experience with one or more query language (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala) Experience with one or more scripting language (e.g., Python, KornShell) Preferred Qualifications Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Experience with any ETL tool like, Informatica, ODI, SSIS, BODI, Datastage, etc. Our inclusive culture empowers Amazonians to deliver the best results for our customers. 
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A3049753

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. 
Process Overview
The Batch Support Services team is responsible for all Retail, Preferred, and Global Wealth & Investment Management (GWIM) business-aligned infrastructure. It provides stability and resiliency in a standardized, production-like, end-to-end batch testing environment to enhance speed-to-market capabilities by:
Balancing MIPS batch workload in the test environments
Leveraging automated scheduling, notifications, and dashboards
Centralizing batch execution by domain, incorporating common tools, and ensuring automated hand-offs of critical processes
Evaluating and building out critical batch environments encompassing mainframe and mid-range batch applications, reducing manual intervention

Job Description
As a member of the Batch Support Services team, you will be responsible for supporting Midrange (DataStage & Autosys) and Hadoop batch, scheduling support and maintenance across multiple test environments, and batch execution. You will use support tools to navigate logs during problem analysis, adhering to standards and procedures for technical and change implementation of scheduling support, and will identify and implement opportunities for process improvements, potential risks, and increased efficiencies as part of batch optimization. You should be able to work in cross-functional and multi-location teams.

Responsibilities:
Analysis and support of batch application testing of Mid-Range (DataStage & Autosys, Hadoop) batch components in integrated and independent test environments.
Understand functionalities of change and problem requests.
Analyze batch issues and provide resolutions for midrange applications.
Analyze impact in the existing system and provide estimations.
Work with multi-platform batch application teams to optimize testing capabilities and production deployments.
Analyze and develop batch components for midrange using IBM InfoSphere DataStage, Autosys, PySpark, Hadoop, and UNIX shell scripting.
Optimize multi-platform batch applications (midrange and mainframe) using testing capabilities.
Support test batch execution for midrange and mainframe applications as part of integrated and independent releases.
Understand functionalities of change and problem requests, and batch optimization based on system understanding.
Write UNIX shell scripts for functions such as maintenance, backup, and server health checks.
Perform application support activities using Endevor and Subversion (SVN).
Coordinate with required stakeholders (Release Management, Data Management, and Configuration Management) to support the project.

Requirements
Education: B.E./B.Tech/M.E./M.Tech/BSc/MSc/BCA/MCA (IT/CS specialization preferred)
Certifications, if any: NA
Experience range: 2-4 years
Foundational skills: DataStage, Autosys, Hadoop, PySpark, UNIX shell scripting
Desired skills: Mainframe batch experience with CA7, JCL, TSO, NDM, COBOL, DB2, IMS
Work timings: Rotational shift (any shift between 6:30 AM and 10:30 PM IST). Will be required to work in shifts for coverage during offshore hours, including weekends.
Job location: Chennai, Hyderabad, Mumbai, Gurugram, GIFT City
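The posting above revolves around dependency-driven batch scheduling: in Autosys terms, a job runs only when its predecessor conditions (e.g., success of an upstream job) are met. A toy sketch of that idea with the standard library's topological sorter, where every job name is hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical batch flow mimicking Autosys-style "condition: s(job)"
# success dependencies: each job maps to the set of jobs it waits on.
DEPENDS_ON = {
    "extract_gwim_feed": set(),
    "datastage_transform": {"extract_gwim_feed"},
    "hadoop_load": {"datastage_transform"},
    "health_check_report": {"hadoop_load"},
}

# static_order() yields jobs so that every job appears after its
# predecessors; a scheduler would dispatch them in this order.
order = list(TopologicalSorter(DEPENDS_ON).static_order())
print(order)
# ['extract_gwim_feed', 'datastage_transform', 'hadoop_load', 'health_check_report']
```

A real scheduler adds calendars, retries, and failure conditions on top, but the dependency resolution at the core is this topological ordering.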

Posted 1 day ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Senior Manager, Software Development Engineering

What You Will Do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, automating operations, monitoring system health, and responding to incidents to minimize downtime.

Roles & Responsibilities:
Provide technical leadership to enhance the culture of innovation, automation, and solving difficult scientific and business challenges. Technical leadership includes providing vision and direction to develop scalable, reliable solutions.
Provide leadership to select right-sized and appropriate tools and architectures based on requirements, data source formats, and current technologies.
Develop, refactor, research, and improve Weave cloud platform capabilities.
Understand business drivers and technical needs so our cloud services seamlessly, automatically, and securely provide the best service.
Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development.
Build strong partnerships with stakeholders.
Build data products and service processes which perform data transformation, metadata extraction, workload management, and error processing management to ensure high-quality data.
Provide clear documentation for delivered solutions and processes.
Collaborate with business partners to understand user stories and ensure the technical solution/build can deliver to those needs.
Work with multi-functional teams to design and document effective and efficient solutions.
Develop change management strategies and assist in their implementation.
Mentor junior data engineers on standard methodologies in the industry and in the Amgen data landscape.

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of Computer Science, IT, or related field experience

Must-Have Skills:
Superb communication and interpersonal skills, with the ability to work cross-functionally with multi-functional GTM, product, and engineering teams.
Minimum of 10+ years of overall Software Engineer or Cloud Architect experience
Minimum of 3+ years in an architecture role using public cloud solutions such as AWS
Experience with the AWS technology stack

Good-to-Have Skills:
Familiarity with big data technologies, AI platforms, and cloud-based data solutions
Ability to work effectively across matrixed organizations and lead collaboration between data and AI teams
Passion for technology and customer success, particularly in driving innovative AI and data solutions
Experience working with teams of data scientists, software engineers, and business experts to drive insights
Experience with AWS services such as EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway
Experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)
Good understanding of relevant data standards and industry trends
Ability to understand new business requirements and prioritize them for delivery
Experience working in the biopharma/life sciences industry
Proficiency in one of the coding languages (Python, Java, Scala)
Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.)
Experience with schema design and dimensional data modeling
Experience with software DevOps CI/CD tools, such as Git, Jenkins, Linux, and shell scripting
Hands-on experience using Databricks/Jupyter or a similar notebook environment
Experience working with GxP systems
Experience working in an agile environment (i.e., user stories, iterative development, etc.)
Experience working with test-driven development and software test automation
Experience working in a product environment
Good overall understanding of business, manufacturing, and laboratory systems common in the pharmaceutical industry, as well as the integration of these systems through applicable standards

Soft Skills:
Excellent analytical and troubleshooting skills.
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to handle multiple priorities successfully
Team-oriented, with a focus on achieving team goals

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
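The Amgen posting asks for schema design and dimensional data modeling experience. The core pattern is a star schema: a fact table keyed to dimension tables, queried with joins and aggregation. A minimal sketch in Python with an in-memory SQLite database (all table, column, and product names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Dimensions describe the "who/what/when"; the fact table holds measures.
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, year INTEGER);
CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, units INTEGER);

INSERT INTO dim_product VALUES (1, 'Drug A'), (2, 'Drug B');
INSERT INTO dim_date    VALUES (10, 2024), (11, 2025);
INSERT INTO fact_sales  VALUES (1, 10, 5), (1, 11, 7), (2, 11, 3);
""")

# A typical dimensional query: roll facts up by dimension attributes.
rows = conn.execute("""
    SELECT p.name, d.year, SUM(f.units)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d    ON d.date_id    = f.date_id
    GROUP BY p.name, d.year
    ORDER BY p.name, d.year
""").fetchall()
print(rows)  # [('Drug A', 2024, 5), ('Drug A', 2025, 7), ('Drug B', 2025, 3)]
```

On Redshift or BigQuery the dimensions would carry many more attributes (and slowly changing dimension logic), but the fact/dimension split and the join-then-aggregate query shape are the same.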

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. 
Process Overview*: Batch Support Services team is responsible for all Retail, Preferred and Global Wealth & Investment Management (GWIM) business aligned infrastructure and provides stability and resiliency in a standardized production like end-to-end batch testing environment to enhance the speed to market capabilities by stability and resiliency , by Balancing MIPS batch workload in the test environments ,Leveraging automated scheduling, notifications, and dashboards, Centralizing batch execution by Domain, incorporating common tools & ensuring automated hand-offs of critical processes, Evaluating and building out critical batch environments encompassing mainframe and mid-range batch applications reducing manual intervention Job Description As a member of Batch Support Services team, the person will be responsible for supporting Midrange (Datastage & Autosys), Hadoop batch, scheduling support and maintenance across multiple test environments and Batch Execution. Use support tools to navigate through logs in problem analysis and adhering to standards and procedures for technical and change implementation of scheduling support. Identify and implement opportunities for process improvements, potential risks, and increased efficiencies as part of Batch Optimization. Ability to work in cross functional and multi-location teams. Responsibilities: Analysis and support of batch application testing of Mid-Range (Datastage & Autosys, Hadoop) batch components in integrated and independent test environments. Understand functionalities of change and problem requests. Analyze the batch issues and provide the resolutions for midrange applications. Analysis of impact in the existing system and estimation. Work with multi-platform batch application teams to optimize testing capabilities and production deployments. 
Analyze, develop batch components for midrange using IIS Datastage - Datastage, Autosys, PySpark, Hadoop, UNIX Shell Scripting Optimize the multi-platform batch applications (midrange and mainframe) using testing capabilities. Support test batch execution for midrange and mainframe applications as part of integrated and independent releases. Understand functionalities of change and problem requests & batch optimization based on the system understanding. Write UNIX shell scripting, for various functions such as maintenance, backup, and server health checks. Perform application support activities using Endevor & Subversion (SVN). Co-ordinate with required stakeholders (Release Management, Data Management and Configuration Management) to support the project. Requirements*: Education* B.E./ B. Tech/M.E./M. Tech/BSC/MSC/BCA/MCA (prefer IT/CS specialization) Certifications If Any – NA Experience Range- 2 – 4 yrs Foundational Skills: Datastage, Autosys, Hadoop, PySpark UNIX Shell Scripting Desired Skills Mainframe Batch Experience with CA7, JCL, TSO, NDM, Cobol, JCL DB2, IMS Work Timings*: *Rotational Shift (6:30 AM to 10:30 PM any shift IST). Will be required to work in shifts for coverage during offshore hours including weekends. Job Location: Chennai, Hyderabad, Mumbai,Gurugram,Giftcity

Posted 1 day ago

Apply

10.0 years

0 Lacs

Delhi, India

On-site

Company Size: Mid-Sized
Experience Required: 10-15 years
Working Days: 5 days/week
Office Location: Delhi

Role & Responsibilities
Lead and mentor a team of data engineers, ensuring high performance and career growth.
Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
Drive the development and implementation of data governance frameworks and best practices.
Work closely with cross-functional teams to define and execute a data roadmap.
Optimize data processing workflows for performance and cost efficiency.
Ensure data security, compliance, and quality across all data platforms.
Foster a culture of innovation and technical excellence within the data team.

Ideal Candidate
10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
Expertise in backend development with programming languages such as Java, PHP, Python, Node.js, Go, JavaScript, HTML, and CSS.
Proficiency in SQL, Python, and Scala for data processing and analytics.
Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice.
Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery.
Deep knowledge of data governance, security, and compliance (GDPR, SOC 2, etc.).
Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
Proven ability to drive technical strategy and align it with business objectives.
Strong leadership, communication, and stakeholder management skills.

Preferred Qualifications
Experience in machine learning infrastructure or MLOps is a plus.
Exposure to real-time data processing and analytics.
Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
Prior experience in a SaaS or high-growth tech company.

Perks, Benefits and Work Culture
Testimonial from a designer: 'One of the things I love about the design team at Wingify is the fact that every designer has a style which is unique to them. The second best thing is non-compliance to pre-existing rules for new products. So I just don't follow guidelines, I help create them.'

Posted 1 day ago

Apply

1.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's work scope encompasses not only integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems that monitor the service, gather telemetry on its runtime characteristics, and act on that telemetry data is part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. A US passport is required, as the position needs access to US Government regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/DevOps culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service. Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings. Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Become an active member of the Apache open source community when working on open source components. Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
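The monitoring-and-telemetry charter described above can be illustrated with a minimal sketch: a threshold evaluator that turns telemetry samples into alerts. The metric names and thresholds here are hypothetical illustrations, not Oracle's actual service internals.

```python
from dataclasses import dataclass

# Hypothetical telemetry sample from a Big Data service node.
@dataclass
class Telemetry:
    node: str
    hdfs_disk_used_pct: float
    heap_used_pct: float

# Illustrative thresholds; a real service would load these from config.
THRESHOLDS = {"hdfs_disk_used_pct": 85.0, "heap_used_pct": 90.0}

def evaluate(sample: Telemetry) -> list[str]:
    """Return alert strings for any metric that crosses its threshold."""
    alerts = []
    for metric, limit in THRESHOLDS.items():
        value = getattr(sample, metric)
        if value >= limit:
            alerts.append(f"{sample.node}: {metric}={value:.1f} >= {limit:.1f}")
    return alerts
```

In a real service this evaluation would run continuously against streamed metrics and feed an automated remediation or paging system rather than returning strings.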
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's work scope encompasses not only integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems that monitor the service, gather telemetry on its runtime characteristics, and act on that telemetry data is part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. A US passport is required, as the position needs access to US Government regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/DevOps culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service. Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings. Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Become an active member of the Apache open source community when working on open source components. Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's work scope encompasses not only integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems that monitor the service, gather telemetry on its runtime characteristics, and act on that telemetry data is part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. A US passport is required, as the position needs access to US Government regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/DevOps culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service. Code, integrate, and operationalize open- and closed-source data ecosystem components for Oracle cloud service offerings. Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Become an active member of the Apache open source community when working on open source components. Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

2.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

At Citi we’re not just building technology, we’re building the future of banking. Encompassing a broad range of specialties, roles, and cultures, our teams are creating innovations used across the globe. Citi is constantly growing and progressing through our technology, with a laser focus on evolving the ways of doing things. As one of the world’s most global banks, we’re changing how the world does business. Shape your Career with Citi We’re currently looking for a high caliber professional to join our team as 25883567 Officer- ETL Automation tester -QA - C10 -Hybrid- PUNE based in Pune/Chennai, India. Being part of our team means that we’ll provide you with the resources to meet your unique needs, empower you to make healthy decisions and manage your financial well-being to help plan for your future. For instance: We provide programs and services for your physical and mental well-being including access to telehealth options, health advocates, confidential counseling and more. Coverage varies by country. We empower our employees to manage their financial well-being and help them plan for the future. We provide access to an array of learning and development resources to help broaden and deepen your skills and knowledge as your career progresses. The Testing Analyst is a developing professional role. Applies specialty area knowledge in monitoring, assessing, analyzing and/or evaluating processes and data. Identifies policy gaps and formulates policies. Interprets data and makes recommendations. Researches and interprets factual information. Identifies inconsistencies in data or results, defines business issues and formulates recommendations on policies, procedures or practices. Integrates established disciplinary knowledge within own specialty area with basic understanding of related industry practices. Good understanding of how the team interacts with others in accomplishing the objectives of the area. Develops working knowledge of industry practices and standards.
Limited but direct impact on the business through the quality of the tasks/services provided. Impact of the job holder is restricted to their own team. The candidate is expected to: Build Data Pipelines: Extract data from various sources (like databases and data lakes), clean and transform it, and load it into target systems. Testing and Validation: Develop automated tests to ensure the data pipelines are working correctly and the data is accurate. This is like quality control, making sure everything meets the bank's standards. Work with Hive, HDFS, and Oracle data sources to extract, transform, and load large-scale datasets. Leverage AWS services such as S3, Lambda, and Airflow for data ingestion, event-driven processing, and orchestration. Create reusable frameworks, libraries, and templates to accelerate automation and testing of ETL jobs. Participate in code reviews and CI/CD pipelines, and maintain best practices in Spark and cloud-native development. Ensure tooling can be run in CI/CD, providing real-time, on-demand test execution and shortening the feedback loop to fully support hands-free execution. Run regression, integration, and sanity testing and automated regression suites; report issues, provide solutions, and ensure timely completion. Own and drive automation in the Data and Analytics team to achieve 90% automation in the data/ETL space. Design and develop an integrated portal to consolidate utilities and cater to user needs. Support initiatives related to automation of Data & Analytics testing requirements for process and product rollout into production. Work with the technology team to design and implement appropriate automation scripts/plans for application testing, meeting required KPIs and automation effectiveness. Ensure new utilities are documented and transitioned to testers for execution, and support troubleshooting where required. Monitor and review code check-ins from peers and help maintain the project repository.
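The automated pipeline validation described above can be sketched as a small reconciliation check. This uses SQLite and hypothetical staging/warehouse table names purely for illustration; a real suite against Hive or Oracle would also compare schemas, null rates, and sampled rows.

```python
import sqlite3

def validate_load(conn, source_table: str, target_table: str) -> dict:
    """Compare row counts and a column checksum between source and target.

    A minimal reconciliation check of the kind an automated ETL test
    might run after a load completes.
    """
    cur = conn.cursor()
    result = {}
    for side, table in (("source", source_table), ("target", target_table)):
        cur.execute(f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}")
        count, checksum = cur.fetchone()
        result[side] = {"rows": count, "amount_sum": checksum}
    result["match"] = result["source"] == result["target"]
    return result

# Demo with an in-memory database and hypothetical table names.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_txn (id INTEGER, amount REAL);
    CREATE TABLE dw_txn  (id INTEGER, amount REAL);
    INSERT INTO stg_txn VALUES (1, 10.5), (2, 20.0);
    INSERT INTO dw_txn  VALUES (1, 10.5), (2, 20.0);
""")
report = validate_load(conn, "stg_txn", "dw_txn")
```

Wiring a check like this into a CI/CD job is what turns post-load validation into the on-demand, hands-free execution the role calls for.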
Ability to work independently as well as collaborate within groups on various projects assigned. Ability to work in a fast-paced, dynamic environment and manage multiple priorities effectively. Experience and understanding of the Wealth domain, specifically in private banking, lending services, and related tech applications. Supports and contributes to automated test data generation and sufficiency. The successful candidate would ideally have the following skills and exposure: 2 - 4 years of experience in automation testing across UI. Experience in automated ETL testing, including testing using SQL queries. Hands-on experience with Selenium BDD Cucumber using Java or Python. Extensive knowledge of developing and maintaining automation frameworks and AI/ML-related solutions. Experience automating BI reports, e.g., Tableau dashboard and view validation. Data analytics and BI reports in the financial services industry. Hands-on experience in Python for developing utilities for data analysis using Pandas, NumPy, etc. Exposure and some experience with AI-related solutions and ML which can help automate faster. Experience with mobile testing using Perfecto, and API testing with SoapUI or Postman/REST Assured, will be an added advantage. Detailed knowledge of data flows in relational database and big data systems. Strong knowledge of Oracle SQL and HiveQL and understanding of ETL/data testing. Experience with CI/CD tools like Jenkins. Proficiency in working with the Cloudera Hadoop ecosystem (HDFS, Hive, YARN). Hands-on experience with ETL automation and validation frameworks. Solid understanding of AWS services like S3, Lambda, EKS, and Airflow. Strong problem-solving and debugging skills. Excellent communication and collaboration abilities to lead and mentor a large techno-functional team across different geographical locations. Strong business acumen and presentation skills.
Able to work in an Agile environment and deliver results independently Education: Bachelor’s/University degree or equivalent experience ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Technology Quality ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 day ago

Apply

10.0 - 15.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About iSOCRATES Since 2015, iSOCRATES has advised on, built, and managed mission-critical Marketing, Advertising and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution™. iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists, who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training. About MADTECH.AI MADTECH.AI is your Marketing Decision Intelligence platform. Unify, transform, analyze, and visualize all your data in a single, cost-effective AI-powered hub. Gain speed to value by leaving data wrangling, model building, data visualization, and proactive problem solving to MADTECH.AI. Sharper insights, smarter decisions, faster. MADTECH.AI was spun out of well-established Inc. 5000 consultancy iSOCRATES®, which advises on, builds, manages, and owns mission-critical Marketing, Advertising and Data platforms, technologies and processes as the Global Leader in MADTECH Resource Planning and Execution™, serving marketers, agencies, publishers, and their data/tech suppliers. Job Description We are currently seeking an experienced Manager, Data Science, to lead our growing Data Science team. The role involves overseeing the development and implementation of advanced data science techniques to improve media campaigns and enhance our AI-powered solutions. The manager will collaborate with cross-functional teams, providing leadership in analyzing and defining audience, campaign, and media trading data.
Key Responsibilities Team Leadership & Management: Lead and mentor a team of data scientists, providing guidance in the design, development, and implementation of innovative data solutions. Foster a collaborative and high-performance team culture, ensuring the team is aligned with business goals and technical objectives. Advanced Analytics & Data Science Expertise: Drive the application of statistical, econometric, and Big Data methods to define business requirements, design analytics solutions, and optimize economic outcomes. Utilize advanced modeling techniques, including propensity modeling, Marketing Mix Modeling (MMM), Multi-Touch Attribution (MTA), and Bayesian statistics to enhance campaign effectiveness. Generative AI & NLP Leadership: Lead the implementation and development of Generative AI (GenAI), Large Language Models (LLMs), and Natural Language Processing (NLP) techniques for data modeling and predictive analysis. Ensure the integration of AI-driven technologies to improve data science capabilities and results. Data Architecture & Management: Architect and manage data systems, integrating data from diverse sources, ensuring the optimization of audience, pricing, and contextual data for ad-tech applications. Oversee the management and utilization of DSPs, SSPs, DMPs, and other critical systems in the ad-tech ecosystem. Cross-Functional Collaboration: Work closely with teams from Product, System Development, Yield, Operations, Finance, Sales, and Business Development to ensure seamless data quality and predictive outcomes across campaigns. Design and deliver actionable insights and reporting tools for both internal and external business partners. Predictive Modeling & Optimization: Lead the development of predictive models to optimize media campaigns, focusing on revenue, audience behavior, bid actions, and ad inventory optimization.
Analyze campaign performance and provide data-driven recommendations for optimization across multiple media channels, including websites, mobile apps, and social media. Data Collection & Quality Assurance: Oversee the collection, management, and quality assurance of data, ensuring high standards and efficient systems for in-depth analysis and reporting. Lead the development of tools and methodologies for complex data analysis, model development, and visualization to support business objectives. Qualifications & Skills Master’s or Ph.D. in Statistics, Engineering, Science, or Business, with a strong foundation in mathematics and statistics. 10 to 15 years of experience in data science, predictive analytics, and digital analytics, with at least 7 years of hands-on experience in modeling, analysis, and optimization within the media, advertising, or tech industry. At least 6 years of hands-on experience with Generative AI, Large Language Models, and Natural Language Processing techniques. Strong proficiency in data collection, machine learning, and deep learning techniques using tools such as Python, R, Pandas, scikit-learn, Hadoop, Spark, MySQL, SQL and AWS S3. Experience working with DSPs, SSPs, DMPs, and other programmatic systems in digital advertising. Expertise in statistical modeling, customer segmentation, persona building, and predictive analytics. Advanced understanding of programmatic media optimization, audience behavior, and pricing strategies. Strong problem-solving skills with the ability to adapt to evolving business needs and deliver solutions proactively. Experience in designing analytics dashboards, visualization tools, and reporting systems. Excellent communication and presentation skills, with the ability to explain complex technical concepts to non-technical stakeholders. Ability to manage multiple tasks and projects effectively, both independently and in collaboration with remote teams. 
An interest in working in a fast-paced, dynamic environment, focused on revenue and analytics in the digital media space. Relocation to Mysuru or Bengaluru required.

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Data Engineer Location: Hyderabad, India (Onsite) Full-time. Job Description: We are seeking an experienced Data Engineer with 5-8 years of professional experience to design, build, and optimize robust and scalable data pipelines for our SmartFM platform. The ideal candidate will be instrumental in ingesting, transforming, and managing vast amounts of operational data from various building devices, ensuring high data quality and availability for analytics and AI/ML applications. This role is critical in enabling our platform to generate actionable insights, alerts, and recommendations for optimizing facility operations. ROLES AND RESPONSIBILITIES • Design, develop, and maintain scalable and efficient data ingestion pipelines from diverse sources (e.g., IoT devices, sensors, existing systems) using technologies like IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Kafka. • Implement robust data transformation and processing logic to clean, enrich, and structure raw data into formats suitable for analysis and machine learning models. • Manage and optimize data storage solutions, primarily within MongoDB, ensuring efficient schema design, data indexing, and query performance for large datasets. • Collaborate closely with Data Scientists to understand their data needs, provide high-quality, reliable datasets, and assist in deploying data-driven solutions. • Ensure data quality, consistency, and integrity across all data pipelines and storage systems, implementing monitoring and alerting mechanisms for data anomalies. • Work with cross-functional teams (Software Engineers, Data Scientists, Product Managers) to integrate data solutions with the React frontend and Node.js backend applications. • Contribute to the continuous improvement of data architecture, tooling, and best practices, advocating for scalable and maintainable data solutions.
• Troubleshoot and resolve complex data-related issues, optimizing pipeline performance and ensuring data availability. • Stay updated with emerging data engineering technologies and trends, evaluating and recommending new tools and approaches to enhance our data capabilities. REQUIRED TECHNICAL SKILLS AND EXPERIENCE • 5-8 years of professional experience in Data Engineering or a related field. • Proven hands-on experience with data pipeline tools such as IBM StreamSets, Azure Data Factory, Apache Spark, Talend, Apache Flink, and Apache Kafka. • Strong expertise in database management, particularly with MongoDB, including schema design, data ingestion pipelines, and data aggregation. • Proficiency in at least one programming language commonly used in data engineering, such as Python or Java/Scala. • Experience with big data technologies and distributed processing frameworks (e.g., Apache Spark, Hadoop) is highly desirable. • Familiarity with cloud platforms (Azure, AWS, or GCP) and their data services. • Solid understanding of data warehousing concepts, ETL/ELT processes, and data modeling. • Experience with DevOps practices for data pipelines (CI/CD, monitoring, logging). • Knowledge of Node.js and React environments to facilitate seamless integration with existing applications. ADDITIONAL QUALIFICATIONS • Demonstrated expertise in written and verbal communication, adept at simplifying complex technical concepts for both technical and non-technical audiences. • Strong problem-solving and analytical skills with a meticulous approach to data quality. • Experienced in collaborating and communicating seamlessly with diverse technology roles, including development, support, and product management. • Highly motivated to acquire new skills, explore emerging technologies, and stay updated on the latest trends in data engineering and business needs. • Experience in the facility management domain or IoT data is a plus.
EDUCATION REQUIREMENTS / EXPERIENCE • Bachelor’s (BE / BTech) / Master’s degree (MS/MTech) in Computer Science, Information Systems, Mathematics, Statistics, or a related quantitative field.
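The ingest-and-clean responsibilities described above can be sketched in miniature. The sketch below is illustrative only (field names like `device_id` and `ts` are hypothetical, not from the posting): it validates raw device payloads, drops malformed records, and normalizes timestamps to UTC, the kind of transformation such a pipeline performs before data lands in a store like MongoDB.

```python
from datetime import datetime, timezone

def clean_reading(raw):
    """Validate and normalize one raw device reading; return None for bad records."""
    try:
        value = float(raw["value"])          # readings may arrive as strings
        ts = datetime.fromtimestamp(int(raw["ts"]), tz=timezone.utc)
    except (KeyError, TypeError, ValueError):
        return None                          # unparseable payloads are dropped
    if not raw.get("device_id"):
        return None                          # a reading must name its device
    return {
        "device_id": raw["device_id"].strip().upper(),
        "ts": ts.isoformat(),                # normalize to UTC ISO-8601
        "value": value,
    }

readings = [
    {"device_id": " hvac-01 ", "ts": 1700000000, "value": "21.5"},
    {"device_id": "", "ts": 1700000001, "value": "not-a-number"},  # dropped
]
cleaned = [r for r in (clean_reading(x) for x in readings) if r]
```

In a real pipeline this validation step would sit behind a Kafka consumer or Spark job rather than a plain list comprehension.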

Posted 1 day ago

Apply

14.0 years

0 Lacs

India

Remote

Who We Are
At Twilio, we’re shaping the future of communications, all from the comfort of our homes. We deliver innovative solutions to hundreds of thousands of businesses and empower millions of developers worldwide to craft personalized customer experiences. Our dedication to remote-first work and strong culture of connection and global inclusion means that no matter your location, you’re part of a vibrant team with diverse experiences making a global impact each day. As we continue to revolutionize how the world interacts, we’re acquiring new skills and experiences that make work feel truly rewarding. Your career at Twilio is in your hands.

See yourself at Twilio
Join the team as Twilio’s next Senior Engineering Manager on Twilio’s Traffic Intelligence team.

About The Job
In this position you will manage the team of machine learning engineers on the Growth & User Intelligence team and partner closely with Product & Engineering teams to execute the roadmap for Twilio’s AI/ML products and services. You will understand customers' needs, build ML and Data Science products that work at a global scale, and own end-to-end execution of large-scale ML solutions. As a senior manager, you will partner closely with technology and product leaders in the organization to enable the engineers to turn ideas into reality.

Responsibilities
In this role, you’ll:
Build and maintain scalable machine learning solutions for the Traffic Intelligence vertical.
Be a champion for your team, setting individuals up for success and putting others’ growth first.
Understand the architecture and processes required to build and operate always-available, complex, and scalable distributed systems in cloud environments.
Advocate agile processes, continuous integration, and test automation.
Be a strategic problem solver and thrive operating in broad scope, from conception through continuous operation of 24x7 services.
Exhibit strong communication skills: in person and on paper.
You can explain technical concepts to product managers, architects, other engineers, and support.

Qualifications
Twilio values diverse experiences from all kinds of industries, and we encourage everyone who meets the required qualifications to apply. If your career is just starting or hasn't followed a traditional path, don't let that stop you from considering Twilio. We are always looking for people who will bring something new to the table!

Required
14+ years of experience, including a 5-year proven track record of leading and managing software teams.
Experience managing multiple workstreams within the team.
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Technical experience with:
Applied ML models, with proficiency in Python
Modern data storage, messaging, and processing tools (Kafka, Apache Spark, Hadoop, Presto, DynamoDB, etc.)
Cloud technologies like AWS, GCP, etc.
ML frameworks like PyTorch, TensorFlow, or Keras
SaaS telemetry and observability tools such as Datadog, Grafana, etc.
Excellent problem solving, critical thinking, and communication skills.
Broad knowledge of development environments and tools used to implement and build code for deployment.
Strong familiarity with agile processes, continuous integration, and a strong belief in automation over toil.
As a pragmatist, you are able to distill complex and ambiguous situations into actionable plans for your team.
Owned and operated services end-to-end, from requirements gathering and design, to debugging and testing, to release management and operational monitoring.

Desired
Experience with Large Language Models.
Experience designing and implementing highly scalable and performant ML models.

Location
This role will be remote and based in India (Karnataka, Tamil Nadu, Telangana, Maharashtra & New Delhi).

Travel
We prioritize connection and opportunities to build relationships with our customers and each other.
For this role, you may be required to travel occasionally to participate in project or team in-person meetings. What We Offer Working at Twilio offers many benefits, including competitive pay, generous time off, ample parental and wellness leave, healthcare, a retirement savings program, and much more. Offerings vary by location. Twilio thinks big. Do you? We like to solve problems, take initiative, pitch in when needed, and are always up for trying new things. That's why we seek out colleagues who embody our values — something we call Twilio Magic. Additionally, we empower employees to build positive change in their communities by supporting their volunteering and donation efforts. So, if you're ready to unleash your full potential, do your best work, and be the best version of yourself, apply now! If this role isn't what you're looking for, please consider other open positions. Twilio is proud to be an equal opportunity employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, reproductive health decisions, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, genetic information, political views or activity, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law. Qualified applicants with arrest or conviction records will be considered for employment in accordance with the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. Additionally, Twilio participates in the E-Verify program in certain locations, as required by law.

Posted 1 day ago

Apply

9.0 years

0 Lacs

India

On-site

Responsibilities
Implement and manage data governance/data quality frameworks and processes. Conduct data quality and data product testing to ensure accuracy and reliability. Explore and analyze data, identify data quality rules, and document data dictionaries and business glossaries. Collaborate with stakeholders to understand data requirements and deliver data solutions. Write and optimize complex SQL/HQL queries for data extraction and analysis. Develop shell scripts and analytics dashboards. Work with Hadoop, Hive, HQL, EDW, and Master Data to manage and analyze large datasets. Communicate effectively with technical and business stakeholders to facilitate data-related projects. Able to work with minimal guidance, plan deliverables, and meet timelines.

Qualifications
Experience: Overall 9+ years of IT experience; 7+ years in data governance and data quality management, implementing frameworks; should be strong in data governance consulting. Data Quality and Testing: Proven experience in data quality and data product testing. Data Governance Implementation: Strong background in implementing data governance frameworks. Data Projects: Excellent understanding of and experience with data-focused projects. SQL Proficiency: Ability to write complex SQL queries. Technical Knowledge: Good understanding of Hadoop, Hive, HQL, EDW, and Master Data; ability to develop shell scripts; experience using data governance tools. Communication: Excellent communication skills.
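Data quality rules like those described above are often expressed as SQL checks that count violating rows. A minimal sketch using SQLite; the `customers` table, rule names, and thresholds are invented for illustration, not taken from the posting:

```python
import sqlite3

# Hypothetical table with deliberately flawed rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, "a@x.com", 34), (2, None, 41), (3, "c@x.com", -5)],
)

# Each rule maps a name to SQL returning the count of violating rows;
# a count of zero means the rule passes.
rules = {
    "email_not_null": "SELECT COUNT(*) FROM customers WHERE email IS NULL",
    "age_non_negative": "SELECT COUNT(*) FROM customers WHERE age < 0",
}

violations = {name: conn.execute(sql).fetchone()[0] for name, sql in rules.items()}
```

The same pattern scales up: on Hadoop/Hive the rule bodies would be HQL, and the violation counts would feed a dashboard or alerting job.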

Posted 1 day ago

Apply

1.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description
About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is part of the charter. We are interested in experienced engineers with expertise in and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
US passport holders only; this is required by the position to access US Gov regions.
Expertise in coding in Java and Python, with emphasis on tuning/optimization.
Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
Experience with open-source software in the Big Data ecosystem.
Experience at an organization with an operational/dev-ops culture.
Solid understanding of networking, storage, and security components related to cloud infrastructure.
Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.
Preferred Qualifications:
Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
In-depth understanding of Java and JVM mechanics.
Good problem-solving skills and the ability to work in a fast-paced, agile environment.
Responsibilities
Key Responsibilities:
Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings.
Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
Become an active member of the Apache open source community when working on open source components.
Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
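The posting above mentions building telemetry systems that can act on the service's runtime data. One common primitive is flagging metric samples that deviate sharply from a trailing window. The sketch below is an illustrative assumption, not Oracle's implementation: the CPU samples, window size, and z-score threshold are all invented for the example.

```python
import statistics

def detect_anomalies(samples, window=5, z_threshold=3.0):
    """Flag samples that deviate strongly from the trailing window's mean."""
    alerts = []
    for i in range(window, len(samples)):
        win = samples[i - window:i]
        mean = statistics.fmean(win)
        stdev = statistics.pstdev(win) or 1e-9   # avoid division by zero
        z = (samples[i] - mean) / stdev
        if abs(z) > z_threshold:
            alerts.append((i, samples[i]))       # (sample index, value)
    return alerts

# Hypothetical per-minute CPU utilisation with one obvious spike.
cpu = [40, 41, 39, 40, 42, 41, 40, 95, 41, 40]
alerts = detect_anomalies(cpu)
```

A production service would compute this over streaming telemetry and route alerts to an on-call system rather than returning a list.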

Posted 1 day ago

Apply

1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is part of the charter. We are interested in experienced engineers with expertise in and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
US passport holders only; this is required by the position to access US Gov regions.
Expertise in coding in Java and Python, with emphasis on tuning/optimization.
Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
Experience with open-source software in the Big Data ecosystem.
Experience at an organization with an operational/dev-ops culture.
Solid understanding of networking, storage, and security components related to cloud infrastructure.
Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.
Preferred Qualifications:
Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
In-depth understanding of Java and JVM mechanics.
Good problem-solving skills and the ability to work in a fast-paced, agile environment.
Responsibilities
Key Responsibilities:
Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings.
Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
Become an active member of the Apache open source community when working on open source components.
Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Senior Technical Trainer – Cloud, Data & AI/ML
Location: Pune
Experience Required: 10+ Years

About the Role: We’re looking for an experienced and passionate technical trainer who can help elevate our teams’ capabilities in cloud technologies, data engineering, and AI/ML. This role is ideal for someone who enjoys blending hands-on tech skills with a strong ability to simplify, teach, and mentor. As we grow and scale at Meta For Data, building internal expertise is a key part of our strategy—and you’ll be central to that effort.

What You’ll Be Doing:
Lead and deliver in-depth training sessions (both live and virtual) across areas like cloud architecture, data engineering, and machine learning.
Build structured training content including presentations, labs, exercises, and assessments.
Develop learning journeys tailored to different experience levels and roles—ranging from new hires to experienced engineers.
Continuously update training content to reflect changes in tools, platforms, and best practices.
Collaborate with engineering, HR, and L&D teams to roll out training schedules, track attendance, and gather feedback.
Support ongoing learning post-training through mentoring, labs, and knowledge checks.

What We’re Looking For:
Around 10 years of experience in a mix of software development, cloud/data/ML engineering, and technical training.
Deep familiarity with at least one cloud platform (AWS, Azure, or GCP); AWS or Azure is preferred.
Strong command of data platforms, ETL pipelines, Big Data tools (like Spark or Hadoop), and warehouse systems.
Solid understanding of the AI/ML lifecycle—model building, tuning, deployment—with hands-on experience in Python-based libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
Confident communicator who’s comfortable speaking to groups and explaining complex concepts simply.
Bonus if you hold any relevant certifications like AWS Solutions Architect, Google Data Engineer, or Microsoft AI Engineer.
Nice to Have: Experience creating online training modules or managing LMS platforms. Prior experience training diverse audiences: tech teams, analysts, product managers, etc. Familiarity with MLOps and modern deployment practices for AI models. Why Join Us? You’ll have the freedom to shape how technical learning happens at Meta For Data. You’ll be part of a team that values innovation, autonomy, and real impact. Flexible working options and a culture that supports growth - for our teams and our trainers.

Posted 1 day ago

Apply

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description
About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not just tight integration with OCI’s native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching & upgrades and maintaining high availability of the service in the face of random failures & planned downtimes in the underlying infrastructure (e.g., patching the Linux kernel to address a security vulnerability). Developing systems for monitoring and gathering telemetry on the service’s runtime characteristics, and being able to act on that telemetry data, is part of the charter. We are interested in experienced engineers with expertise in and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact.
Minimum Qualifications:
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field.
Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies.
US passport holders only; this is required by the position to access US Gov regions.
Expertise in coding in Java and Python, with emphasis on tuning/optimization.
Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments.
Experience with open-source software in the Big Data ecosystem.
Experience at an organization with an operational/dev-ops culture.
Solid understanding of networking, storage, and security components related to cloud infrastructure.
Solid foundation in data structures, algorithms, and software design, with strong analytical and debugging skills.
Preferred Qualifications:
Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies.
Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP.
In-depth understanding of Java and JVM mechanics.
Good problem-solving skills and the ability to work in a fast-paced, agile environment.
Responsibilities
Key Responsibilities:
Participate in the development and maintenance of a scalable and secure Hadoop-based data lake service.
Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings.
Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements.
Become an active member of the Apache open source community when working on open source components.
Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud.
Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 day ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Responsibility
Data Handling and Processing:
• Proficient in SQL Server and query optimization.
• Expertise in application data design and process management.
• Extensive knowledge of data modelling.
• Hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Microsoft Fabric.
• Experience working with Azure Databricks.
• Expertise in data warehouse development, including experience with SSIS (SQL Server Integration Services) and SSAS (SQL Server Analysis Services).
• Proficiency in ETL processes (data extraction, transformation, and loading), including data cleaning and normalization.
• Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka) for large-scale data processing.
• Understanding of data governance, compliance, and security measures within Azure environments.
Data Analysis and Visualization:
• Experience in data analysis, statistical modelling, and machine learning techniques.
• Proficiency in analytical tools like Python and R, and libraries such as Pandas and NumPy, for data analysis and modelling.
• Strong expertise in Power BI for data visualization, data modelling, and DAX queries, with knowledge of best practices.
• Experience in implementing Row-Level Security in Power BI.
• Ability to work with moderately complex data models and quickly understand application data design and processes.
• Familiar with industry best practices for Power BI and experienced in performance optimization of existing implementations.
• Understanding of machine learning algorithms, including supervised, unsupervised, and deep learning techniques.
Non-Technical Skills:
• Ability to lead a team of 4-5 developers and take ownership of deliverables.
• Demonstrates a commitment to continuous learning, particularly with new technologies.
• Strong communication skills in English, both written and verbal.
• Able to effectively interact with customers during project implementation.
• Capable of explaining complex technical concepts to non-technical stakeholders.
Data Management: SQL, Azure Synapse Analytics, Azure Analysis Services and Data Marts, Microsoft Fabric
ETL Tools: Azure Data Factory, Azure Databricks, Python, SSIS
Data Visualization: Power BI, DAX
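In Power BI, Row-Level Security (mentioned above) is defined as a DAX filter attached to a role, so each user sees only the rows they are permitted to see. As a language-agnostic illustration of the same idea, here it is in plain Python; the sales table and the user-to-region mapping are hypothetical:

```python
# Each row carries the attribute (region) that security filters on.
sales = [
    {"region": "East", "amount": 100},
    {"region": "West", "amount": 250},
    {"region": "East", "amount": 75},
]

# Role assignment: which slice of the data each user may see.
user_regions = {"alice": "East", "bob": "West"}

def rows_for(user):
    """Return only the rows the given user's role permits."""
    region = user_regions[user]
    return [row for row in sales if row["region"] == region]
```

In Power BI the equivalent would be a role whose DAX filter compares the region column against the signed-in user's identity; the filtering principle is identical.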

Posted 1 day ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Senior Software Engineer – Backend (Python) 📍 Location: Hyderabad (Hybrid) 🕒 Experience: 5 – 12 years About the Role: We are looking for a Senior Software Engineer – Backend with strong expertise in Python and modern big data technologies. This role involves building scalable backend solutions for a leading healthcare product-based company. Key Skills: Programming: Python, Spark-Scala, PySpark (PySpark API) Big Data: Hadoop, Databricks Data Engineering: SQL, Kafka Strong problem-solving skills and experience in backend architecture Why Join? Hybrid work model in Hyderabad Opportunity to work on innovative healthcare products Collaborative environment with modern tech stack Keywords for Search: Python, PySpark, Spark, Spark-Scala, Hadoop, Databricks, Kafka, SQL, Backend Development, Big Data Engineering, Healthcare Technology
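The keyed aggregations PySpark performs at cluster scale (for example via `reduceByKey` or `groupBy`) have a simple single-machine analogue. A plain-Python sketch with hypothetical event data, showing the map/reduce shape such backend pipelines follow:

```python
from collections import defaultdict

# Hypothetical (key, value) events, already keyed as a Spark RDD would be
# after the "map" stage.
events = [("device-a", 3), ("device-b", 5), ("device-a", 4)]

# "Reduce" stage: combine values per key. Spark distributes this step
# across partitions; here it is a single loop.
totals = defaultdict(int)
for key, value in events:
    totals[key] += value
```

The distributed version differs mainly in that partial sums are computed per partition and merged across the cluster, not in the per-key logic itself.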

Posted 1 day ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Notice period: 30 days to immediate
Role description: GCP, Python, Apache Beam
3 to 8 years of overall IT experience, including hands-on experience in Big Data technologies. Mandatory: hands-on experience in Python and PySpark. Python as a language is practically usable for anything; we are looking for application development, Extract-Transform-Load, and data lake curation experience using Python. Built PySpark applications using Spark DataFrames in Python, using Jupyter notebook and the PyCharm IDE. Worked on optimizing Spark jobs that process huge volumes of data. Hands-on experience with version control tools like Git. Worked on Amazon's analytics services like Amazon EMR, Amazon Athena, and AWS Glue. Worked on Amazon's compute services like AWS Lambda and Amazon EC2, Amazon's storage service S3, and a few other services like SNS. Experience/knowledge of bash shell scripting will be a plus. Has built ETL processes to take data, copy it, structurally transform it, etc., involving a wide variety of formats like CSV, TSV, XML, and JSON. Experience working with fixed-width, delimited, and multi-record file formats. Good to have: knowledge of data warehousing concepts (dimensions, facts, schemas such as snowflake and star). Has worked with columnar storage formats (Parquet, Avro, ORC, etc.) and is well versed in compression techniques (Snappy, Gzip). Good to have: knowledge of at least one AWS database (Aurora, RDS, Redshift, ElastiCache, DynamoDB).
Mandatory Skills: GCP, Apache Spark, Python, SparkSQL, Big Data Hadoop Ecosystem
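The format-conversion and compression skills listed above can be shown end to end with only the standard library. A small sketch (the sample CSV is invented) that parses CSV into typed records, serializes them to JSON, and compresses with gzip; in a real data lake, Snappy and columnar formats like Parquet would be used via third-party libraries:

```python
import csv
import gzip
import io
import json

# Hypothetical extract: CSV text as it might arrive from an upstream system.
csv_text = "id,name,score\n1,ada,9.5\n2,lin,8.0\n"

# Transform: parse and type-cast each row into a record.
records = [
    {"id": int(r["id"]), "name": r["name"], "score": float(r["score"])}
    for r in csv.DictReader(io.StringIO(csv_text))
]

# Load: serialize to JSON and compress before writing to storage.
payload = json.dumps(records).encode("utf-8")
compressed = gzip.compress(payload)

# Round-trip check: decompress and deserialize.
restored = json.loads(gzip.decompress(compressed))
```

The same three stages (parse, type, serialize/compress) are what a PySpark or Glue job performs, just distributed over many files and partitions.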

Posted 1 day ago

Apply

12.0 years

5 - 10 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas—Oncology, Inflammation, General Medicine, and Rare Disease—we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Senior Manager, Software Development Engineering

What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, and other engineers to create high-quality, scalable software solutions, as well as automating operations, monitoring system health, and responding to incidents to minimize downtime.

Roles & Responsibilities:
Provide technical leadership to enhance the culture of innovation, automation, and solving difficult scientific and business challenges. Technical leadership includes providing vision and direction to develop scalable, reliable solutions.
Provide leadership to select right-sized and appropriate tools and architectures based on requirements, data source format, and current technologies. Develop, refactor, research, and improve Weave cloud platform capabilities. Understand business drivers and technical needs so our cloud services seamlessly, automatically, and securely provide the best service. Develop data flow pipelines to extract, transform, and load data from various data sources in various forms, including custom ETL pipelines that enable model and product development. Build strong partnerships with stakeholders. Build data products and service processes which perform data transformation, metadata extraction, workload management, and error processing management to ensure high-quality data. Provide clear documentation for delivered solutions and processes. Collaborate with business partners to understand user stories and ensure the technical solution/build can deliver to those needs. Work with multi-functional teams to design and document effective and efficient solutions. Develop change management strategies and assist in their implementation. Mentor junior data engineers on standard methodologies in the industry and in the Amgen data landscape. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications and Experience: Doctorate degree / Master's degree / Bachelor's degree and 12 to 17 years of Computer Science, IT, or related field experience. Preferred Skills: Must-Have Skills: Superb communication and interpersonal skills, with the ability to work cross-functionally with multi-functional GTM, product, and engineering teams.
Minimum of 10+ years of overall Software Engineer or Cloud Architect experience. Minimum 3+ years in an architecture role using public cloud solutions such as AWS. Experience with the AWS technology stack. Good-to-Have Skills: Familiarity with big data technologies, AI platforms, and cloud-based data solutions. Ability to work effectively across matrixed organizations and lead collaboration between data and AI teams. Passion for technology and customer success, particularly in driving innovative AI and data solutions. Experience working with teams of data scientists, software engineers, and business experts to drive insights. Experience with AWS services such as EC2, S3, Redshift/Spectrum, Glue, Athena, RDS, Lambda, and API Gateway. Experience with big data technologies (Hadoop, Hive, HBase, Pig, Spark, etc.). Good understanding of relevant data standards and industry trends. Ability to understand new business requirements and prioritize them for delivery. Experience working in the biopharma/life sciences industry. Proficiency in one of the coding languages (Python, Java, Scala). Hands-on experience writing SQL using any RDBMS (Redshift, Postgres, MySQL, Teradata, Oracle, etc.). Experience with schema design and dimensional data modeling. Experience with software DevOps CI/CD tools such as Git, Jenkins, Linux, and shell scripting. Hands-on experience using Databricks/Jupyter or a similar notebook environment. Experience working with GxP systems. Experience working in an agile environment (i.e., user stories, iterative development, etc.). Experience working with test-driven development and software test automation. Experience working in a product environment. Good overall understanding of business, manufacturing, and laboratory systems common in the pharmaceutical industry, as well as the integration of these systems through applicable standards. Soft Skills: Excellent analytical and troubleshooting skills.
Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to handle multiple priorities successfully. Team-oriented, with a focus on achieving team goals What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Acuity Knowledge Partners (Acuity) is a leading provider of bespoke research, analytics and technology solutions to the financial services sector, including asset managers, corporate and investment banks, private equity and venture capital firms, hedge funds and consulting firms. Its global network of over 6,000 analysts and industry experts, combined with proprietary technology, supports more than 600 financial institutions and consulting companies to operate more efficiently and unlock their human capital, driving revenue higher and transforming operations. Acuity is headquartered in London and operates from 10 locations worldwide. The company fosters a diverse, equitable and inclusive work environment, nurturing talent, regardless of race, gender, ethnicity or sexual orientation. Acuity was established as a separate business from Moody’s Corporation in 2019, following its acquisition by Equistone Partners Europe (Equistone). In January 2023, funds advised by global private equity firm Permira acquired a majority stake in the business from Equistone, which remains invested as a minority shareholder. For more information, visit www.acuitykp.com Position Title- Associate Director (Senior Architect – Data) Department-IT Location- Gurgaon/ Bangalore Job Summary The Enterprise Data Architect will enhance the company's strategic use of data by designing, developing, and implementing data models for enterprise applications and systems at conceptual, logical, business area, and application layers. This role advocates data modeling methodologies and best practices. We seek a skilled Data Architect with deep knowledge of data architecture principles, extensive data modeling experience, and the ability to create scalable data solutions. Responsibilities include developing and maintaining enterprise data architecture, ensuring data integrity, interoperability, security, and availability, with a focus on ongoing digital transformation projects. 
Key Responsibilities Strategy & Planning Develop and deliver long-term strategic goals for data architecture vision and standards in conjunction with data users, department managers, clients, and other key stakeholders. Create short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Establish processes for governing the identification, collection, and use of corporate metadata; take steps to assure metadata accuracy and validity. Establish methods and procedures for tracking data quality, completeness, redundancy, and improvement. Conduct data capacity planning, life cycle, duration, usage requirements, feasibility studies, and other tasks. Create strategies and plans for data security, backup, disaster recovery, business continuity, and archiving. Ensure that data strategies and architectures are aligned with regulatory compliance. Develop a comprehensive data strategy in collaboration with different stakeholders that aligns with the transformational projects’ goals. Ensure effective data management throughout the project lifecycle. Acquisition & Deployment Ensure the success of enterprise-level application rollouts (e.g. ERP, CRM, HCM, FP&A, etc.). Liaise with vendors and service providers to select the products or services that best meet company goals. Operational Management Assess and determine governance, stewardship, and frameworks for managing data across the organization. Develop and promote data management methodologies and standards.
Document information products from business processes and create data entities. Create entity relationship diagrams to show the digital thread across the value streams and enterprise. Create data normalization across all systems and databases to ensure there is a common definition of data entities across the enterprise. Document enterprise reporting needs and develop the data strategy to enable a single source of truth for all reporting data. Address the regulatory compliance requirements of each country and ensure our data is secure and compliant. Select and implement the appropriate tools, software, applications, and systems to support data technology goals. Oversee the mapping of data sources, data movement, interfaces, and analytics, with the goal of ensuring data quality. Collaborate with project managers and business unit leaders for all projects involving enterprise data. Address data-related problems regarding systems integration, compatibility, and multiple-platform integration. Act as a leader and advocate of data management, including coaching, training, and career development to staff. Develop and implement key components as needed to create testing criteria to guarantee the fidelity and performance of data architecture. Document the data architecture and environment to maintain a current and accurate view of the larger data picture. Identify and develop opportunities for data reuse, migration, or retirement. Data Architecture Design: Develop and maintain the enterprise data architecture, including data models, databases, data warehouses, and data lakes. Design and implement scalable, high-performance data solutions that meet business requirements. Data Governance: Establish and enforce data governance policies and procedures as agreed with stakeholders. Maintain data integrity, quality, and security within Finance, HR, and other such enterprise systems.
Data Migration: Oversee the data migration process from legacy systems to the new systems being put in place. Define and manage data mappings, cleansing, transformation, and validation to ensure accuracy and completeness. Master Data Management: Devise processes to manage master data (e.g., customer, vendor, product information) to ensure consistency and accuracy across enterprise systems and business processes. Provide data management (create, update, and delimit) methods to ensure master data is governed. Stakeholder Collaboration: Collaborate with various stakeholders, including business users, other system vendors, and stakeholders to understand data requirements. Ensure the enterprise system meets the organization's data needs. Training and Support: Provide training and support to end-users on data entry, retrieval, and reporting within the candidate enterprise systems. Promote user adoption and proper use of data. Data Quality Assurance: Implement data quality assurance measures to identify and correct data issues. Ensure the Oracle Fusion and other enterprise systems contain reliable and up-to-date information. Reporting and Analytics: Facilitate the development of reporting and analytics capabilities within the Oracle Fusion and other systems. Enable data-driven decision-making through robust data analysis. Continuous Improvement: Continuously monitor and improve data processes and the Oracle Fusion and other systems' data capabilities. Leverage new technologies for enhanced data management to support evolving business needs. Technology and Tools: Oracle Fusion Cloud. Data modeling tools (e.g., ER/Studio, ERwin). ETL tools (e.g., Informatica, Talend, Azure Data Factory). Data Pipelines: Understanding of data pipeline tools like Apache Airflow and AWS Glue.
Database management systems (e.g., Oracle Database, MySQL, SQL Server, PostgreSQL, MongoDB, Cassandra, Couchbase, Redis, Hadoop, Apache Spark, Amazon RDS, Google BigQuery, Microsoft Azure SQL Database, Neo4j, OrientDB, Memcached). Data governance tools (e.g., Collibra, Informatica Axon, Oracle EDM, Oracle MDM). Reporting and analytics tools (e.g., Oracle Analytics Cloud, Power BI, Tableau, Oracle BIP). Hyperscalers / cloud platforms (e.g., AWS, Azure). Big data technologies such as Hadoop, HDFS, MapReduce, and Spark. Cloud platforms such as Amazon Web Services (including RDS, Redshift, and S3), Microsoft Azure services like Azure SQL Database and Cosmos DB, and experience in Google Cloud Platform services such as BigQuery and Cloud Storage. Programming languages (e.g., Java, J2EE, EJB, .NET, WebSphere). SQL: Strong SQL skills for querying and managing databases. Python: Proficiency in Python for data manipulation and analysis. Java: Knowledge of Java for building data-driven applications. Data Security and Protocols: Understanding of data security protocols and compliance standards. Key Competencies Qualifications: Education: Bachelor’s degree in Computer Science, Information Technology, or a related field; Master’s degree preferred. Experience: 10+ years overall and at least 7 years of experience in data architecture, data modeling, and database design. Proven experience with data warehousing, data lakes, and big data technologies. Expertise in SQL and experience with NoSQL databases. Experience with cloud platforms (e.g., AWS, Azure) and related data services. Experience with Oracle Fusion or similar ERP systems is highly desirable. Skills: Strong understanding of data governance and data security best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills. Ability to work effectively in a collaborative team environment. Leadership experience with a track record of mentoring and developing team members.
Excellent documentation and presentation skills. Good knowledge of applicable data privacy practices and laws. Certifications: Relevant certifications (e.g., Certified Data Management Professional, AWS Certified Big Data – Specialty) are a plus. Behavioral: A self-starter, an excellent planner and executor, and, above all, a good team player. Excellent communication and interpersonal skills are a must. Must possess organizational skills, including multi-tasking capability, priority setting, and meeting deadlines. Ability to build collaborative relationships and effectively leverage networks to mobilize resources. Initiative to learn the business domain is highly desirable. Likes a dynamic and constantly evolving environment and requirements.
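The SQL-plus-Python pairing listed under Technology and Tools above can be exercised end to end with Python's built-in sqlite3 module; the `orders` table, its columns, and the sample rows below are made up purely for illustration:

```python
import sqlite3

# In-memory database: create a table, insert rows, and query with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (region, total) VALUES (?, ?)",
    [("EMEA", 120.0), ("APAC", 75.5), ("EMEA", 30.0)],
)

# Aggregate query: total sales per region, highest first.
rows = conn.execute(
    "SELECT region, SUM(total) FROM orders GROUP BY region ORDER BY SUM(total) DESC"
).fetchall()
print(rows)  # [('EMEA', 150.0), ('APAC', 75.5)]
conn.close()
```

The same pattern (parameterized inserts, then an aggregating SELECT) carries over directly to Oracle, PostgreSQL, or MySQL through their respective Python drivers.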

Posted 1 day ago

Apply

5.0 years

5 - 10 Lacs

Hyderābād

On-site

DESCRIPTION The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you will use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM).
Here is a glimpse of the problems that this team deals with on a regular basis: using live package and truck signals to adjust truck capacities in real time; HOTW models for Last Mile Channel Allocation; using LLMs to automate analytical processes and insight generation; ops research to optimize middle-mile truck routes; working with global partner science teams to build Reinforcement Learning-based pricing models and estimate Shipments Per Route for $MM savings; Deep Learning models to synthesize attributes of addresses; and abuse detection models to reduce network losses. Key job responsibilities 1. Use machine learning and analytical techniques to create scalable solutions for business problems. Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key processes. 2. Design, develop, evaluate, and deploy innovative and highly scalable ML/OR models. 3. Work closely with other science and engineering teams to drive real-time model implementations. 4. Work closely with Ops/Product partners to identify problems and propose machine learning solutions. 5. Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model maintenance. 6. Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production. 7. Lead projects and mentor other scientists and engineers in the use of ML techniques. BASIC QUALIFICATIONS 5+ years of data scientist experience. Experience with data scripting languages (e.g. SQL, Python, R) or statistical/mathematical software (e.g. R, SAS, or Matlab). Experience with statistical models, e.g.
multinomial logistic regression. Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive). Experience working collaboratively with data engineers and business intelligence engineers. Demonstrated expertise in a wide range of ML techniques. PREFERRED QUALIFICATIONS Experience as a leader and mentor on a data science team. Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science. Expertise in Reinforcement Learning and Gen AI is preferred. Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
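The multinomial logistic regression named in the basic qualifications reduces, at prediction time, to a softmax over per-class linear scores. A minimal standard-library sketch (the weights and biases below are arbitrary illustrative values, not a trained model):

```python
import math

def softmax(scores):
    """Convert raw class scores into probabilities that sum to 1."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_proba(x, weights, biases):
    """Multinomial logistic regression: softmax over linear scores w.x + b."""
    scores = [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
              for w, b in zip(weights, biases)]
    return softmax(scores)

# Illustrative 2-feature, 3-class model.
weights = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.1]]
biases = [0.0, 0.1, -0.1]
probs = predict_proba([1.0, 2.0], weights, biases)
```

In practice the weights come from a fitted library model (e.g. scikit-learn or Spark MLlib), but the prediction step is exactly this softmax over linear scores.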

Posted 1 day ago

Apply