5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: Senior Data Developer I
Location: Gurugram, India
Employment Type: Full-Time
Experience Level: Mid to Senior-Level
Department: Data & Analytics / IT

Job Summary
We are seeking an experienced Data Developer with expertise in Microsoft Fabric, Azure Synapse Analytics, and Databricks, and strong SQL development skills. The ideal candidate will work on end-to-end data solutions supporting analytics initiatives across clinical, regulatory, and commercial domains in the Life Sciences industry. Familiarity with Azure DevOps and relevant certifications such as DP-700 and the Databricks Data Engineer Associate/Professional are preferred. Power BI knowledge is highly preferable to support integrated analytics and reporting.

Key Responsibilities
- Design, develop, and maintain scalable and secure data pipelines using Microsoft Fabric, Azure Synapse Analytics, and Azure Databricks to support critical business processes.
- Develop curated datasets for clinical, regulatory, and commercial analytics using SQL and PySpark (a PySpark sketch follows this posting).
- Create and support dashboards and reports using Power BI (highly preferred).
- Collaborate with cross-functional stakeholders to understand data needs and translate them into technical solutions.
- Work closely with ERP teams such as Salesforce.com and SAP S/4HANA to integrate and transform business-critical data into analytics-ready formats.
- Partner with Data Scientists to enable advanced analytics and machine learning initiatives by providing clean, reliable, and well-structured data.
- Ensure data quality, lineage, and documentation in accordance with GxP, 21 CFR Part 11, and industry best practices.
- Use Azure DevOps to manage code repositories, track tasks, and support agile delivery processes.
- Monitor, troubleshoot, and optimize data workflows for reliability and performance.
- Contribute to the design of scalable, compliant data models and architecture.

Required Qualifications
- Bachelor's or Master's degree in Computer Science.
- 5+ years of experience in data development or data engineering roles.
- Hands-on experience with:
  - Microsoft Fabric (Lakehouse, Pipelines, Dataflows)
  - Azure Synapse Analytics (Dedicated/Serverless SQL Pools, Pipelines)
  - Azure Data Factory and Apache Spark
  - Azure Databricks (Notebooks, Delta Lake, Unity Catalog)
  - SQL (complex queries, optimization, transformation logic)
- Familiarity with Azure DevOps (Repos, Pipelines, Boards).
- Understanding of data governance, security, and compliance in the Life Sciences domain.

Certifications (Preferred)
- Microsoft Certified: DP-700 Fabric Analytics Engineer Associate
- Databricks Certified Data Engineer Associate or Professional

Preferred Skills
- Strong knowledge of Power BI (highly preferred)
- Familiarity with HIPAA, GxP, and 21 CFR Part 11 compliance
- Experience working with ERP data from Salesforce.com and SAP S/4HANA
- Exposure to clinical trial, regulatory submission, or quality management data
- Good understanding of AI and ML concepts
- Experience working with APIs
- Excellent communication skills and the ability to collaborate across global teams

Location: Gurugram
Mode: Hybrid
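For context on the curated-dataset work this role describes, here is a minimal PySpark sketch of a Delta Lake curation step. It assumes a Databricks/Fabric-style Spark environment with Delta Lake enabled; the table names, columns, and de-duplication key are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession, functions as F

# Assumes a Databricks/Fabric-style environment with Delta Lake enabled.
# Table names, columns, and the de-duplication key are hypothetical.
spark = SparkSession.builder.appName("curated-clinical-events").getOrCreate()

raw = spark.read.table("raw.clinical_events")

curated = (
    raw.dropDuplicates(["event_id"])                    # de-duplicate on a business key
       .filter(F.col("event_ts").isNotNull())           # drop incomplete records
       .withColumn("event_date", F.to_date("event_ts")) # partition-friendly date column
)

# Persist as a partitioned Delta table for downstream Power BI / ML consumers.
(curated.write.format("delta")
        .mode("overwrite")
        .partitionBy("event_date")
        .saveAsTable("curated.clinical_events"))
```

Partitioning by a derived date column is a common compromise here: it keeps file counts manageable while letting report queries prune by day.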
Posted 2 days ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
🚀 We're Hiring: Big Data Engineer (4–8 Years Experience)
📍 Location: Kochi | 🏢 Mode: On-site | 💼 Employment Type: Full-time

Are you passionate about building scalable big data solutions? Do you thrive in a high-performance environment where innovation meets impact? We're looking for an experienced Big Data Engineer to join our team and help drive our data-driven transformation. You'll design and implement robust data pipelines, optimize distributed systems, and contribute to cutting-edge analytics and ML use cases.

🔧 Key Responsibilities
- Design and develop scalable big data processing pipelines (a streaming sketch follows this posting).
- Implement data ingestion, transformation, and validation.
- Collaborate across teams to deliver data solutions for analytics & ML.
- Optimize systems for performance, reliability, and scalability.
- Monitor and troubleshoot performance bottlenecks.
- Document workflows, specs, and technical decisions.

🎓 Required Qualifications
- Bachelor's in Computer Science, Engineering, or related field (Master's preferred).
- 3+ years of experience in Big Data Engineering.
- Strong in Python, Java, or Scala.
- Hands-on with Apache Spark, Hadoop, Kafka, or Flink.
- Solid knowledge of SQL and relational databases (MySQL, PostgreSQL).
- Experience with ETL, data modeling, and data warehousing.
- Exposure to distributed computing and cloud platforms (AWS/GCP/Azure).
- Familiar with Docker, Kubernetes, and DevOps practices.

⚙️ Tools & Technologies
- IDEs: IntelliJ, Eclipse
- Build: Maven, Gradle
- Testing: JUnit, TestNG, Mockito
- Monitoring: Prometheus, Grafana, ELK
- APIs: Swagger, OpenAPI
- Messaging: Kafka
- Databases: MySQL, PostgreSQL, MongoDB, Redis
- ORM: Hibernate, Spring Data

📩 Ready to build the future with us? Apply now or tag someone who fits the role! If interested, share your updated resume to vishnu@narrowlabs.in

#BigData #DataEngineer #ApacheSpark #Kafka #Hadoop #ETL #HiringNow #KochiJobs #OnsiteOpportunity #DataEngineeringJobs #TechJobsIndia #WeAreHiring #infopark #infoparkKochi #BigDataEngineer #Kochi
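As an illustration of the Spark-plus-Kafka pipelines this posting describes, below is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and lands them as Parquet. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic name, schema, and paths are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Requires the spark-sql-kafka connector on the classpath; broker, topic,
# schema, and paths below are hypothetical.
spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "localhost:9092")
         .option("subscribe", "events")
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

# Land parsed events as Parquet; the checkpoint makes the stream restartable.
query = (events.writeStream.format("parquet")
               .option("path", "/data/events")
               .option("checkpointLocation", "/chk/events")
               .start())
query.awaitTermination()
```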
Posted 2 days ago
3.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Position: QA Testing
Location: Indore
Experience Level: 3–8 years

Job Summary:
We are seeking a skilled and experienced QA Engineer with a strong technical background in networking, automation, API testing, and performance testing. The ideal candidate will be proficient in Postman API testing, Java programming, and testing frameworks such as JMeter, Selenium, REST Assured, and Robot Framework. Familiarity with network architecture, including ORAN, SMO, RIC, and OSS/BSS, is a plus.

Key Responsibilities:
- Perform functional, performance, and load testing of web applications using tools such as JMeter and Postman.
- Develop, maintain, and execute automated test scripts using Selenium with Java for web application testing.
- Design and implement tests for RESTful APIs using REST Assured (a Java library) for testing HTTP responses and ensuring proper API functionality (a Python analogue is sketched after this posting).
- Collaborate with development teams to identify and resolve software defects through effective debugging and testing.
- Utilize the Robot Framework with Python for acceptance testing and acceptance test-driven development.
- Conduct end-to-end testing and ensure that systems meet all functional requirements.
- Ensure quality and compliance of software releases by executing thorough test cases and evaluating product quality.

Required Skill Set:
- Postman API testing: experience in testing RESTful APIs and web services using Postman.
- Java: strong knowledge of Java for test script development, particularly with Selenium and REST Assured.
- JMeter: experience in performance, functional, and load testing using Apache JMeter.
- Selenium with Java: expertise in Selenium WebDriver for automated functional testing, including script development and maintenance in Java.
- REST Assured: proficiency in the REST Assured framework for testing REST APIs and validating HTTP responses.
- Robot Framework: hands-on experience with the Robot Framework for acceptance testing and acceptance test-driven development in Python.

Good-to-Have Skill Set:
- Networking knowledge: deep understanding of networking concepts, specifically around RAN elements and network architectures (ORAN, SMO, RIC, OSS).
- ORAN/SMO/RIC/OSS architecture: in-depth knowledge of ORAN (Open Radio Access Network), SMO (Service Management Orchestration), RIC (RAN Intelligent Controller), and OSS (Operations Support Systems) architectures.
- Monitoring tools: experience with Prometheus, Grafana, and Kafka for real-time monitoring and performance tracking of applications and systems.
- Keycloak: familiarity with Keycloak for identity and access management.
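For a flavour of the API checks this role covers, here is a minimal pytest sketch in Python using requests. The posting's actual stack is Postman and REST Assured (Java); this Python analogue only illustrates the same kind of status, header, and body assertions, against a hypothetical endpoint.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical service under test


def test_get_user_returns_expected_fields():
    # The same status/header/body assertions a Postman collection or a
    # REST Assured chain would express, written with requests + pytest.
    resp = requests.get(f"{BASE_URL}/users/42", timeout=10)
    assert resp.status_code == 200
    assert resp.headers["Content-Type"].startswith("application/json")
    body = resp.json()
    assert body["id"] == 42
    assert "email" in body
```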
Posted 2 days ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Media.net:
Media.net is a leading, global ad tech company that focuses on creating the most transparent and efficient path for advertiser budgets to become publisher revenue. Our proprietary contextual technology is at the forefront of enhancing programmatic buying, the latest industry standard in ad buying for digital platforms. The Media.net platform powers major global publishers and ad-tech businesses at scale across ad formats like display, video, mobile, native, as well as search. Media.net's U.S. HQ is based in New York, and the Global HQ is in Dubai. With office locations and consultant partners across the world, Media.net takes pride in the value-add it offers to its 50+ demand and 21K+ publisher partners, in terms of both products and services.

Role Overview
As a Data Applications Lead, you'll own end-to-end delivery of data modules and platforms that power core business use cases across real-time bidding, reporting, and analytics. You will lead a small team of engineers and work closely with data science, analytics, and platform engineering teams. You'll need a strong foundation in SQL and databases, coupled with working experience on big data systems. This is a hands-on leadership role for someone who enjoys solving high-scale data challenges while mentoring others and driving execution.

What You'll Be Doing
- Lead the development and enhancement of data-heavy modules across ad serving, analytics, and reporting systems.
- Design, optimize, and maintain scalable data pipelines using a mix of SQL, NoSQL, and Big Data tools.
- Work with streaming platforms like Apache Kafka and Flink for real-time data processing (a consumer sketch follows this posting).
- Ensure data reliability, low-latency access, and robustness across internal and external-facing systems.
- Collaborate with product managers, analysts, and other engineering teams to translate business needs into data solutions.
- Drive code reviews, mentor junior engineers, and contribute to architectural discussions.
- Ensure best practices around data governance, schema evolution, and performance tuning.

Requirements:
- 5–8 years of experience in data engineering or backend roles with a strong data focus.
- Solid command of SQL and RDBMS concepts: query optimization, indexing, schema design.
- Hands-on experience with Big Data and NoSQL technologies: Hive, Spark, Presto, HDFS, Kafka, MongoDB, Cassandra, etc.
- Strong programming skills in Python, Java, or Scala.
- Experience with large-scale data processing, batch and stream pipelines, and performance tuning.
- Exposure to distributed systems or cloud-based data platforms (AWS, GCP, or Azure) is a plus.
- Ability to take ownership and drive outcomes while mentoring and growing a small team.

What's in It for You
- Work on internet-scale data systems: billions of rows processed on a regular basis.
- Solve high-impact problems at the intersection of advertising, machine learning, and data platforms.
- Be part of a high-calibre team with ownership and autonomy from day one.
- A tech-first culture that encourages learning, experimentation, and innovation.
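To illustrate the kind of real-time consumption described above, here is a minimal sketch using the kafka-python client. The broker address, topic name, and event fields are hypothetical, and the in-loop filter is a toy stand-in for the real Kafka/Flink stream processing.

```python
import json

from kafka import KafkaConsumer  # kafka-python client

# Broker address, topic, and event fields are hypothetical.
consumer = KafkaConsumer(
    "bid-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Toy filter standing in for real Kafka/Flink stream processing.
    if event.get("price", 0) > 100:
        print(event["campaign_id"], event["price"])
```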
Posted 2 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

About the Role
As a Junior/Senior Data Engineer, you'll take the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity (see the sketch after this posting).
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.

Skills
- Cloud: Azure/GCP/AWS
- DE Technologies: ADF, BigQuery, AWS Glue, etc.
- Data Lake: Snowflake, Databricks, etc.

Mandatory Skill Sets
Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.

Preferred Skill Sets
Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.

Years of Experience Required: 4–7 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Data Engineering
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 19 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
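As one concrete example of the "advanced data quality checks" responsibility above, here is a minimal pandas sketch. The column names, input file, and the 1% null tolerance are assumptions for illustration only.

```python
import pandas as pd


def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return human-readable data-quality failures for an orders dataset."""
    failures = []
    if df["order_id"].duplicated().any():       # hypothetical business key
        failures.append("duplicate order_id values")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:                        # assumed 1% null tolerance
        failures.append(f"customer_id null rate {null_rate:.2%} exceeds 1%")
    if (df["amount"] < 0).any():                # basic range/validity check
        failures.append("negative amount values")
    return failures


checks = run_quality_checks(pd.read_parquet("orders.parquet"))  # hypothetical file
if checks:
    raise ValueError("data quality failures: " + "; ".join(checks))
```

Raising on failure, rather than silently filtering, is a deliberate choice: it keeps bad batches out of downstream tables and surfaces the problem to the pipeline's alerting.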
Posted 2 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

About the Role
As a Junior/Senior Data Engineer, you'll take the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity.
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.

Skills
- Cloud: Azure/GCP/AWS
- DE Technologies: ADF, BigQuery, AWS Glue, etc.
- Data Lake: Snowflake, Databricks, etc.

Mandatory Skill Sets
Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.

Preferred Skill Sets
Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.

Years of Experience Required: 4–7 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Field of Study Required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 19 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
Posted 2 days ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

About the Role
As a Junior/Senior Data Engineer, you'll take the lead in designing and maintaining complex data ecosystems. Your experience will be instrumental in optimizing data processes, ensuring data quality, and driving data-driven decision-making within the organization.

Responsibilities
- Architecting and designing complex data systems and pipelines.
- Leading and mentoring junior data engineers and team members.
- Collaborating with cross-functional teams to define data requirements.
- Implementing advanced data quality checks and ensuring data integrity.
- Optimizing data processes for efficiency and scalability.
- Overseeing data security and compliance measures.
- Evaluating and recommending new technologies to enhance data infrastructure.
- Providing technical expertise and guidance for critical data projects.

Required Skills & Experience
- Proficiency in designing and building complex data pipelines and data processing systems.
- Leadership and mentorship capabilities to guide junior data engineers and foster skill development.
- Strong expertise in data modeling and database design for optimal performance.
- Skill in optimizing data processes and infrastructure for efficiency, scalability, and cost-effectiveness.
- Knowledge of data governance principles, ensuring data quality, security, and compliance.
- Familiarity with big data technologies like Hadoop, Spark, or NoSQL.
- Expertise in implementing robust data security measures and access controls.
- Effective communication and collaboration skills for cross-functional teamwork and defining data requirements.

Skills
- Cloud: Azure/GCP/AWS
- DE Technologies: ADF, BigQuery, AWS Glue, etc.
- Data Lake: Snowflake, Databricks, etc.

Mandatory Skill Sets
Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.

Preferred Skill Sets
Cloud: Azure/GCP/AWS; DE Technologies: ADF, BigQuery, AWS Glue, etc.; Data Lake: Snowflake, Databricks, etc.

Years of Experience Required: 4–7 years
Education Qualification: BE/BTech, ME/MTech, MBA, MCA
Degrees/Field of Study Required: Master of Business Administration, Master of Engineering, Bachelor of Engineering
Degrees/Field of Study Preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 19 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
As an airline, safety is our most important principle. And our Corporate Safety team is responsible for making sure safety is top of mind in every action we take. From conducting flight safety investigations and educating pilots on potential safety threats to implementing medical programs and helping prevent employee injuries, our team is instrumental in running a safe and successful airline for our customers and employees.

Job Overview and Responsibilities
Corporate safety is integral to ensuring a safe workplace for our employees and travel experience for our customers. This role is responsible for supporting the development and implementation of a cohesive safety data strategy and supporting the Director of Safety Management Systems (SMS) in growing United's Corporate Safety Predictive Analytics capabilities. This Senior Analyst will serve as a subject matter expert for corporate safety data analytics and predictive insight strategy and execution. This position will be responsible for supporting new efforts to deliver insightful data analysis and build new key metrics for use by the entire United Safety organization, with the goal of enabling data-driven decision making and understanding for corporate safety. The Senior Analyst will be responsible for becoming the subject matter expert in several corporate-safety-specific data streams and leveraging this expertise to deliver insights which are actionable and allow for a predictive approach to safety risk mitigation.

- Develop and implement predictive/prescriptive data analytics workflows for safety data management and streamlining processes
- Collaborate with Digital Technology and United operational teams to analyze, predict, and reduce safety risks and provide measurable solutions
- Partner with the Digital Technology team to develop streamlined and comprehensive data analytics workstreams
- Support United's Safety Management System (SMS) with predictive data analytics by designing and developing statistical models (a model sketch follows this posting)
- Manage and maintain the project portfolio of the SMS data team
- Areas of focus will include, but are not limited to:
  - Predictive and prescriptive analytics
  - Training and validating models
  - Creation and maintenance of standardized corporate safety performance metrics
  - Design and implementation of new data pipelines
  - Delivery of prescriptive analysis insights to internal stakeholders
- Design and maintain new and existing corporate safety data pipelines and analytical workflows
- Create and manage new methods for data analysis which provide prescriptive and predictive insights on corporate safety data
- Partner with US- and India-based internal partners to establish new data analysis workflows and provide analytical support to corporate and divisional work groups
- Collaborate with corporate and divisional safety partners to ensure standardization and consistency between all safety analytics efforts enterprise-wide
- Provide support and ongoing subject matter expertise regarding a set of high-priority corporate safety datasets and ongoing analytics efforts on those datasets
- Provide tracking and status-update reporting on ongoing assignments, projects, and efforts to US- and India-based leaders

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.

Qualifications
What's needed to succeed (Minimum Qualifications):
- Bachelor's degree in computer science, data science, information systems, engineering, or another quantitative field (e.g., mathematics, statistics, economics)
- 4+ years of experience in data analytics, predictive modeling, or statistics
- Expert-level SQL skills
- Experience with Microsoft SQL Server Management Studio and hands-on experience working with massive data sets
- Proficiency writing complex code using both traditional and modern technologies/languages (e.g., Python, HTML, JavaScript, Power Automate, Spark, Node) for queries, procedures, and analytic processing to create usable data insight
- Ability to study/understand business needs, then design a data/technology solution that connects business processes with quantifiable outcomes
- Strong project management and communication skills
- 3–4 years working with complex data (data analytics, information science, data visualization, or another relevant quantitative field)
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English (written and spoken)
- Successful completion of an interview is required to meet job qualifications
- Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
- Master's degree
- ML/AI experience
- Experience with PySpark, Apache Spark, or Hadoop to deal with massive data sets
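To make the "design and develop statistical models" responsibility concrete, here is a minimal scikit-learn sketch of a predictive classifier. The dataset, column names, and incident label are entirely hypothetical; a real SMS model would be built on the safety data streams named above and validated far more thoroughly.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Entirely hypothetical safety-event dataset and columns, for illustration only.
df = pd.read_csv("safety_events.csv")
X = df[["flight_hours", "turnaround_min", "weather_score"]]
y = df["incident_flag"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# AUC as a first validation metric before deeper model review.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```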
Posted 2 days ago
0.0 - 1.0 years
1 - 5 Lacs
Niranjanpur, Indore, Madhya Pradesh
Remote
Job Title: Jr. Data Engineer
Location: Indore (Onsite)
Experience: 0–2 Years
Industry: Information Technology
Employment Type: Full-time

Job Summary:
We are looking for a motivated and detail-oriented Junior Data Engineer to join our team onsite in Indore. The ideal candidate should have a solid understanding of Python and SQL, with a passion for data processing, transformation, and analytics. Strong communication skills, confidence, and the ability to learn quickly are key for success in this role.

Key Responsibilities:
- Assist in designing, developing, and maintaining ETL pipelines and data workflows (a minimal example follows this posting).
- Work with structured and unstructured data using Python and SQL.
- Support data collection, cleansing, transformation, and validation activities.
- Collaborate with data scientists, analysts, and software engineers to support data needs.
- Troubleshoot data-related issues and ensure high data quality and integrity.
- Create and maintain documentation for data pipelines and workflows.
- Continuously improve data engineering processes and performance.

Key Requirements:
- 0–2 years of experience in a Data Engineering or related role.
- Good knowledge of Python and SQL is a must.
- Familiarity with databases like MySQL, PostgreSQL, or SQL Server.
- Understanding of data structures, algorithms, and basic ETL concepts.
- Strong analytical, problem-solving, and communication skills.
- Ability to work independently and collaboratively in a fast-paced environment.
- Self-motivated, confident, and eager to learn new technologies.

Nice to Have:
- Exposure to cloud platforms like AWS, Azure, or GCP.
- Experience with data visualization tools like Power BI, Tableau, or Excel dashboards.
- Basic understanding of data warehousing, big data, or streaming technologies.
- Familiarity with tools like Airflow, Apache Spark, or Pandas.

Perks & Benefits:
- Competitive salary with growth opportunities.
- Mentorship from experienced data professionals.
- Hands-on experience in real-world projects.
- Onsite work in a collaborative office environment.
- Performance-based incentives and learning support.

Job Types: Full-time, Permanent
Pay: ₹180,000.00 - ₹500,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 1 year (Preferred)
Work Location: In person
Application Deadline: 30/08/2025
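A minimal sketch of the ETL work described above, using pandas with SQLite standing in for the MySQL/PostgreSQL/SQL Server databases the posting names. File, table, and column names are hypothetical.

```python
import sqlite3

import pandas as pd

# Extract: file and column names are hypothetical.
raw = pd.read_csv("raw_orders.csv")

# Transform: validate the key, normalise dates, drop duplicates.
clean = (
    raw.dropna(subset=["order_id"])
       .assign(order_date=lambda d: pd.to_datetime(d["order_date"]))
       .drop_duplicates("order_id")
)

# Load: SQLite stands in for MySQL/PostgreSQL/SQL Server here.
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("orders", conn, if_exists="replace", index=False)
```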
Posted 2 days ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Customers trust the Alation Data Intelligence Platform for self-service analytics, cloud transformation, data governance, and AI-ready data, fostering data-driven innovation at scale. With more than $340M in funding, valued at over $1.7 billion, and nearly 600 customers, including 40% of the Fortune 100, Alation helps organizations realize value from data and AI initiatives.

Alation has been recognized in 2024 as one of Inc. Magazine's Best Workplaces for the fifth time, a testament to our commitment to creating an inclusive, innovative, and collaborative environment. Collaboration is at the forefront of everything we do. We strive to bring diverse perspectives together and empower each team member to contribute their unique strengths to live out our values each day. These are: Move the Ball, Build for the Long Term, Listen Like You're Wrong, and Measure Through Customer Impact.

Joining Alation means being part of a fast-paced, high-growth company where every voice matters, and where we're shaping the future of data intelligence with AI-ready data. Join us on our journey to build a world where data culture thrives and curiosity is celebrated each day!

Job Description
Join us! We are looking for an experienced Staff Technical Support Engineer to join our advanced support team. You will provide advanced-level technical support, helping our customers integrate with the Alation platform. You will be responsible for troubleshooting and debugging complex issues as well as acting as an escalation point with customers and internal teams.

What You'll Be Doing
- Provide advanced-level technical support to Alation customers, partners, prospects, and other support engineers.
- Specialize in at least one of the support specialization areas and serve as SME for internal and external customers.
- Contribute to the Alation Support Knowledge Base by regularly authoring, editing and updating technical documentation such as KB articles, runbooks, community FAQs, product documentation, etc.
- Facilitate internal and external technical enablement sessions.
- Build and utilize complex lab setups to replicate and resolve problems.

You Should Have
- A CS degree and at least 5 years of experience as a support engineer providing enterprise software application support.
- Experience troubleshooting Linux and running shell commands.
- Experience with relational databases, such as Oracle and Postgres. SQL is a must.
- Ability to diagnose and debug applications written in Java and/or Python.
- Experience with web servers, such as Apache and Nginx.
- Experience with REST APIs.

A big plus if you have experience in the following areas:
- Postgres (DB internals)
- JDBC drivers
- Elasticsearch, NoSQL, MongoDB
- Hadoop ecosystem (Hive, HBase)
- Cloud technologies and frameworks such as Kubernetes and Docker

Alation, Inc. is an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, veteran status, genetic information, ethnicity, citizenship, or any other characteristic protected by law. The Company will strive to provide reasonable accommodations to permit qualified applicants who have a need for an accommodation to participate in the hiring process (e.g., accommodations for a job interview) if so requested. This company participates in E-Verify. Click on any of the links below to view or print the full poster. E-Verify and Right to Work.
Posted 2 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer Consultant/Expert – GCP Data Engineer
Location: Chennai (Onsite)
Job ID: 34350
Employment Type: Contract
Budget: Up to ₹18 LPA
Assessment: Google Cloud Platform Engineer (HackerRank or equivalent)
Notice Period: Immediate joiners preferred

Role Summary
We are seeking a highly skilled GCP Data Engineer to support the modernization of enterprise data platforms. The ideal candidate will be responsible for designing and implementing scalable, high-performance data pipelines and solutions on Google Cloud Platform (GCP). You will work with large-scale datasets, integrating legacy and modern systems to enable advanced analytics and AI/ML capabilities. The role requires a deep understanding of GCP services, strong data engineering skills, and the ability to collaborate across teams to deliver robust data solutions.

Key Responsibilities
- Design and develop production-grade data engineering solutions using GCP services such as BigQuery, Dataflow, Dataform, Dataproc, Cloud Composer, Cloud SQL, Airflow, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, Pub/Sub, and App Engine.
- Develop batch and real-time streaming pipelines for data ingestion, transformation, and processing (a pipeline sketch follows this posting).
- Integrate data from multiple sources, including legacy and cloud-based systems.
- Collaborate with stakeholders and product teams to gather data requirements and align technical solutions to business needs.
- Conduct in-depth data analysis and impact assessments for data migrations and transformations.
- Implement CI/CD pipelines using tools like Tekton, Terraform, and GitHub.
- Optimize data workflows for performance, scalability, and cost-effectiveness.
- Lead and mentor junior engineers; contribute to knowledge sharing and documentation.
- Champion data governance, data quality, security, and compliance best practices.
- Utilize monitoring/logging tools to proactively address system issues.
- Deliver high-quality code using Agile methodologies, including TDD and pair programming.

Required Skills & Experience
- GCP Data Engineer Certification.
- Minimum 5+ years of experience designing and implementing complex data pipelines.
- 3+ years of hands-on experience with GCP.
- Strong expertise in SQL, Python, Java, and/or Apache Beam.
- Strong expertise in Airflow, Dataflow, Dataproc, Dataform, Data Fusion, BigQuery, Cloud SQL, and Pub/Sub.
- Infrastructure-as-code tools such as Terraform.
- DevOps tools: GitHub, Tekton, Docker.
- Solid understanding of microservice architecture, CI/CD integration, and container orchestration.
- Experience with data security, governance, and compliance in cloud environments.

Preferred Qualifications
- Experience with real-time data streaming using Apache Kafka or Pub/Sub.
- Exposure to AI/ML tools or integration with AI/ML pipelines.
- Working knowledge of data science principles applied to large datasets.
- Experience in a regulated domain (e.g., financial services or insurance).
- Experience with project management and agile tools (e.g., JIRA, Confluence).
- Strong analytical and problem-solving mindset.
- Effective communication skills and ability to collaborate with cross-functional teams.

Education
Required: Bachelor's degree in Computer Science, Engineering, or a related technical field.
Preferred: Master's degree or certifications in relevant domains.

Skills: GitHub, BigQuery, Airflow, ML, Pub/Sub, Terraform, Python, Apache Beam, Dataflow, GCP, GCP Data Engineer Certification, Tekton, Java, Dataform, Docker, Data Fusion, SQL, Dataproc, Cloud SQL, Cloud
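As a sketch of the streaming work this role describes, here is a minimal Apache Beam (Python) pipeline reading from Pub/Sub and writing to BigQuery. The project, topic, table, and schema are placeholders, and the pipeline options are trimmed to the essentials; on GCP this would typically run on the Dataflow runner.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Project, topic, table, and schema are placeholders; options are trimmed.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-proj/topics/events")
     | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
     | "WriteBQ" >> beam.io.WriteToBigQuery(
           "my-proj:analytics.events",
           schema="user_id:STRING,amount:FLOAT,ts:TIMESTAMP",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```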
Posted 2 days ago
0.0 - 2.0 years
3 - 10 Lacs
Niranjanpur, Indore, Madhya Pradesh
Remote
Job Title: Sr. Data Engineer
Experience: 2+ Years
Location: Indore (onsite)
Industry: IT
Job Type: Full time

Roles and Responsibilities:
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (a DAG sketch follows this posting).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge:
1. Core Skills:
- Proficient in Python (libraries: Pandas, NumPy) and SQL.
- Knowledge of data modeling techniques, including Entity-Relationship (ER) diagrams, dimensional modeling, and data normalization.
- Familiarity with ETL processes and tools like Azure Data Factory (ADF) and SSIS (SQL Server Integration Services).
2. Cloud Expertise:
- AWS services: Glue, Redshift, Lambda, EKS, RDS, Athena
- Azure services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
- Snowflake
3. Big Data and Workflow Automation:
- Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
- Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements:
- Education: Bachelor's degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
- Experience: Freshers with a strong understanding, internships, and relevant academic projects are welcome; 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
- Other skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders; ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
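Here is the minimal Apache Airflow DAG sketch referenced in the responsibilities above, for the orchestration work the posting describes. The task bodies are placeholders, and the DAG id and schedule are assumptions.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # pull from source systems


def transform():
    ...  # clean and reshape


def load():
    ...  # write to the warehouse


with DAG(
    dag_id="daily_etl",                  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```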
Posted 2 days ago
4.0 - 6.0 years
0 Lacs
New Delhi, Delhi, India
On-site
Max. Salary: 80K per month
Working Model: Hybrid (2-3 days from Bangalore office; tentative location: Indiranagar, HSR, Bellandur)
Min. Exp: 4-6 Years
Role: Full Time

Must have:
- GCP or other cloud experience
- Docker and/or k8s experience

Nice to have:
- CI/CD pipelines
- Apache Airflow, dbt
- Working experience of building data platforms
- Experience integrating many open-source platforms and coordinating workflows
- Vertical AI solutions development using LLMs
- Working exposure to AI-assisted coding: Cursor, Claude Code, GitHub Copilot, etc.

They'll be working along with my US team (Chicago based), so expect a few hours' overlap. For the initial couple of weeks this will be a must; later on it can be based on requirements and work. There are daily global stand-ups, so expect them to be available for the call daily.

Job Type: Full-time
Pay: ₹70,000.00 - ₹80,000.00 per month
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly skilled and motivated Python, AWS, and Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
Posted 2 days ago
2.0 years
2 - 5 Lacs
Cochin
On-site
Job description
Unleash Your WordPress Magic at SciWiz!
We're a leading web design and development company in Kerala, crafting digital experiences for clients ranging from local startups to Fortune 500s. If you're a WordPress whiz who thrives on challenges and wants to make a real impact, join our passionate team!

Here's what you'll do:
- Build stunning and functional websites and web applications using WordPress, WooCommerce, HTML5, CSS3, and JavaScript.
- Develop and maintain both internal and external web applications, diving into code to solve problems and build cool features.
- Collaborate with designers and other departments to bring ideas to life.
- Implement features, fix bugs, and optimize performance with your ninja-level debugging skills.

We're looking for someone who:
- Lives and breathes WordPress: you have at least 2 years of experience building with WordPress and understand both front-end and back-end development.
- Masters the LAMP stack: PHP, MySQL, and familiarity with Linux & Apache are your second language.
- Enjoys a challenge: you're eager to learn unfamiliar technologies and frameworks like React, Vue, Bootstrap, jQuery, and Git.
- Collaborates like a pro: you communicate effectively, work well in a team, and thrive in a fast-paced environment.
- Has a passion for quality: you write clean, efficient code, follow best practices, and test thoroughly.

Bonus points if you have:
- Experience with Magento and other PHP frameworks.
- A track record of contributing to open-source projects.
- A portfolio showcasing your awesome WordPress creations.

Why join us?
- Make a difference: be part of a team that builds impactful projects for amazing clients.
- Learn and grow: we encourage continuous learning and offer opportunities to master new skills.
- Great work environment: collaborative, supportive, and fun; we take work-life balance seriously.
- Competitive salary and benefits: we value your talent and offer perks to match.

Ready to craft amazing WordPress experiences with us? Send us your resume and portfolio – we can't wait to meet you!

Job Types: Full-time, Permanent
Pay: ₹20,000.00 - ₹45,000.00 per month
Education: Bachelor's (Preferred)
Experience: total work: 1 year (Preferred); WordPress: 2 years (Required)
Posted 2 days ago
8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, we're all in to shape your future with confidence. We'll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY Consulting – Data and Analytics – Manager – Data Integration Architect – Medidata Platform Integration

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional and technical capabilities and product knowledge. EY's financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers & asset management firms, and insurance firms from leading Fortune 500 companies. Within EY's Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients' decision-making.

The opportunity
We're looking for an experienced Data Integration Architect with 8+ years in clinical or life sciences domains to lead the integration of Medidata platforms into enterprise clinical trial systems. This role offers the chance to design scalable, compliant data integration solutions, collaborate across global R&D systems, and contribute to data-driven innovation in the healthcare and life sciences space. You will play a key role in aligning integration efforts with organizational architecture and compliance standards while engaging with stakeholders to ensure successful project delivery.

Your Key Responsibilities
- Design and implement scalable integration solutions for large-scale clinical trial systems involving Medidata platforms.
- Ensure integration solutions comply with regulatory standards such as GxP and CSV.
- Establish and maintain seamless system-to-system data exchange using middleware platforms (e.g., Apache Kafka, Informatica) or direct API interactions (a sketch appears at the end of this posting).
- Collaborate with cross-functional business and IT teams to gather integration requirements and translate them into technical specifications.
- Align integration strategies with enterprise architecture and data governance frameworks.
- Provide support to program management through data analysis, integration status reporting, and risk assessment contributions.
- Interface with global stakeholders to ensure smooth integration delivery and resolve technical challenges.
- Mentor junior team members and contribute to knowledge sharing and internal learning initiatives.
- Participate in architectural reviews and provide recommendations for continuous improvement and innovation in integration approaches.
- Support business development efforts by contributing to solution proposals, proof of concepts (POCs), and client presentations.

Skills and Attributes for Success
- Use a solution-driven approach to design and implement compliant integration strategies for clinical data platforms like Medidata.
- Strong communication, stakeholder engagement, and documentation skills, with experience presenting complex integration concepts clearly.
- Proven ability to manage system-to-system data flows using APIs or middleware, ensuring alignment with enterprise architecture and regulatory standards.

To qualify for the role, you must have
- Experience: minimum 8 years in data integration or architecture roles, with a strong preference for experience in clinical research or life sciences domains.
- Education: must be a graduate, preferably BE/B.Tech/BCA/BSc IT.
- Technical skills: hands-on expertise in one or more integration platforms such as Apache Kafka, Informatica, or similar middleware technologies; experience implementing API-based integrations.
- Domain knowledge: in-depth understanding of clinical trial data workflows, integration strategies, and regulatory frameworks including GxP and CSV compliance.
- Soft skills: strong analytical thinking, effective communication, and stakeholder management skills with the ability to collaborate across business and technical teams.
- Additional attributes: ability to work independently in a fast-paced environment, lead integration initiatives, and contribute to solution design and architecture discussions.

Ideally, you'll also have
- Hands-on experience with ETL tools and clinical data pipeline orchestration frameworks.
- Familiarity with broader clinical R&D platforms such as Oracle Clinical, RAVE, or other EDC systems.
- Prior experience leading small integration teams and working directly with cross-functional stakeholders in regulated environments.

What we look for
- A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment.
- An opportunity to be a part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide.
- Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries.

What working at EY offers
At EY, we're dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
- Support, coaching and feedback from some of the most engaging colleagues around
- Opportunities to develop new skills and progress your career
- The freedom and flexibility to handle your role in a way that's right for you

About EY
As a global leader in assurance, tax, transaction and Consulting services, we're using the finance products, expertise and systems we've developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we'll make our ambition to be the best employer by 2020 a reality.

If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world.
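To illustrate the direct API interactions mentioned in the responsibilities above, here is a minimal Python sketch of a paginated pull from a clinical EDC-style REST API. The endpoint, auth scheme, payload shape, and pagination field are all assumptions for illustration, not the actual Medidata API.

```python
import requests

# Endpoint, auth scheme, and pagination field are assumptions; this is an
# illustrative EDC-style REST pull, not the actual Medidata API.
BASE = "https://api.example-edc.com/v1"
session = requests.Session()
session.headers.update({"Authorization": "Bearer <token>"})


def fetch_study_subjects(study_id: str) -> list[dict]:
    """Pull all subject records for one study, following simple page links."""
    url = f"{BASE}/studies/{study_id}/subjects"
    records = []
    while url:
        resp = session.get(url, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        records.extend(payload["items"])
        url = payload.get("next")  # hypothetical pagination link
    return records
```

In a regulated GxP context, a pull like this would additionally need audit logging and checksum or record-count reconciliation before the data is considered fit for downstream use.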
Apply now
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
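The middleware pattern this role centres on (decoupling clinical source systems from downstream consumers via a broker such as Kafka) can be sketched briefly. The following is a minimal, hedged Python example using kafka-python and requests; the endpoint URL, bearer token, topic name, and record fields are illustrative placeholders, not real Medidata interfaces.

```python
# Hedged sketch of broker-mediated, system-to-system exchange: poll a source
# system's REST API and publish each record to Kafka for downstream consumers.
import json

import requests
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="broker:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

resp = requests.get(
    "https://source-system.example.com/api/studies",  # hypothetical endpoint
    headers={"Authorization": "Bearer <token>"},      # placeholder credential
    timeout=30,
)
resp.raise_for_status()

for record in resp.json():
    # Publish to a hypothetical topic; downstream systems subscribe rather
    # than integrating point-to-point with the source.
    producer.send("clinical.study.events", value=record)
producer.flush()
```

In a GxP setting the same flow would additionally carry audit metadata and schema validation, which the sketch omits for brevity.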
Posted 2 days ago
3.0 years
5 - 10 Lacs
Kazhakuttam
On-site
About the Role
You will architect, build and maintain end-to-end data pipelines that ingest 100 GB+ of NGINX/web-server logs from Elasticsearch, transform them into high-quality features, and surface actionable insights and visualisations for security analysts and ML models. Acting as both a Data Engineer and a Behavioural Data Analyst, you will collaborate with security, AI and frontend teams to ensure low-latency data delivery, rich feature sets and compelling dashboards that spot anomalies in real time.
Key Responsibilities
ETL & Pipeline Engineering: Design and orchestrate scalable batch/near-real-time ETL workflows to extract raw logs from Elasticsearch. Clean, normalize and partition logs for long-term storage and fast retrieval. Optimize Elasticsearch indices, queries and retention policies for performance and cost.
Feature Engineering & Feature Store: Assist in the development of robust feature-engineering code in Python and/or PySpark. Define schemas and loaders for a feature store (Feast or similar). Manage historical back-fills and real-time feature look-ups, ensuring versioning and reproducibility.
Behaviour & Anomaly Analysis: Perform exploratory data analysis (EDA) to uncover traffic patterns, bursts, outliers and security events across IPs, headers, user agents and geo data. Translate findings into new or refined ML features and anomaly indicators.
Visualisation & Dashboards: Create time-series, geo-distribution and behaviour-pattern visualisations for internal dashboards. Partner with frontend engineers to test UI requirements.
Monitoring & Scaling: Implement health and latency monitoring for pipelines; automate alerts and failure recovery. Scale infrastructure to support rapidly growing log volumes.
Collaboration & Documentation: Work closely with ML, security and product teams to align data strategy with platform goals. Document data lineage, dictionaries, transformation logic and behavioural assumptions.
Minimum Qualifications
Education – Bachelor’s or Master’s in Computer Science, Data Engineering, Analytics, Cybersecurity or a related field.
Experience – 3+ years building data pipelines and/or performing data analysis on large log datasets.
Core Skills
Python (pandas, numpy, elasticsearch-py, Matplotlib, plotly, seaborn; PySpark desirable)
Elasticsearch & ELK stack query optimisation
SQL for ad-hoc analysis
Workflow orchestration (Apache Airflow, Prefect or similar)
Data modelling, versioning and time-series handling
Familiarity with visualisation tools (Kibana, Grafana)
DevOps – Docker, Git, CI/CD best practices
Nice-to-Have
Kafka, Fluentd or Logstash experience for high-throughput log streaming
Web-server log expertise (NGINX/Apache, HTTP semantics)
Cloud data platform deployment on AWS/GCP/Azure
Hands-on exposure to feature stores (Feast, Tecton) and MLOps
Prior work on anomaly-detection or cybersecurity analytics systems
Why Join Us?
You’ll sit at the nexus of data engineering and behavioural analytics, turning raw traffic logs into the lifeblood of a cutting-edge AI security product. If you thrive on building resilient pipelines and diving into the data to uncover hidden patterns, we’d love to meet you.
Job Type: Full-time
Pay: ₹500,000.00 - ₹1,000,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Work Location: In person
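The extract-and-featurise step this role describes can be illustrated with a short sketch using the libraries the posting names (elasticsearch-py and pandas). The index pattern and field names (client_ip, status, bytes) are assumptions about the log mapping, not a definitive schema.

```python
# Hedged sketch: scroll recent NGINX access logs out of Elasticsearch and
# derive simple per-IP behavioural features suitable for anomaly detection.
import pandas as pd
from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan  # efficient scroll over large results

es = Elasticsearch("http://localhost:9200")

docs = scan(
    es,
    index="nginx-logs-*",  # assumed index pattern
    query={"query": {"range": {"@timestamp": {"gte": "now-1h"}}}},
)
df = pd.DataFrame(hit["_source"] for hit in docs)

# Status codes may arrive as strings depending on the mapping; coerce first.
df["status"] = pd.to_numeric(df["status"], errors="coerce")

# Per-IP aggregates commonly used as anomaly-detection features.
features = df.groupby("client_ip").agg(
    request_count=("status", "size"),
    error_rate=("status", lambda s: (s >= 500).mean()),
    bytes_total=("bytes", "sum"),
)
print(features.sort_values("request_count", ascending=False).head())
```

In production the same aggregates would typically be back-filled in PySpark and registered in a feature store such as Feast, with the helpers-based scan reserved for ad-hoc EDA.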
Posted 2 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
EY GDS – Data and Analytics (D&A) – Cloud Architect - Manager
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.
The opportunity
We’re looking for Senior Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.
Your Key Responsibilities
Proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in the client and partner organization in BCM, WAM and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, as well as managing in-flight projects focused on cloud and big data.
Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]
Understand current and future-state enterprise architecture.
Contribute to various technical streams during implementation of the project.
Provide product- and design-level technical best practices.
Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop and deliver technology solutions.
Define and develop client-specific best practices around data management within a Hadoop environment or cloud environment.
Recommend design alternatives for data ingestion, processing and provisioning layers.
Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop and Spark.
Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming and related technologies.
Skills And Attributes For Success
Architect experience in designing highly scalable solutions on Azure, AWS and GCP.
Strong understanding of, and familiarity with, all Azure/AWS/GCP/Big Data ecosystem components.
Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming.
Hands-on experience with major components like cloud ETL services, Spark and Databricks.
Experience working with NoSQL in at least one of the data stores - HBase, Cassandra, MongoDB.
Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
Good knowledge of Apache Kafka and Apache Flume.
Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications.
Experience in data security [in transit and at rest].
Strong UNIX operating system concepts and shell scripting knowledge.
To qualify for the role, you must have
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
Excellent communicator (written and verbal, formal and informal).
Ability to multi-task under pressure and work independently with minimal supervision.
Strong verbal and written communication skills.
Must be a team player who enjoys working in a cooperative and collaborative team environment.
Adaptable to new technologies and standards.
Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
Responsibility for the evaluation of technical risks and mapping out mitigation strategies.
Experience in data security [in transit and at rest].
Experience in performance benchmarking enterprise applications.
Working knowledge of at least one cloud platform: AWS, Azure or GCP.
Excellent business communication, consulting, and quality process skills.
Excellent consulting skills.
Excellence in leading solution architecture, design, build and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains.
Minimum 7 years hands-on experience in one or more of the above areas.
Minimum 10 years industry experience.
Ideally, you’ll also have
Strong project management skills
Client management skills
Solutioning skills
What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.
What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
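The real-time ingestion pattern this role describes (Kafka as the live source, Spark Streaming as the processor) is commonly expressed with Spark Structured Streaming. A minimal sketch follows; the topic name, schema, and output paths are placeholders rather than a prescribed design.

```python
# Hedged sketch: ingest a JSON event stream from Kafka with Spark Structured
# Streaming and land the parsed records as files, with checkpointed recovery.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Assumed event schema; a real job would derive this from a schema registry.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_time", TimestampType()),
    StructField("payload", StringType()),
])

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")  # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Checkpointing lets the query resume from the last committed Kafka offsets.
query = (
    stream.writeStream.format("parquet")
    .option("path", "/data/bronze/events")        # placeholder landing path
    .option("checkpointLocation", "/chk/events")  # placeholder checkpoint path
    .start()
)
query.awaitTermination()
```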
Posted 2 days ago
6.0 years
6 - 9 Lacs
Hyderābād
On-site
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide.
Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.
About the Data Platform:
The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focuses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.
What does a Data Infrastructure Engineer do?
A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-usable solutions that reflect the business needs.
Responsibilities will include:
Collaborating across CACI departments to develop and maintain the data platform
Building infrastructure and data architectures in CloudFormation and SAM
Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow
Monitoring and reporting on the data platform performance, usage and security
Designing and applying security and access control architectures to secure sensitive data
You will have:
6+ years of experience in a Data Engineering role.
Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift.
Experience administering databases and data platforms
Good coding discipline in terms of style, structure, versioning, documentation and unit tests
Strong proficiency in CloudFormation, Python and SQL
Knowledge and experience of relational databases such as Postgres and Redshift
Experience using Git for code versioning and lifecycle management
Experience working with Agile principles and ceremonies
Hands-on experience with CI/CD tools such as GitLab
Strong problem-solving skills and ability to work independently or in a team environment
Excellent communication and collaboration skills
A keen eye for detail, and a passion for accuracy and correctness in numbers
Whilst not essential, the following skills would also be useful:
Experience using Jira, or other agile project management and issue tracking software
Experience with Snowflake
Experience with Spatial Data Processing
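The "pipelines as code" responsibility above is typically expressed as an orchestrated DAG. Below is a minimal sketch for Airflow 2.x with the Amazon and common-SQL providers installed; the Glue job, connection ID, and table names are hypothetical placeholders.

```python
# Hedged sketch: an Airflow DAG that runs an AWS Glue transform and then
# refreshes a downstream Redshift materialized view.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    transform = GlueJobOperator(
        task_id="run_glue_transform",
        job_name="curate_events",  # hypothetical Glue job name
        region_name="eu-west-2",
    )
    refresh = SQLExecuteQueryOperator(
        task_id="refresh_mart",
        conn_id="redshift_default",  # assumed Airflow connection
        sql="REFRESH MATERIALIZED VIEW analytics.daily_events;",
    )
    # Dependencies as code: the refresh only runs after the transform succeeds.
    transform >> refresh
```

The same dependency could equally be modelled in Step Functions; Airflow is used here only because the posting names it explicitly.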
Posted 2 days ago
10.0 years
0 Lacs
Telangana
On-site
About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.
About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.
Reporting to the VP COG ECM Enterprise Forms Portfolio Delivery Manager, this role will be responsible for managing and supporting implementation of a new document solution for identified applications within the CCM landscape in APAC. OpenText xPression and Duck Creek have been the corporate document generation tools of choice within Chubb, but xPression is going end of life and will be unsupported from 2025. A new Customer Communications Management (CCM) platform – Quadient Inspire – has been selected by a global working group to replace xPression, and this role covers implementation of the new tool (including migration of existing forms/templates from xPression where applicable). Apart from migrating from xPression, there are multiple existing applications to be replaced with Quadient Inspire.
The role is based in Hyderabad, India, with some travel to other Chubb offices. Although there are no direct line management responsibilities within this role, the successful applicant will be responsible for task management of Business Analysts and an onshore/offshore development team. The role will require the ability to manage multiple project/enhancement streams with a variety of levels of technical/functional scope and across a number of different technologies.
In this role, you will:
Lead the design and development of comprehensive data engineering frameworks and patterns.
Establish engineering design standards and guidelines for the creation, usage, and maintenance of data across COG (Chubb Overseas General).
Drive innovation and build highly scalable real-time data pipelines and data platforms to support the business needs.
Act as mentor and lead for a data engineering organization that is business-focused, proactive, and resilient.
Promote data governance and master/reference data management as a strategic discipline.
Implement strategies to monitor the effectiveness of data management.
Be an engineering leader, coach data engineers, and be an active member of the data leadership team.
Evaluate emerging data technologies and determine their business benefits and impact on the future-state data platform.
Develop and promote a strong data management framework, emphasizing data quality, governance, and compliance with regulatory requirements.
Collaborate with Data Modelers to create data models (conceptual, logical, and physical).
Architect meta-data management processes to ensure data lineage, data definitions, and ownership are well-documented and understood.
Collaborate closely with business leaders, IT teams, and external partners to understand data requirements and ensure alignment with strategic goals.
Act as a primary point of contact for data engineering discussions and inquiries from various stakeholders.
Lead the implementation of data architectures on cloud platforms (AWS, Azure, Google Cloud) to improve efficiency and scalability.
Qualifications
Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field; Master’s degree preferred.
Minimum of 10 years’ experience in data architecture or data engineering roles, with a significant focus on P&C insurance domains preferred.
Proven track record of successful implementation of data architecture within large-scale transformation programs or projects.
Comprehensive knowledge of data modelling techniques and methodologies, including data normalization and denormalization practices.
Hands-on expertise across a wide variety of database (Azure SQL, MongoDB, Cosmos), data transformation (Informatica IICS, Databricks), change data capture and data streaming (Apache Kafka, Apache Flink) technologies.
Proven expertise with data warehousing concepts, ETL processes, and data integration tools (e.g., Informatica, Databricks, Talend, Apache NiFi).
Experience with cloud-based data architectures and platforms (e.g., AWS Redshift, Google BigQuery, Snowflake, Azure SQL Database).
Expertise in ensuring data security patterns (e.g., tokenization, encryption, obfuscation).
Knowledge of insurance policy operations, regulations, and compliance frameworks specific to Consumer lines.
Familiarity with Agile methodologies and experience working in Agile project environments.
Understanding of advanced analytics, AI, and machine learning concepts as they pertain to data architecture.
Why Chubb?
Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience, along with a start-up-like culture, empowers you to achieve impactful results.
Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026.
Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being.
We constantly seek new and innovative ways to excel at work and deliver outstanding results.
Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.
Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision-related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits and car lease that help employees optimally plan their finances.
Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like education reimbursement programs, certification programs and access to global learning programs.
Health and welfare benefits: We care about our employees’ well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns and comprehensive insurance benefits.
Application Process
Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
Step 4: Final interaction with Chubb leadership.
Join Us
With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey.
Apply Now: Chubb External Careers
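One of the qualification areas above, data security patterns, lends itself to a short illustration. The sketch below shows deterministic tokenization with an HMAC, so tokenized records can still be joined and aggregated without exposing the raw identifier; field names are hypothetical, and real key management (KMS/vault) is deliberately out of scope.

```python
# Hedged sketch of a tokenization pattern: replace a sensitive identifier with
# a stable, non-reversible token derived via HMAC-SHA256.
import hashlib
import hmac

SECRET_KEY = b"replace-with-kms-managed-key"  # placeholder; never hard-code keys

def tokenize(value: str) -> str:
    """Return a deterministic token: same input + key always yields the same
    token, enabling joins on tokenized columns across datasets."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"policy_id": "P-1001", "national_id": "ABC123456"}  # hypothetical fields
record["national_id"] = tokenize(record["national_id"])
print(record)  # national_id is now a 64-character hex token
```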
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a highly skilled and experienced Java Developer to join our engineering team. The ideal candidate will be responsible for designing, developing, and maintaining high-performance, scalable, and reliable applications. This role requires a strong understanding of core Java principles, modern frameworks like Spring Boot, microservices architecture, and cloud technologies. You will play a key role in building the next generation of our platform.
Key Responsibilities:
Design, develop, and implement robust and scalable backend services using Java, Spring, and Spring Boot.
Build and maintain microservices-based applications, ensuring high availability, performance, and fault tolerance.
Utilize Object-Oriented Programming (OOP) principles and design patterns to write clean, reusable, and maintainable code.
Develop multi-threaded applications to handle concurrent processes and optimize performance.
Integrate with messaging systems like Apache Kafka for real-time data processing and asynchronous communication.
Work with cloud services, primarily AWS, for deploying and managing applications (e.g., EC2, S3, RDS).
Design and interact with relational databases, writing complex SQL queries and optimizing database performance.
Collaborate with cross-functional teams, including product managers, designers, and other engineers, to define, design, and ship new features.
Write and execute unit, integration, and end-to-end tests to ensure code quality and reliability.
Participate in code reviews, providing constructive feedback to peers to maintain high coding standards.
Required Skills and Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
5-8+ years of experience in software development with a strong focus on Java.
Expertise in Core Java, including a deep understanding of Object-Oriented Programming (OOP) principles and concurrent programming (multi-threading).
Extensive hands-on experience with the Spring and Spring Boot frameworks.
Solid experience in designing and developing microservices-based architectures.
Proven experience working with messaging systems, particularly Apache Kafka.
Hands-on experience with AWS services for building and deploying applications.
Proficiency in database technologies, including writing efficient SQL queries.
Strong understanding of version control systems (e.g., Git).
Experience with build tools like Maven or Gradle.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Strong communication and collaboration skills.
Preferred Skills (Nice to Have):
Experience with containers and orchestration tools like Docker and Kubernetes.
Familiarity with CI/CD pipelines (e.g., Jenkins, GitLab CI).
Knowledge of other databases (e.g., NoSQL databases like MongoDB).
Experience with RESTful API design.
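The asynchronous-communication pattern this role centres on is language-agnostic; a minimal consumer-group sketch is shown in Python (kafka-python) for brevity, although the role itself targets Java/Spring. The topic, group ID, and message shape are placeholders.

```python
# Hedged sketch of Kafka's consumer-group pattern: each consumer instance in
# the group owns a disjoint set of partitions, which is how processing scales
# horizontally across service instances.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "orders",                      # hypothetical topic
    bootstrap_servers="broker:9092",
    group_id="order-service",      # hypothetical consumer group
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    order = message.value
    # Partition and offset identify exactly where this record sits in the log.
    print(f"partition={message.partition} offset={message.offset} order={order}")
```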
Posted 2 days ago
8.0 years
4 - 7 Lacs
Hyderābād
On-site
Job Description: Senior Developer (Apache Flink, Kafka)
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai
Job Details
Overall 8+ years of experience in data processing.
2-3 years of experience with Apache Flink and Kafka, along with Microsoft Azure Cloud.
Write and optimize Flink jobs, which are applications that process data streams. This includes using Flink's APIs (e.g., DataStream API, Table API) to create complex data processing workflows.
Fine-tune Flink applications for performance, ensuring they handle high throughput and low latency. This involves adjusting parameters, optimizing resource usage, and addressing performance bottlenecks.
Design and build data pipelines to process batch data and analyze streaming data in real time. This involves setting up data ingestion, transformation and output processes.
The candidate will bring practical experience in Java as a full-stack developer and experience providing technical leadership.
Data analytics experience using SQL.
Preferred: knowledge of MongoDB, Kubernetes and GraphQL.
The candidate will bring practical experience on the Azure cloud platform in development and deployment, including data migration projects.
At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We’re committed to fostering an inclusive environment where everyone can thrive.
Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
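The Flink-job responsibility above can be made concrete with a small example. The sketch below uses PyFlink's Table API (the posting names both the DataStream and Table APIs) to read a Kafka topic and aggregate over tumbling windows; topic and field names are placeholders, and the Kafka SQL connector jar is assumed to be on the Flink classpath.

```python
# Hedged sketch of a streaming Flink job via the Table API: consume JSON
# events from Kafka and compute per-user totals over 1-minute windows.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source table backed by the Kafka SQL connector; the watermark bounds
# event-time lateness at 5 seconds.
t_env.execute_sql("""
    CREATE TABLE events (
        user_id STRING,
        amount  DOUBLE,
        ts      TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'events',
        'properties.bootstrap.servers' = 'broker:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Continuous tumbling-window aggregation; .print() streams results to stdout.
t_env.execute_sql("""
    SELECT user_id,
           TUMBLE_START(ts, INTERVAL '1' MINUTE) AS window_start,
           SUM(amount) AS total
    FROM events
    GROUP BY user_id, TUMBLE(ts, INTERVAL '1' MINUTE)
""").print()
```

Tuning such a job for throughput and latency would then involve parallelism, checkpoint intervals, and state-backend configuration, which this sketch leaves at defaults.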
Posted 2 days ago
5.0 years
6 - 9 Lacs
Hyderābād
On-site
Senior Software Engineer
Company Profile
LSEG (London Stock Exchange Group) is a world-leading financial markets infrastructure and data business. We deliver services across Data & Analytics, Capital Markets, and Post Trade. Backed by three hundred years of experience, innovative technologies, and a team of over 23,000 people in 70 countries, our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth.
Role Profile
In this role, you'll be joining our Primary Markets Services Team, responsible for developing and supporting cloud-based products and services crafted by the LSE. LSE offers flexible working, with most engineers spending 3 days in the office a week.
Tech Profile
5+ years coding experience and proficiency in a high-level language such as Java.
Knowledge of cloud, especially AWS.
Experience of backend system development and support.
2+ years of working with at least one RDBMS (Oracle, SQL Server or PostgreSQL).
Familiarity with the Linux/Unix platform, backend systems, web application servers (e.g. Apache, NGINX), and DevOps tools (e.g. CI/CD, Git, etc.).
Good understanding of software engineering principles and object-oriented design.
Familiarity with Agile development processes and tools (e.g. JIRA, Confluence etc.).
Ability to take ownership of complex problems, come up with good solutions and deliver them.
Preferred Skills and Experience
Experience of handling production issues.
Experience of mentoring junior team members.
Education and Professional Skills
BS/MS degree in Computer Science, Engineering or a related subject.
Advanced English reading/writing capability required.
Strong communication and articulation skills.
Curiosity about new technologies and tools, creative thinking and initiative taking.
Detailed Responsibilities
Work on any number of engineering projects within Primary Markets.
Tackle engineering tasks with little oversight.
Develop tools and applications by producing clean and efficient code; review others' code regularly.
Build relationships with team and colleagues, collaborating closely with the team.
Communicate with transparency and precision in a concise format.
Take initiative to improve systems regularly.
Take initiative to develop knowledge in technology products and tools through on-the-job learning, certifications and projects.
LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions.
Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce. You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity.
LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives.
We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs.
Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) (we) may hold about you, what it’s used for, and how it’s obtained, your rights and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
Posted 2 days ago
5.0 years
7 - 9 Lacs
Hyderābād
On-site
Location: Hyderabad, Telangana
Time type: Full time
Job level: Senior Associate
Job type: Regular
Category: Technology Consulting
ID: JR111910
About us
We are the leading provider of professional services to the middle market globally. Our purpose is to instill confidence in a world of change, empowering our clients and people to realize their full potential. Our exceptional people are the key to our unrivaled, inclusive culture and talent experience, and our ability to be compelling to our clients. You’ll find an environment that inspires and empowers you to thrive both personally and professionally. There’s no one like you and that’s why there’s nowhere like RSM.
Snowflake Engineer
We are currently seeking an experienced Snowflake Engineer for our Data Analytics team. This role involves designing, building, and maintaining our Snowflake cloud data warehouse. Candidates should have strong Snowflake, SQL, and cloud data solutions experience.
Responsibilities
Design, develop, and maintain efficient and scalable data pipelines in Snowflake, encompassing data ingestion, transformation, and loading (ETL/ELT).
Implement and manage Snowflake security, including role-based access control, network policies, and data encryption.
Develop and maintain data models optimized for analytical reporting and business intelligence.
Collaborate with data analysts, scientists, and stakeholders to understand data requirements and translate them into technical solutions.
Monitor and troubleshoot Snowflake performance, identifying and resolving bottlenecks.
Automate data engineering processes using scripting languages (e.g., Python, SQL) and orchestration tools (e.g., Airflow, dbt).
Design, develop, and deploy APIs within Snowflake using stored procedures and user-defined functions (UDFs).
Lead and mentor a team of data engineers and analysts, providing technical guidance, coaching, and professional development opportunities.
Stay current with the latest Snowflake features and best practices.
Contribute to the development of data engineering standards and best practices.
Document data pipelines, data models, and other technical specifications.
Qualifications
Bachelor’s degree or higher in Computer Science, Information Technology, or a related field.
A minimum of 5 years of experience in data engineering and management, including over 3 years of working with Snowflake.
Strong understanding of data warehousing concepts, including dimensional modeling, star schemas, and snowflake schemas.
Proficiency in SQL and experience with data transformation and manipulation.
Experience with ETL/ELT tools and processes.
Experience with Apache Iceberg.
Strong analytical and problem-solving skills.
Excellent communication and collaboration skills.
Preferred qualifications
Snowflake certifications (e.g., SnowPro Core Certification).
Experience with scripting languages (e.g., Python) and automation tools (e.g., Airflow, dbt).
Experience with cloud platforms (e.g., AWS, Azure, GCP).
Experience with data visualization tools (e.g., Tableau, Power BI).
Experience with Agile development methodologies.
Experience with Snowflake Cortex, including Cortex Analyst, Arctic TILT, and Snowflake AI & ML Studio.
At RSM, we offer a competitive benefits and compensation package for all our people. We offer flexibility in your schedule, empowering you to balance life’s demands, while also maintaining your ability to serve clients. Learn more about our total rewards at https://rsmus.com/careers/india.html.
RSM does not tolerate discrimination and/or harassment based on race; colour; creed; sincerely held religious beliefs, practices or observances; sex (including pregnancy or disabilities related to nursing); gender (including gender identity and/or gender expression); sexual orientation; HIV Status; national origin; ancestry; familial or marital status; age; physical or mental disability; citizenship; political affiliation; medical condition (including family and medical leave); domestic violence victim status; past, current or prospective service in the Indian Armed Forces; Indian Armed Forces Veterans, and Indian Armed Forces Personnel status; pre-disposing genetic characteristics or any other characteristic protected under applicable provincial employment legislation. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process and/or employment/partnership. RSM is committed to providing equal opportunity and reasonable accommodation for people with disabilities. If you require a reasonable accommodation to complete an application, interview, or otherwise participate in the recruiting process, please send us an email at careers@rsmus.com.
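The ELT responsibility described above commonly follows a stage-copy-merge shape in Snowflake. Below is a minimal sketch via the official Python connector; the account, credentials, file path, and table/schema names are placeholders, not RSM's actual objects.

```python
# Hedged sketch of an ELT step in Snowflake: stage a local file, COPY it into
# a raw table, then MERGE the raw rows into a curated table (upsert).
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",  # placeholders
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
try:
    # PUT uploads the file to the table's implicit stage (@%ORDERS_RAW).
    cur.execute("PUT file:///data/orders.csv @%ORDERS_RAW")
    cur.execute("""
        COPY INTO ORDERS_RAW FROM @%ORDERS_RAW
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # Idempotent upsert from raw into the curated layer.
    cur.execute("""
        MERGE INTO CURATED.ORDERS t
        USING RAW.ORDERS_RAW s ON t.ORDER_ID = s.ORDER_ID
        WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT
        WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT)
                              VALUES (s.ORDER_ID, s.AMOUNT)
    """)
finally:
    cur.close()
    conn.close()
```

In practice the same statements would usually be wrapped in dbt models or an Airflow DAG rather than raw cursor calls; the connector is shown here only to keep the sketch self-contained.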
Posted 2 days ago