6.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Staff Technical Product Manager

Are you excited about the opportunity to lead a team within an industry leader in Energy Technology? Are you passionate about improving capabilities, efficiency, and performance? Join our Digital Technology Team!

As a Staff Technical Product Manager, you will operate in lock-step with product management to create a clear strategic direction for build needs and convey that vision to the service's scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories, and partner with the team to determine what can be delivered by balancing the need for new features, defects, and technical debt.

Partner with the best
We are seeking a candidate with a strong background in business analysis, team leadership, and data architecture, along with hands-on development skills. The ideal candidate will excel at creating roadmaps, planning with prioritization, allocating resources, delivering key items, and seamlessly integrating the perspectives of various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers. You should be a results-oriented leader, capable of building and executing an aligned strategy while leading data teams and cross-functional teams to meet delivery timelines.

As a Staff Technical Product Manager, you will be responsible for:
- Demonstrating wide and deep knowledge in data engineering, data architecture, and data science, with the ability to guide, lead, and work with the team to drive to the right solution.
- Engaging frequently (80%) with the development team: facilitating discussions, providing clarification, story acceptance and refinement, testing, and validation; contributing to design activities and decisions; familiarity with waterfall and Agile scrum frameworks.
- Owning and managing the backlog, continuously ordering and prioritizing so that 1-2 sprints/iterations of backlog are always ready.
- Collaborating with UX on design decisions, demonstrating a deep understanding of the technology stack and its impact on the final product.
- Conducting customer and stakeholder interviews and elaborating on personas.
- Demonstrating expert-level skill in problem decomposition and the ability to navigate ambiguity.
- Partnering with the Service Owner to ensure a healthy development process and clear tracking metrics that form a standard, trustworthy way of providing customer support.
- Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources.
- Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
- Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis.
- Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis.
- Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness.
- Implementing and maintaining data governance and security measures to protect sensitive data.
- Monitoring and troubleshooting data infrastructure, performing root cause analysis, and implementing necessary fixes.

Fuel your passion
To be successful in this role you will:
- Have a Bachelor's or higher degree in Computer Science, Information Systems, or a related field.
- Have a minimum of 6-10 years of proven experience as a Data Engineer or in a similar role, working with large-scale data processing and storage systems.
- Have proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
- Have extensive knowledge of SAP systems, T-codes, data pipelines in SAP, and Databricks-related technologies.
- Have experience building complex jobs for SCD-type mappings using ETL tools such as PySpark, Talend, or Informatica (see the sketch after this listing).
- Have experience with data visualization and reporting tools (e.g., Tableau, Power BI).
- Have strong problem-solving and analytical skills, with the ability to handle complex data challenges.
- Have excellent communication and collaboration skills to work effectively in a team environment.
- Have experience in data modeling, data warehousing, and ETL principles.
- Have familiarity with cloud platforms like AWS, Azure, or GCP and their data services (e.g., S3, Redshift, BigQuery).
- Have advanced knowledge of distributed computing and parallel processing.
- Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
- Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Certification in relevant technologies or data engineering disciplines.
- Working knowledge of Databricks, Dremio, and SAP is highly preferred.

Work in a way that works for you
We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable):
- Working flexible hours: flexing the times when you work in the day to help you fit everything in and work when you are the most productive.

Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.

Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect:
- Contemporary work-life balance policies and wellbeing activities.

About Us
With operations in over 120 countries, we provide better solutions for our customers and richer opportunities for our people. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050 and we're always looking for the right people to help us get there. People who are as passionate as we are about making energy safer, cleaner, and more efficient.

Join Us
Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will challenge and inspire you!

About Us:
We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward, making it safer, cleaner, and more efficient for people and the planet.

Join Us:
Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let's come together and take energy forward.
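To make the SCD requirement above concrete, here is a minimal PySpark sketch of a Type 2 slowly changing dimension update. The dim_customer and stg_customer tables, the customer_id key, and the tracked address column are all invented for illustration; this shows the general pattern, not any employer's actual pipeline.

```python
# Hedged SCD Type 2 sketch: expire changed rows, append new current versions.
# Assumes the staging and dimension schemas share customer_id and address.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

current = spark.table("dim_customer")    # existing dimension rows (assumed)
incoming = spark.table("stg_customer")   # today's extract (assumed)

# Current rows whose tracked attribute changed in the new extract
changed = (
    incoming.alias("n")
    .join(current.alias("c"),
          F.col("n.customer_id") == F.col("c.customer_id"))
    .where(F.col("c.is_current") & (F.col("n.address") != F.col("c.address")))
)

# Expire the superseded versions ...
expired = (
    changed.select("c.*")
    .withColumn("is_current", F.lit(False))
    .withColumn("end_date", F.current_date())
)

# ... and build the new current versions; both frames would then be
# merged back into the dimension table.
fresh = (
    changed.select("n.*")
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)
```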
Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. R147951
Posted 3 weeks ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Lead Software Engineer - Hyderabad / Gurugram (Onsite/Hybrid)
Experience: 10+ Years
Employment Type: Full-Time
Industry: Software / Product Development

Job Summary
We are hiring an experienced Lead Software Engineer (Tech Lead) to lead the development and architecture of scalable, high-performance software solutions for SmartFM, an advanced facility management platform powered by AI/ML. SmartFM processes large volumes of data from various sources to provide a unified operational dashboard with real-time metrics, alerts, insights, and actionable recommendations for optimizing building operations. This role involves defining the technical roadmap, writing high-quality code, guiding application modernization, and leading a cross-functional engineering team.

Key Responsibilities
- Lead the architecture, design, and implementation of scalable, maintainable solutions.
- Write, review, and optimize clean, efficient code using modern tech stacks.
- Define and execute the product's technical roadmap in alignment with business goals.
- Collaborate closely with Product, QA, DevOps, and Support teams.
- Drive best practices in coding, automated testing, and deployment.
- Troubleshoot and resolve technical issues in production environments.
- Guide modernization efforts, including migration from a monolith to microservices.
- Mentor team members and foster a culture of engineering excellence.
- Evaluate emerging technologies and recommend improvements.

Technical Stack
- Frontend: React.js
- Backend: Node.js, Nest.js, C#, Java/Spring
- Databases: MS SQL, T-SQL, MongoDB (schema design, pipelines, aggregation; see the sketch after this listing)
- Cloud Platforms: Microsoft Azure (primary), AWS
- DevOps Tools: Azure DevOps (ADO), Jenkins, Docker, Azure Pipelines
- Architecture: Microservices, REST APIs, always-on systems
- Mobile Development: Experience building and maintaining mobile apps
- AI/ML: Familiarity with AI/ML-driven platforms (hands-on ML not mandatory)

Additional Skills (Preferred)
- Experience with data engineering tools such as Hadoop, Spark, Kafka
- Familiarity with data warehouses: Redshift, BigQuery, Snowflake
- Exposure to other languages and frameworks: .NET, AngularJS, Python, Go, Swift
- Experience with IBM StreamSets is a plus
- Strong understanding of Agile methodologies

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Systems, Mathematics, or a related field
- Proven experience in leading engineering teams and delivering enterprise-grade software
- Strong communication, problem-solving, and leadership skills
- High adaptability to new technologies and a continuous learning mindset
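As a small illustration of the MongoDB aggregation work in the stack above, here is a hedged pymongo sketch computing a dashboard-style metric. The smartfm database, alerts collection, and field names are invented, not SmartFM's actual schema.

```python
# Count open alerts per building over the last 24 hours via an
# aggregation pipeline (collection and fields are illustrative only).
from datetime import datetime, timedelta, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection
alerts = client["smartfm"]["alerts"]

since = datetime.now(timezone.utc) - timedelta(hours=24)
pipeline = [
    {"$match": {"status": "open", "raisedAt": {"$gte": since}}},
    {"$group": {"_id": "$buildingId", "openAlerts": {"$sum": 1}}},
    {"$sort": {"openAlerts": -1}},
]
for row in alerts.aggregate(pipeline):
    print(row["_id"], row["openAlerts"])
```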
Posted 3 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for an Associate Architect / Architect / Technical Lead - Data Engineering with strong hands-on expertise in Data Warehousing (DWH), ETL, and cloud platforms. The ideal candidate will bring deep technical knowledge, strategic thinking, and leadership capabilities to help drive data engineering initiatives across cloud and on-premise environments.

Experience:
- Relevant Data Architecture: 8+ Years
- Relevant DWH/ETL/BI: 10+ Years
- Overall IT Experience: 10 to 15 Years

Qualification: Bachelor's/Master's Degree in Computer Science or a related field.

Roles and responsibilities:
- Solid understanding of databases, with SQL experience and data modelling skills.
- Understanding of DWH architecture (EDW/DM concepts, star and snowflake schemas).
- Strong experience designing and building data pipelines on the Azure cloud stack.
- Strong experience with at least one of the cloud services:
  - Azure: Data Fabric, Data Explorer, Data Factory, Databricks, Synapse, and Azure SQL Data Warehouse
  - AWS: Glue, Athena, S3, Redshift, RDS
  - GCP: BigQuery, Bigtable, Dataproc, Dataprep
- Expert knowledge and experience with any one of the ETL tools/technologies: Informatica, SSIS, Talend, etc.
- Knowledge and experience with DevOps and CI/CD pipelines.
- Expertise with on-premise database environments such as Oracle, SQL Server, MySQL, and/or Postgres.
- Experience with, or knowledge of, cloud data warehouses such as Snowflake.
- Experience with data visualization using Power BI, SSRS, or Tableau.
- Hands-on experience building or working with Medallion, Data Lake, and Lakehouse architectures.
- Demonstrated problem solving, with experience providing business insights from large data sets.
- Interpret data, analyze results using statistical techniques, and provide ongoing reports.
- Excellent oral and written communication skills, and the ability to provide strategic direction to both technical and business organizations.
- Ability to articulate complex data solutions concisely and with clarity at senior management level.
- Experience implementing, managing, and supporting data warehouse projects or applications.
- Experience leading and delivering full-cycle implementation projects related to Business Intelligence.
- Strong team management and stakeholder management skills.
- Strong attention to detail and accuracy, with the ability to meet deadlines with short lead times.
- Knowledge of application development, APIs, microservices, and integration components.
Posted 3 weeks ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
🚀 We're Hiring: Data Analyst - Immediate Joiners Preferred!
📍 Location: Chennai (On-site)
🕒 Experience: 7+ years
🛠️ Skills: SQL, Tableau, Airflow, Python programming, cloud data services exposure, file formats
🏢 Preferred Background: Product-based companies
🎓 Alumni Preference: Anna University

Job Description:
We are seeking a Data Analyst with strong expertise in SQL and Tableau to join our growing team. The ideal candidate will have a passion for data, an analytical mindset, and the ability to transform complex data sets into actionable insights that support business decisions.

Key Responsibilities:
✅ Extract, clean, and analyze large datasets using SQL
✅ Develop and maintain interactive dashboards and reports in Tableau
✅ Collaborate with cross-functional teams to understand data needs and provide insights
✅ Monitor data quality and ensure data accuracy across all reports
✅ Support ad hoc analysis and data requests from stakeholders
✅ Present findings and recommendations to non-technical audiences
✅ Ingest and process data from diverse sources including APIs, files (CSV, JSON, Parquet), and databases
✅ Work with cloud data services (AWS/GCP/Azure) for scalable data processing
✅ Build and schedule workflows using Apache Airflow (see the sketch after this listing)
✅ Ensure data quality, consistency, and reliability across datasets
✅ Collaborate with engineering, product, and business teams to deliver actionable insights

📌 What we're looking for:
🔹 8+ years of experience as a Data Analyst or in a similar analytical role
🔹 Strong command of SQL for querying and manipulating data
🔹 Proficiency in building dashboards and visualizations using Tableau
🔹 Excellent analytical, problem-solving, and communication skills
🔹 Excellent communication skills, with the ability to convey complex data insights
🔹 Experience working in a product-based company is preferred
🔹 Bachelor's degree in a related field; Anna University alumni preferred
🔹 Experience with cloud platforms and services (e.g., AWS S3, Redshift, BigQuery)
🔹 Hands-on experience with Airflow or similar orchestration tools
🔹 Solid understanding of file formats like Parquet, JSON, and CSV
🔹 Experience integrating data from various sources: APIs, flat files, RDBMS, etc.
🔹 Immediate joiners strongly preferred

Why Join Us?
- Opportunity to work on impactful, data-driven projects
- Collaborative and innovative work environment
- Growth-oriented culture with strong career development support

To Apply: Send your updated resume to 📧 gowri.k@invictusdata.ai
Ready to turn data into decisions? Let's talk! 📩 Apply now or reach out directly if you or someone you know is a great fit. (Mail: Gowr.k@invictusdata.ai)
#DataAnalyst #HiringNow #SQL #Tableau #ChennaiJobs #ImmediateJoiners #AnnaUniversity #ProductBased #Careers #InvictusData #Invictushiring
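For reference alongside the Airflow requirement above, here is a minimal sketch of a scheduled ingest-then-validate DAG using the Airflow 2.4+ API; the file paths, columns, and task logic are placeholders, not this team's actual workflow.

```python
# Hedged two-task DAG: land a CSV drop as Parquet, then validate it.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
import pandas as pd

def ingest():
    df = pd.read_csv("/data/incoming/sales.csv")        # hypothetical path
    df.to_parquet("/data/staging/sales.parquet")

def validate():
    df = pd.read_parquet("/data/staging/sales.parquet")
    assert df["order_id"].notna().all(), "null order ids found"

with DAG(
    dag_id="daily_sales_refresh",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",        # `schedule` requires Airflow 2.4 or newer
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)
    ingest_task >> validate_task
```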
Posted 3 weeks ago
8.0 years
0 Lacs
Greater Bengaluru Area
On-site
About the Role
The newly created Global Business Technology (GBT) team at Chargebee is at the forefront of all major Chargebee growth and strategic initiatives. As such, we are looking to staff the team with the top talent at the organization. We are looking for a Senior Business Analyst with deep experience in Finance operations and data infrastructure, and a strong understanding of how AI/ML can drive smarter decision-making and higher efficiency in a high-growth SaaS/Fintech environment. Reporting to the Senior Manager of the Business Systems Solutions team, this hands-on role bridges the gap between business stakeholders and technical teams, helping define and execute data-driven solutions that power our strategic goals. The role will also encompass a degree of hands-on configuration and testing of changes to these systems to allow for future scalability, growth, and standardization.

Key Responsibilities
- Partner with Finance, Revenue Operations, and GTM teams to translate business requirements into scalable technology, process, and data solutions.
- Develop comprehensive business requirements documentation into user stories and process maps for system enhancements and data initiatives.
- Promote standardised/out-of-the-box solutions where possible, and partner with engineering and product teams where solutions are non-standard.
- Lead initiatives to improve financial analytics, forecasting models, and reporting accuracy using cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Drive AI/ML adoption by identifying use cases for automation, predictive analytics, and optimization (e.g., churn prediction, dynamic pricing).
- Collaborate with Data Engineering and BI teams to ensure data models and pipelines support evolving business needs.
- Champion self-service analytics and data literacy across departments.
- Conduct root cause analysis, opportunity sizing, and scenario modeling to inform high-stakes decisions.
- Provide analytical support during audits, budgeting, and board-level reporting cycles.

Required Qualifications
- 5-8+ years of experience as a Business Analyst or in a similar role, preferably in a SaaS or Fintech company.
- Strong understanding of Finance functions (FP&A, Revenue Recognition, Billing, and SaaS metrics like LTV, CAC, and ARR; an illustrative ARR query follows this listing).
- Hands-on experience with data warehousing tools (Snowflake, BigQuery, Redshift) and SQL proficiency.
- Familiarity with AI/ML concepts, models, and their practical application in business workflows.
- Proven ability to work across cross-functional teams, including Engineering, Finance, Product, and Ops.
- Ability to pivot between high-level business discussions and in-depth technical discussions, keeping strategic goals in mind at all times.
- Advanced Excel/Google Sheets skills, plus experience with Jira and BI tools (Tableau, Looker, Power BI).
- Excellent communication, storytelling, and documentation skills.

Preferred Qualifications
- Experience with AI platforms or LLM-based tools, leveraging them in line with strategic business goals.
- Exposure to financial systems like NetSuite, Chargebee, Stripe.
- Experience working in Agile/Scrum environments.
- Knowledge of regulatory and compliance requirements relevant to financial data.
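As a hedged illustration of the SaaS-metric SQL this role leans on, the query below computes monthly recurring revenue and a simple x12 run-rate ARR from a hypothetical subscriptions table; the schema is invented for the example, not Chargebee's data model.

```python
# Illustrative warehouse query for MRR and run-rate ARR; the
# `subscriptions` table, its columns, and the status values are assumed.
ARR_QUERY = """
SELECT
    DATE_TRUNC('month', s.period_start) AS month,
    SUM(s.mrr_amount)                   AS mrr,
    SUM(s.mrr_amount) * 12              AS run_rate_arr
FROM subscriptions s
WHERE s.status = 'active'
GROUP BY 1
ORDER BY 1;
"""
```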
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Senior Database Administrator
Job Type: Full Time
Experience Required: 8+ Years

Job Description:
We are seeking an experienced and strategic Senior Database Administrator (DBA) with deep expertise in SQL/MySQL, AWS Redshift, and infrastructure automation using Terraform. This role requires someone who can design scalable data solutions, lead database optimization efforts, and support modern data platforms in a cloud-native environment.

Key Responsibilities:
- Design, deploy, and manage highly available, scalable databases, with a strong emphasis on SQL, MySQL, and AWS Redshift.
- Implement and maintain infrastructure as code using Terraform for automating database and AWS infrastructure provisioning.
- Optimize performance and reliability across relational and NoSQL databases, including Redshift, MySQL, SQL Server, DynamoDB, and Neo4j.
- Lead data platform integration efforts with applications developed in Node.js and other backend technologies.
- Manage real-time and batch data pipelines using tools like Qlik Replicate and Kafka.
- Architect and maintain workflows using a range of AWS services, such as Kinesis, Lambda, Glue, S3, Step Functions, SNS, SQS, EventBridge, EC2, CloudFormation, and API Gateway.
- Establish robust observability using tools like New Relic for database monitoring and performance management.

Required Skills and Qualifications:
- 8+ years of professional experience in database administration and data engineering.
- Extensive hands-on experience with SQL and MySQL, and managing AWS Redshift in production environments.
- Strong command of Terraform for infrastructure automation and provisioning.
- Proficiency in PowerShell and Python for scripting and automation.
- Solid experience with Node.js or a similar programming language for integration.
- Working knowledge of Neo4j, DynamoDB, and SQL Server.
- Experience with Qlik Replicate and Kafka for data replication and streaming.
- Deep understanding of cloud architecture, event-driven systems, and serverless AWS environments.
- Proficiency with monitoring and observability tools such as New Relic.
- Familiarity with Okta for identity and access management.
- Excellent problem-solving and communication skills; ability to lead initiatives and mentor junior team members.

Education:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field is required.
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
India
Remote
Title: Data Engineer
Location: Remote, India
Experience: 3 to 6 yrs

Responsibilities:
- Work as part of a team to develop and implement procedures for secure and effective data management.
- Implement data definitions and data mappings, and provide support across ongoing integration activities.
- Work collaboratively with our Product Owners and Software Engineering teams to understand and define application and platform data requirements.
- Demonstrate an in-depth understanding of database structures and principles.
- Be an advocate for the work done by the team and share achievements at both a technical and business level with the wider technology group.
- Translate non-technical requirements into technical requirements.
- Provide clean, transformed data ready for analysis.
- Maintain data documentation and definitions.
- Train business users on how to use data visualisation tools.

Required Technical Knowledge:
- 3-5 years of experience.
- Good working knowledge of AWS data analytics services (e.g., Kinesis, Redshift) and monitoring platforms such as Datadog.
- Experience with scripting languages such as Python.
- Familiarity with serverless, event-driven technology such as AWS Lambda (see the sketch after this listing).
- Experience with AWS S3 buckets and AWS Glue.
- Must have used Python for data processing, building data pipelines, and data quality checks.
- Experience working with both SQL and NoSQL databases.
- Familiarity with GitLab CI/CD to support the deployment of applications across cloud infrastructure.
- Good working knowledge of JIRA and Confluence.
- Familiarity with cloud log management and monitoring data platforms.
- Good understanding of Terraform.
- The ability and appetite to learn and use a wide variety of open-source technologies and tools.
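A brief sketch of the serverless, event-driven pattern named above: an AWS Lambda handler that runs a simple data-quality check when an object lands in S3. The bucket layout, quarantine prefix, and required customer_id column are assumptions for illustration.

```python
# Hedged Lambda handler: read a newly created CSV from S3, quarantine it
# if a required column contains nulls. Names are placeholders.
import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(obj["Body"])

    if df["customer_id"].isna().any():      # hypothetical required column
        s3.copy_object(
            Bucket=bucket,
            CopySource={"Bucket": bucket, "Key": key},
            Key=f"quarantine/{key}",
        )
        return {"status": "quarantined", "rows": len(df)}
    return {"status": "ok", "rows": len(df)}
```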
Posted 3 weeks ago
2.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings from Infosys BPM Ltd,

We are hiring for WalkMe, ETL Testing + Python Programming, Automation Testing with Java/Selenium/BDD/Cucumber, Test Automation using Java and Selenium (with knowledge of testing processes and SQL), ETL DB Testing, and ETL Testing Automation skills. Please walk in for an interview on 17th July 2025 at the Pune location.

Note: Please carry a copy of this email to the venue and make sure you register your application before attending the walk-in. Please use the link below to apply and register your application. Please mention your Candidate ID at the top of your resume.
https://career.infosys.com/jobdesc?jobReferenceCode=PROGEN-HRODIRECT-217814

Interview details
Interview Date: 17th July 2025
Interview Time: 10 AM till 1 PM
Interview Venue: Pune :: Hinjewadi Phase 1, Infosys BPM Limited, Plot No. 1, Building B1, Ground floor, Hinjewadi Rajiv Gandhi Infotech Park, Hinjewadi Phase 1, Pune, Maharashtra-411057

Please find the job descriptions below for your reference. Work from Office; a minimum of 2 years of project experience is mandatory.

Job Description: WalkMe
- Design, develop, and deploy WalkMe solutions to enhance user experience and drive digital adoption.
- Experience in task-based documentation, training, and content strategy.
- Experience working in a multi-disciplined team with geographically distributed co-workers.
- Working knowledge of technologies such as CSS and JavaScript.
- Project management and/or Jira experience.
- Experience developing in-app guidance using tools such as WalkMe.
- Strong experience in technical writing, instructional video, or guided learning in a software company.

Job Description: ETL Testing + Python Programming
- Experience in Data Migration Testing (ETL Testing), manual and automation, with Python programming.
- Strong at writing complex SQL for data migration validations.
- Work experience with the Agile Scrum methodology.
- Functional testing: UI test automation using Selenium and Java.
- Financial domain experience.
- Good to have: AWS knowledge.

Job Description: Automation Testing with Java, Selenium, BDD, Cucumber
- Hands-on experience in automation; Java, Selenium, BDD, and Cucumber expertise is mandatory.
- Banking domain experience is good to have; financial domain experience preferred.
- Automation talent with TOSCA skills and payment domain skills is preferable.

Job Description: Test Automation using Java and Selenium, with knowledge of testing processes and SQL
- Java, Selenium automation, SQL, testing concepts, Agile.
- Tools: Jira, ALM, IntelliJ.
- Functional testing: UI test automation using Selenium and Java.
- Financial domain experience.

Job Description: ETL DB Testing
- Strong experience in ETL testing, data warehousing, and business intelligence.
- Strong proficiency in SQL.
- Experience with ETL tools (e.g., Informatica, Talend, AWS Glue, Azure Data Factory).
- Solid understanding of data warehousing concepts, database systems, and quality assurance.
- Experience with test planning, test case development, and test execution.
- Experience writing complex SQL queries and using SQL tools is a must, with exposure to various data analytical functions.
- Familiarity with defect tracking tools (e.g., Jira).
- Experience with cloud platforms like AWS, Azure, or GCP is a plus.
- Experience with Python or other scripting languages for test automation is a plus.
- Experience with data quality tools is a plus.
- Experience in testing large datasets.
- Experience in agile development is a must.
- Understanding of Oracle Database and UNIX/VMC systems is a must.

Job Description: ETL Testing Automation
- Strong experience in ETL testing and automation.
- Strong proficiency in SQL and experience with relational databases (e.g., Oracle, MySQL, PostgreSQL, SQL Server).
- Experience with ETL tools and technologies (e.g., Informatica, Talend, DataStage, Apache Spark).
- Hands-on experience in developing and maintaining test automation frameworks.
- Proficiency in at least one programming language (e.g., Python, Java).
- Experience with test automation tools (e.g., Selenium, PyTest, JUnit).
- Strong understanding of data warehousing concepts and methodologies.
- Experience with CI/CD pipelines and version control systems (e.g., Git).
- Experience with cloud-based data warehouses like Snowflake, Redshift, or BigQuery is a plus.
- Experience with data quality tools is a plus.

REGISTRATION PROCESS:
The Candidate ID and SHL Test (AMCAT ID) are mandatory to attend the interview. Please follow the instructions below to successfully complete the registration. (Candidates without registration and assessment will not be allowed to interview.)

Candidate ID registration process:
STEP 1: Visit https://career.infosys.com/joblist
STEP 2: Click on "Register", provide the required details, and submit.
STEP 3: Once submitted, your Candidate ID (100XXXXXXXX) will be generated.
STEP 4: The Candidate ID will be shared to the registered email ID.

SHL Test (AMCAT ID) registration process:
This assessment is proctored, and talent is evaluated on basic analytics, English comprehension, and WriteX (email writing).
STEP 1: Visit: https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fautologin-talentcentral.shl.com%2F%3Flink%3Dhttps%3A%2F%2Famcatglobal.aspiringminds.com%2F%3Fdata%3DJTdCJTIybG9naW4lMjIlM0ElN0IlMjJsYW5ndWFnZSUyMiUzQSUyMmVuLVVTJTIyJTJDJTIyaXNBdXRvbG9naW4lMjIlM0ExJTJDJTIycGFydG5lcklkJTIyJTNBJTIyNDE4MjQlMjIlMkMlMjJhdXRoa2V5JTIyJTNBJTIyWm1abFpUazFPV1JsTnpJeU1HVTFObU5qWWpRNU5HWTFOVEU1Wm1JeE16TSUzRCUyMiUyQyUyMnVzZXJuYW1lJTIyJTNBJTIydXNlcm5hbWVfc3E5QmgxSWI5NEVmQkkzN2UlMjIlMkMlMjJwYXNzd29yZCUyMiUzQSUyMnBhc3N3b3JkJTIyJTJDJTIycmV0dXJuVXJsJTIyJTNBJTIyJTIyJTdEJTJDJTIycmVnaW9uJTIyJTNBJTIyVVMlMjIlN0Q%3D%26apn%3Dcom.shl.talentcentral%26ibi%3Dcom.shl.talentcentral%26isi%3D1551117793%26efr%3D1&data=05%7C02%7Comar.muqtar%40infosys.com%7Ca7ffe71a4fe4404f3dac08dca01c0bb3%7C63ce7d592f3e42cda8ccbe764cff5eb6%7C0%7C0%7C638561289526257677%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C0%7C%7C%7C&sdata=s28G3ArC9nR5S7J4j%2FV1ZujEnmYCbysbYke41r5svPw%3D&reserved=0
STEP 2: Click on "Start new test" and follow the instructions to complete the assessment.
STEP 3: Once completed, please make a note of the AMCAT ID (access your AMCAT ID by clicking the 3 dots in the top right corner of the screen).

NOTE: During registration, you'll be asked to provide the following information:
- Personal details: name, email address, mobile number, PAN number.
- Availability: acknowledgement of work schedule preferences (shifts, work from office, rotational weekends, 24/7 availability, transport boundary) and reason for career change.
- Employment details: current notice period and total annual compensation (CTC) in the format 390000 - 4 LPA (example).
- Candidate information: 10-digit Candidate ID starting with 100XXXXXXX, gender, source (e.g., vendor name, Naukri/LinkedIn/Foundit, or direct), and location.
- Interview mode: Walk-in.

Attempt all questions in the SHL Assessment app. The assessment is proctored, so choose a quiet environment. Use a headset or Bluetooth headphones for clear communication. A passing score is required for further interview rounds.
Five or more tab toggles, multiple faces detected, face not detected, or any malpractice will result in rejection. Once you have finished, submit the assessment and make a note of the AMCAT ID (15 digits) used for the assessment.

Documents to carry:
- A note of your Candidate ID and AMCAT ID, along with your registered email ID.
- 2 sets of your updated resume/CV (hard copy).
- Original ID proof for security clearance.
- Individual headphones/Bluetooth headset for the interview.

Pointers to note:
- Please do not carry laptops/cameras to the venue, as these will not be allowed due to security restrictions.
- An original government ID card is a must for security clearance.

Regards,
Infosys BPM Recruitment team
Posted 3 weeks ago
2.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 2-7 Years
Location: Noida, Gurugram, Indore, Pune, Bangalore
Notice Period: Currently serving or immediate joiners

- 2-6 years of good hands-on exposure to Big Data technologies: PySpark (DataFrame and Spark SQL), Hadoop, and Hive (see the sketch after this listing).
- Good hands-on experience with Python and Bash scripts.
- Good understanding of SQL and data warehouse concepts.
- Strong analytical, problem-solving, data analysis, and research skills.
- Demonstrable ability to think outside the box and not be dependent on readily available tools.
- Excellent communication, presentation, and interpersonal skills are a must.

Good to have:
- Hands-on experience with cloud-platform Big Data technologies (i.e., IAM, Glue, EMR, Redshift, S3, Kinesis).
- Orchestration with Airflow, and experience with any job scheduler.
- Experience migrating workloads from on-premise to cloud, and cloud-to-cloud migrations.
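To illustrate the DataFrame/Spark SQL duality called for above, here is the same aggregation written both ways; the input path and columns are placeholders rather than any real dataset.

```python
# One aggregation, expressed via the DataFrame API and via Spark SQL.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("df_vs_sql").getOrCreate()
orders = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical

# DataFrame API
by_region = (
    orders.groupBy("region")
          .agg(F.sum("amount").alias("total_amount"))
)

# Equivalent Spark SQL over a temp view
orders.createOrReplaceTempView("orders")
by_region_sql = spark.sql(
    "SELECT region, SUM(amount) AS total_amount FROM orders GROUP BY region"
)
```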
Posted 3 weeks ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 7+ Years
Location: Noida, Sector 64
Contract to hire

Key Responsibilities:

Data Architecture Design:
- Design, develop, and maintain the enterprise data architecture, including data models, database schemas, and data flow diagrams.
- Develop a data strategy and roadmap that aligns with business objectives and ensures the scalability of data systems.
- Architect both transactional (OLTP) and analytical (OLAP) databases, ensuring optimal performance and data consistency.

Data Integration & Management:
- Oversee the integration of disparate data sources into a unified data platform, leveraging ETL/ELT processes and data integration tools.
- Design and implement data warehousing solutions, data lakes, and/or data marts that enable efficient storage and retrieval of large datasets.
- Ensure proper data governance, including the definition of data ownership, security, and privacy controls in accordance with compliance standards (GDPR, HIPAA, etc.).

Collaboration with Stakeholders:
- Work closely with business stakeholders, including analysts, developers, and executives, to understand data requirements and ensure that the architecture supports analytics and reporting needs.
- Collaborate with DevOps and engineering teams to optimize database performance and support large-scale data processing pipelines.

Technology Leadership:
- Guide the selection of data technologies, including databases (SQL/NoSQL), data processing frameworks (Hadoop, Spark), cloud platforms (Azure is a must), and analytics tools.
- Stay updated on emerging data management technologies, trends, and best practices, and assess their potential application within the organization.

Data Quality & Security:
- Define data quality standards and implement processes to ensure the accuracy, completeness, and consistency of data across all systems.
- Establish protocols for data security, encryption, and backup/recovery to protect data assets and ensure business continuity.

Mentorship & Leadership:
- Lead and mentor data engineers, data modelers, and other technical staff in best practices for data architecture and management.
- Provide strategic guidance on data-related projects and initiatives, ensuring that all efforts are aligned with the enterprise data strategy.

Required Skills & Experience:

Extensive Data Architecture Expertise:
- Over 7 years of experience in data architecture, data modeling, and database management.
- Proficiency in designing and implementing relational (SQL) and non-relational (NoSQL) database solutions.
- Strong experience with data integration tools (Azure tools are a must, plus any other third-party tools), ETL/ELT processes, and data pipelines.

Advanced Knowledge of Data Platforms:
- Expertise in the Azure cloud data platform is a must; experience with other platforms such as AWS (Redshift, S3) and/or Google Cloud Platform (BigQuery, Dataproc) is a bonus.
- Experience with big data technologies (Hadoop, Spark) and distributed systems for large-scale data processing.
- Hands-on experience with data warehousing solutions and BI tools (e.g., Power BI, Tableau, Looker).

Data Governance & Compliance:
- Strong understanding of data governance principles, data lineage, and data stewardship.
- Knowledge of industry standards and compliance requirements (e.g., GDPR, HIPAA, SOX) and the ability to architect solutions that meet these standards.

Technical Leadership:
- Proven ability to lead data-driven projects, manage stakeholders, and drive data strategies across the enterprise.
- Strong programming skills in languages such as Python, SQL, R, or Scala.

Certification: Azure Certified Solutions Architect, Data Engineer, or Data Scientist certifications are mandatory.

Pre-Sales Responsibilities:
- Stakeholder Engagement: Work with product stakeholders to analyze functional and non-functional requirements, ensuring alignment with business objectives.
- Solution Development: Develop end-to-end solutions involving multiple products, ensuring security and performance benchmarks are established, achieved, and maintained.
- Proof of Concepts (POCs): Develop POCs to demonstrate the feasibility and benefits of proposed solutions.
- Client Communication: Communicate system requirements and solution architecture to clients and stakeholders, providing technical assistance and guidance throughout the pre-sales process.
- Technical Presentations: Prepare and deliver technical presentations to prospective clients, demonstrating how proposed solutions meet their needs and requirements.

Additional Responsibilities:
- Stakeholder Collaboration: Engage with stakeholders to understand their requirements and translate them into effective technical solutions.
- Technology Leadership: Provide technical leadership and guidance to development teams, ensuring the use of best practices and innovative solutions.
- Integration Management: Oversee the integration of solutions with existing systems and third-party applications, ensuring seamless interoperability and data flow.
- Performance Optimization: Ensure solutions are optimized for performance, scalability, and security, addressing any technical challenges that arise.
- Quality Assurance: Establish and enforce quality assurance standards, conducting regular reviews and testing to ensure robustness and reliability.
- Documentation: Maintain comprehensive documentation of the architecture, design decisions, and technical specifications.
- Mentoring: Mentor fellow developers and team leads, fostering a collaborative and growth-oriented environment.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experience: Minimum of 7 years of experience in data architecture, with a focus on developing scalable and high-performance solutions.
- Technical Expertise: Proficient in architectural frameworks, cloud computing, database management, and web technologies.
- Analytical Thinking: Strong problem-solving skills, with the ability to analyze complex requirements and design scalable solutions.
- Leadership Skills: Demonstrated ability to lead and mentor technical teams, with excellent project management skills.
- Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders.
Posted 3 weeks ago
5.0 years
0 Lacs
Greater Hyderabad Area
On-site
We are scaling an AI/ML-enabled enterprise SaaS solution that helps manage cash performance for large enterprises, including multiple Fortune 500 companies. You will own the architecture during the product's 1-to-10 journey in the FinTech AI space.

Senior Data Engineer (ETL) | 6-9 Years | Hyderabad (Hybrid) | B2B / SaaS - Fintech Experience
Preferences: Fintech experience and local candidates; final round is face-to-face at the Hyderabad office (required).
Education: Engineering and CS graduates from premium colleges (IIT / NIT / BITS / REC).
Interview Process: 3 technical sessions + 1 CTO round + 1 face-to-face managerial round (must).

Job Role:
- Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform.
- Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing (see the sketch after this listing).
- Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing.
- Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions.
- Design and implement data warehouse solutions that support analytical needs and machine learning applications.
- Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features.
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability.
- Optimize query performance across various database systems through indexing, partitioning, and query refactoring.
- Develop and maintain documentation for data models, pipelines, and processes.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Stay current with emerging technologies and best practices in data engineering.
- Perform independent research to understand product requirements and customer needs.
- Communicate effectively with project teams and other stakeholders; translate technical details for non-technical audiences.
- Be expert at creating architectural artifacts for the data warehouse.
- Manage the team and its effort: set expectations for the client and the team, and ensure all deliverables are delivered on time at the highest quality.

Required Skills:
- 5+ years of experience in data engineering or related roles, with a proven track record of building data pipelines and infrastructure.
- Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL.
- Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB.
- Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies.
- Experience with data warehousing concepts and technologies.
- Solid understanding of data modeling principles and best practices for both operational and analytical systems.
- Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning.
- Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack.
- Proficiency in at least one programming language (Python, Node.js, Java).
- Experience with version control systems (Git) and CI/CD pipelines.
- Bachelor's degree in Computer Science, Engineering, or a related field from a premium college (IIT / NIT / BITS / REC).
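One hop of the CDC-style pipeline described above might look like the following hedged sketch: consuming change events from Kafka and upserting them into PostgreSQL. The topic, table, credentials, and the flattened event shape are assumptions (real Debezium events nest the changed row under payload.after).

```python
# Hedged Kafka-to-Postgres upsert loop using kafka-python and psycopg2.
import json
import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "cdc.invoices",                               # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
conn = psycopg2.connect("dbname=warehouse user=etl")  # placeholder DSN

UPSERT = """
INSERT INTO invoices (id, customer_id, amount)
VALUES (%(id)s, %(customer_id)s, %(amount)s)
ON CONFLICT (id) DO UPDATE
SET customer_id = EXCLUDED.customer_id, amount = EXCLUDED.amount;
"""

for message in consumer:
    # Assumes the event is already flattened to {"id", "customer_id", "amount"}
    with conn.cursor() as cur:
        cur.execute(UPSERT, message.value)
    conn.commit()
```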
Posted 3 weeks ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About Us:
As India's fastest-growing D2C brand, we are at the forefront of innovation and transformation in the market. We're a well-funded, rapidly growing (we recently launched our 100th store), omnichannel D2C brand with a passionate and innovative team.

Job Summary:
We are seeking a Data Engineer to help us design, build, and maintain our BigQuery data warehouse by performing ETL operations and creating unified data models. You will work across various data sources to create a cohesive data infrastructure that supports our omnichannel D2C strategy.

Why Join Us:
Experience the exciting world of India's billion-dollar D2C market. As a well-funded, rapidly growing omnichannel D2C brand, we are committed to changing the way India sleeps and sits. You'll have the opportunity to work with a passionate and innovative team and make a real impact on our success.

Key Responsibilities:
- ETL Operations: Design, implement, and manage ETL processes to extract, transform, and load data from various sources into BigQuery (see the sketch after this listing).
- Data Warehousing: Build and maintain a robust data warehouse in BigQuery, ensuring data integrity, security, and performance.
- Data Modeling: Create and manage flat, unified data models using SQL and DBT to support business analytics and reporting needs.
- Performance Optimization: Optimize ETL processes and data models to ensure timely data delivery for reporting and analytics.
- Collaboration: Work closely with data analysts, product managers, and other stakeholders to understand data requirements and deliver actionable insights.
- Documentation: Maintain comprehensive documentation of data workflows, ETL processes, and data models for reference and onboarding.
- Troubleshooting: Monitor and troubleshoot data pipeline issues, ensuring timely resolution to minimize disruption to business operations.

Skills and Qualifications:
- Proficiency in SQL and experience with BigQuery.
- A minimum of 2 years of experience in data engineering or a similar role.
- Experience with data pipeline and ETL tools (e.g., Apache Airflow, Talend, AWS Glue).
- Familiarity with cloud platforms (e.g., AWS, Google Cloud, Azure) and their data services.
- Experience with data warehousing solutions (e.g., Amazon Redshift, Google BigQuery, Snowflake).
- Knowledge of data modeling, data architecture, and data governance best practices.
- Excellent problem-solving skills and attention to detail.
- Knowledge of DBT (Data Build Tool) for data transformation.
- Self-motivated, proactive, and highly accountable.
- Excellent communication skills to effectively convey technical concepts and solutions.
- Bonus point: prior experience in the e-commerce or D2C space.
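A minimal sketch of one such ETL step, assuming a hypothetical Cloud Storage extract and BigQuery table; it shows the standard google-cloud-bigquery load pattern, not this brand's actual pipeline.

```python
# Load a CSV extract from GCS into BigQuery; dataset, table, and URI
# are placeholders.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
)
load_job = client.load_table_from_uri(
    "gs://example-bucket/orders/2025-01-01.csv",  # hypothetical extract
    "analytics.orders_raw",                       # hypothetical table
    job_config=job_config,
)
load_job.result()  # block until the load completes
```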
Posted 3 weeks ago
8.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Analytics Data Engineer
Location: Gurgaon
Experience: 4-8 Years
Work Mode: Hybrid / Remote
Start Date: Immediate joiner preferred

Role Overview:
We are looking for an Analytics Data Engineer with strong hands-on experience in Looker and a proven ability to extract meaningful insights from product and behavioral data. The ideal candidate will also bring solid experience with analytics engineering tools like DBT, Snowflake, Airflow, and Fivetran (or similar) to build scalable and reliable data pipelines. This is a backfill for a critical role, and we are looking for someone who can hit the ground running and experiment with product data to uncover actionable insights for stakeholders.

Key Responsibilities:
1. Build and maintain Looker dashboards and reports for business and product teams.
2. Perform data modeling using LookML and manage Looker's semantic layer.
3. Collaborate with stakeholders to understand analytics needs and translate them into scalable solutions.
4. Design, build, and optimize data transformation workflows using DBT.
5. Develop and maintain ETL/ELT pipelines using Fivetran or similar tools.
6. Work with large datasets in Snowflake, ensuring performance and data integrity (see the sketch after this listing).
7. Conduct deep-dive analysis on product usage data to drive experimentation and decision-making.
8. Ensure best practices in code versioning, documentation, testing, and deployment.

Required Skills & Qualifications:
1. 4-8 years of experience in Business Intelligence and Data Engineering roles.
2. Strong hands-on experience with Looker (LookML, dashboarding, embedded analytics).
3. Solid understanding of data modeling and transformation using DBT.
4. Proficiency in Snowflake (or similar cloud data warehouses like Redshift or BigQuery).
5. Experience with Fivetran or other ELT tools.
6. Strong SQL skills and understanding of modern data stack architecture.
7. Ability to work independently in a fast-paced, product-focused environment.
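As a hedged example of the Snowflake work in responsibility 6, the snippet below queries weekly feature usage via the Snowflake Python connector; the account locator, credentials, and feature_events table are invented for illustration.

```python
# Query product-usage data in Snowflake (all connection details and the
# schema are placeholders).
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",           # hypothetical account locator
    user="analytics_bot",
    password="***",
    warehouse="ANALYTICS_WH",
    database="PRODUCT",
    schema="EVENTS",
)
cur = conn.cursor()
cur.execute("""
    SELECT feature_name, COUNT(DISTINCT user_id) AS weekly_users
    FROM feature_events
    WHERE event_ts >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    GROUP BY feature_name
    ORDER BY weekly_users DESC
""")
for feature, users in cur.fetchall():
    print(feature, users)
```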
Posted 3 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EXL, we go beyond capabilities to focus on collaboration and character, tailoring solutions to your unique needs, culture, goals, and technology environments. We specialize in transformation, data science, and change management to enhance efficiency, improve customer relationships, and drive revenue growth. Our expertise in analytics, digital interventions, and operations management helps you outperform the competition with sustainable models at scale. As your business evolution partner, we optimize data leverage for better business decisions and intelligence-driven operations. For more information, visit www.exlservice.com.

Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS services (Glue, Lambda, Step Functions, S3, Athena)

Role Description:
We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.

Responsibilities:
1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently (see the sketch after this listing).
2. Collaborate with analysts to understand data requirements and ensure data availability and quality.
3. Write and optimize SQL queries for data extraction, transformation, and loading.
4. Utilize Git for version control, ensuring proper documentation and tracking of code changes.
5. Design, implement, and manage scalable data lakes on AWS, including S3 or other relevant services, for efficient data storage and retrieval.
6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
7. Create interactive dashboards and data visualizations with Amazon QuickSight.
8. Automate workflows using AWS cloud services like EventBridge and Step Functions.
9. Monitor and optimize data processing workflows for performance and scalability.
10. Troubleshoot data-related issues and provide timely resolution.
11. Stay up to date with industry best practices and emerging technologies in data engineering.

Qualifications:
1. Bachelor's degree in Computer Science, Data Science, or a related field; a Master's degree is a plus.
2. Strong proficiency in PySpark and Python for data processing and analysis.
3. Proficiency in SQL for data manipulation and querying.
4. Experience with version control systems, preferably Git.
5. Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc.
6. Familiarity with Databricks and its concepts.
7. Excellent problem-solving skills and attention to detail.
8. Strong communication and collaboration skills to work effectively within a team.
9. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.

Preferred Skills:
1. Knowledge of data warehousing concepts and data modeling.
2. Familiarity with big data technologies like Hadoop and Spark.
3. AWS certifications related to data engineering.
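To ground responsibility 1, here is a skeleton of an AWS Glue PySpark job that reads from the Glue Data Catalog, adds a load timestamp, and writes Parquet to S3. The catalog database, table, and bucket names are assumptions for illustration.

```python
# Minimal Glue ETL job skeleton (names are placeholders).
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (hypothetical database/table)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="transactions")

# Transform with plain PySpark, then write curated Parquet to S3
df = dyf.toDF().withColumn("ingested_at", F.current_timestamp())
df.write.mode("append").parquet("s3://example-curated/transactions/")

job.commit()
```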
Posted 3 weeks ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We are committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

Pay and Benefits:
- Competitive compensation, including base pay and annual incentive.
- Comprehensive health and life insurance and well-being benefits, based on location.
- Pension / retirement benefits.
- Paid time off and personal/family care, and other leaves of absence when needed to support your physical, financial, and emotional well-being.
- DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays, and a third day unique to each team or employee).

The impact you will have in this role:
The Development family is responsible for crafting, designing, deploying, and supporting applications, programs, and software solutions. This may include research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities related to software products used internally or externally on product platforms supported by the firm. The software development process requires in-depth domain expertise in existing and emerging development methodologies, tools, and programming languages. Software Developers work closely with business partners and/or external clients in defining requirements and implementing solutions.

The Software Engineering role specializes in planning, detailing technical requirements, designing, developing, and testing all software systems and applications for the firm. It works closely with architects, product managers, project management, and end-users in the development and improvement of existing software systems and applications, proposing and recommending solutions that solve complex business problems.

Your Primary Responsibilities:
- Act as a technical expert on one or more applications utilized by DTCC.
- Work with the Business System Analyst to ensure designs satisfy functional requirements.
- Partner with Infrastructure to identify and deploy optimal hosting environments.
- Tune application performance to eliminate and reduce issues.
- Research and evaluate technical solutions consistent with DTCC technology standards.
- Align risk and control processes into day-to-day responsibilities to monitor and mitigate risk; escalate appropriately.
- Apply different software development methodologies depending on project needs.
- Contribute expertise to the design of components or individual programs, and participate in construction and functional testing.
- Support development teams with testing, troubleshooting, and production support.
- Build applications and construct unit test cases that ensure compliance with functional and non-functional requirements.
- Work with peers to mature ways of working, continuous integration, and continuous delivery.

Qualifications:
- Minimum of 7+ years of related experience.
- Bachelor's degree preferred, or equivalent experience.

Talents Needed for Success:
- Expert in Java/JEE and coding standards and methodologies.
- Expert knowledge of development concepts.
- Good design and coding skills in Web Services, Spring/Spring Boot, SOAP/REST APIs, and JavaScript frameworks for modern web applications.
- Builds collaborative teams across the organization.
- Communicates openly, keeping everyone across the organization informed.
- Solid understanding of HTML, CSS, and modern JavaScript.
- Experience with Angular v15+ and/or React.
- Experience integrating with database technologies such as Oracle, PostgreSQL, etc.
- Ability to write quality, self-validating code using unit tests and following TDD.
- Experience with Agile methodology and the ability to collaborate with other team members.
- Bachelor's degree in a technical field or equivalent experience.
- Fosters a culture where integrity and clarity are expected.
- Stays ahead of changes in their own specialist area and seeks out learning opportunities to ensure knowledge is up to date.

Nice to Have:
- Experience developing with and using containers and the AWS cloud stack (S3, SQS, Redshift, Lambda, etc.) is a big plus.
- Ability to apply DevOps techniques and practices such as continuous integration, continuous deployment, test automation, build automation, and test-driven development to enable the rapid delivery of working code, utilizing tools like Jenkins, CloudBees, Git, etc.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Bloom Energy faces an unprecedented opportunity to change the world and how energy is generated and delivered. Our mission is to make clean, reliable energy affordable globally. Bloom's Energy Server delivers highly reliable, resilient, always-on electric power that is clean, cost-effective, and ideal for microgrid applications. We are helping our customers power their operations without disruption and combustion.

We seek a Business Intelligence Intern to join our team in one of today's most exciting technologies. This role reports to the Business Intelligence Senior Manager in Mumbai, India.

Responsibilities:
- Design, develop, and maintain automated tools to help with forecasting and tracking actuals for spares.
- Build and improve visual tools for various business cases.
- Test software to ensure responsiveness and efficiency.
- Help with data validation.
- Rapidly fix bugs, solve problems, and proactively strive to improve our products and technologies.
- Collaborate with a multidisciplinary team of product management, developers, data scientists, data analysts, and system administrators.

Requirements:
- Proficiency with Python.
- Familiarity with databases/data lakes (e.g., PostgreSQL, Cassandra, AWS RDS, Redshift, S3).
- Experience with Git or other version control software.
Posted 3 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role: AWS BI Architect
Location: Hyderabad / Bengaluru / Chennai
Notice Period: 20 days or less

We are looking for a seasoned AWS BI Architect with 6+ years of experience.

Key Responsibilities:
- Design scalable data platform and analytics architectures.
- Lead technical design discussions and ensure successful implementation.
- Create clear, detailed documentation and collaborate with stakeholders.

What We're Looking For:
- 6+ years of experience in BI and data architecture.
- Strong hands-on expertise in Amazon Redshift (Serverless and/or Provisioned).
- Experience with Redshift performance tuning, RPU sizing, workload isolation, and concurrency scaling.
- Familiarity with Redshift data sharing and cross-cluster access (see the sketch after this listing).
- Background in BI/reporting, with a strong preference for MicroStrategy.
- Excellent communication and documentation skills.
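For context on the Redshift data-sharing requirement above, this is a hedged sketch of the producer-side setup executed via the redshift_connector package; the share name, schema, connection details, and consumer namespace GUID are placeholders.

```python
# Producer-side Redshift data sharing: create a datashare, add a schema
# and its tables, and grant it to a consumer namespace (all names assumed).
import redshift_connector

conn = redshift_connector.connect(
    host="producer.xxxxx.ap-south-1.redshift.amazonaws.com",  # placeholder
    database="dev",
    user="admin",
    password="***",
)
cur = conn.cursor()
cur.execute("CREATE DATASHARE sales_share")
cur.execute("ALTER DATASHARE sales_share ADD SCHEMA sales")
cur.execute("ALTER DATASHARE sales_share ADD ALL TABLES IN SCHEMA sales")
cur.execute(
    "GRANT USAGE ON DATASHARE sales_share "
    "TO NAMESPACE '11111111-2222-3333-4444-555555555555'"  # consumer GUID
)
conn.commit()
```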
Posted 3 weeks ago
1.0 years
0 Lacs
India
Remote
Job Title: SQL Developer (Redshift, Power BI, Sales Analytics)
Location: Remote (India)
Type: Contract, approved for 1 year at 40 hours a week, with extension expected beyond the first year
Compensation: ₹22 LPA (paid hourly)
Working Hours: 2:30 PM to 10:00 PM IST
Start Date: Immediate (no notice period preferred)
Immediate interviews available! Priority scheduling for candidates who:
Submit their resume promptly
Are available for immediate interviews
Connect via LinkedIn with resume and CTC rate
Openings: 1
Job Overview
Insight Global's client is hiring a SQL Developer with deep expertise in Amazon Redshift, custom SQL development, and BI reporting. This role focuses on building and supporting dashboards and scorecards for sales agent performance, sales metrics, and revenue analytics. The ideal candidate will be a fast learner, an excellent communicator, and capable of working independently in a fast-paced, data-driven environment.
Key Responsibilities
Design and develop custom SQL queries (not tool-generated) to support business reporting needs (a sample scorecard query follows this listing).
Work extensively with Amazon Redshift and Teradata for data modeling and transformation.
Build and maintain dashboards and post-reporting using Power BI, MicroStrategy, and Tableau.
Analyze and visualize sales performance, agent scorecards, and revenue metrics.
Collaborate with business stakeholders to gather requirements and translate them into actionable insights.
Ramp up quickly on existing data models and reporting frameworks.
Integrate AI-driven predictive analytics into reporting solutions.
Troubleshoot data issues and ensure high data quality and performance.
Communicate clearly and effectively with both technical and non-technical teams.
Required Qualifications
Bachelor's degree in Information Technology, Computer Science, or a related field (or equivalent experience).
Strong hands-on experience with Amazon Redshift (this is a must-have).
Proficiency in Teradata and advanced SQL coding (not just using visual tools).
Experience with Power BI, MicroStrategy, and Tableau.
Strong understanding of sales analytics, performance metrics, and revenue reporting.
Excellent communication skills and ability to work independently.
Ability to ramp up quickly and deliver in a fast-paced environment.
Preferred Qualifications
Experience in the Telecom/Cable MSO industry.
Familiarity with ETL tools and workflow schedulers (e.g., Informatica, UC4, Composite).
Exposure to AI/ML concepts in a BI context.
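To make "custom SQL, not tool-generated" concrete, here is the kind of hand-written Redshift query an agent scorecard might rest on, held in a Python module so a dashboard layer can reuse it. The fact_sales table and its columns are assumptions for illustration, not the client's actual schema.

    # Hypothetical schema: fact_sales(agent_id, sale_dt, revenue).
    # Window function over the grouped aggregate ranks agents within each month.
    AGENT_SCORECARD_SQL = """
    SELECT
        agent_id,
        DATE_TRUNC('month', sale_dt) AS sale_month,
        COUNT(*)                     AS orders,
        SUM(revenue)                 AS revenue,
        RANK() OVER (
            PARTITION BY DATE_TRUNC('month', sale_dt)
            ORDER BY SUM(revenue) DESC
        )                            AS revenue_rank
    FROM fact_sales
    GROUP BY agent_id, DATE_TRUNC('month', sale_dt)
    """

A BI tool pointed at this query gets orders, revenue, and a per-month ranking per agent without any generated SQL in the path.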
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
India
Remote
Solutions Architect – GenAI, AI/ML & AWS Cloud
Architect the Future of AI with goML
At goML, we design and build cutting-edge Generative AI, AI/ML, and Data Engineering solutions that help businesses unlock the full potential of their data, drive intelligent automation, and create transformative AI-powered experiences. Our mission is to bridge the gap between state-of-the-art AI research and real-world enterprise applications, helping organizations innovate faster, make smarter decisions, and scale AI solutions seamlessly.
We're looking for a Solutions Architect with deep expertise in designing AI/ML and GenAI architectures on AWS. In this role, you'll be responsible for crafting scalable, high-performance, and cost-effective AI solutions, leveraging AWS AI/ML services, modern data infrastructure, and cloud-native best practices. If you thrive in architecting intelligent, data-driven systems and love solving complex technical challenges, we'd love to hear from you!
Why You? Why Now?
Generative AI is reshaping industries, and businesses are looking for scalable, cost-efficient, and production-ready AI/ML solutions. This role is perfect for someone who loves solutioning AI/ML workloads, optimizing cloud infrastructure, and working directly with clients to drive real-world impact. At goML, you will:
Own the architecture and solutioning of AI/ML and GenAI applications
Work with sales and engineering leaders to scope customer needs and build proposals
Design scalable architectures using AWS services like SageMaker, Bedrock, Lambda, and Redshift (a minimal Bedrock call is sketched after this listing)
Influence high-impact AI/ML decisions while working closely with the co-founders
What You'll Do (Key Responsibilities)
First 30 Days: Foundation & Orientation
Deep dive into goML's AI/ML & GenAI solutions, architecture frameworks, and customer engagements
Familiarize yourself with goML and AWS partnership workflows
Work alongside sales leaders to understand customer pain points and proposal workflows
Review and refine best practices for deploying AI/ML workloads on AWS
Start contributing to solution architectures and lead client discussions
First 60 Days: Execution & Impact
Own customer AI/ML solutioning, including LLMOps, inference optimization, and MLOps pipelines
Collaborate with engineering teams to develop reference architectures and POCs for AI workloads
Build strategies to optimize AI/ML model deployment, GPU utilization, and cost-efficiency in AWS
Assist in sizing and optimizing AWS infrastructure for AI/ML-heavy workloads
Work closely with customers to translate GenAI and AI/ML requirements into scalable architectures
First 180 Days: Ownership & Transformation
Lead AI/ML architectural decisions for complex enterprise-scale AI projects
Optimize multi-cloud and hybrid AI/ML deployments across AWS, Azure, and GCP
Mentor team members on best practices for GenAI and cloud AI deployments
Define long-term strategies for AI-driven data platforms, model lifecycle management, and cloud AI acceleration
Represent goML in technical conferences, blogs, and AI/ML meetups
What You Bring (Qualifications & Skills)
Must-Have:
5-8 years of experience designing AI/ML and data-driven architectures on AWS
At least 2 years of hands-on experience in GenAI, LLMOps, or advanced AI/ML workloads
Deep expertise in AWS AI/ML services (SageMaker, Bedrock, Lambda, Inferentia, Trainium)
Strong knowledge of AWS Data Services (S3, Redshift, Glue, Lake Formation, DynamoDB)
Experience in optimizing AI/ML inference, GPU utilization, and MLOps pipelines
Excellent client-facing communication skills, with experience in proposal writing
Nice-to-Have:
Familiarity with Azure ML, GCP Vertex AI, and NVIDIA AI/ML services
Experience in LangChain, RAG architectures, and multi-modal AI models
Knowledge of MLOps automation, CI/CD for AI models, and scaling inference workloads
Meet Your Hiring Manager
You'll report to Prashanna Hanumantha Rao, who runs practice teams, engineering, delivery, and operations for goML. Expect a high-autonomy, high-impact environment, working closely with the co-founders and senior leadership to drive AI/ML innovation at goML.
Why Work With Us?
Remote-first, with offices in Coimbatore for in-person collaboration
Work on cutting-edge GenAI & AI/ML challenges at scale
Direct impact on enterprise AI solutioning & technical strategy
Competitive salary, leadership growth opportunities, and ESOPs down the line
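As one concrete anchor for the Bedrock piece of the stack named above, the sketch below invokes a Bedrock-hosted Anthropic model through boto3, using the Messages request shape those models expect. The region, model ID, and prompt are illustrative choices, not goML's actual setup.

    import json
    import boto3

    # Minimal sketch: one synchronous call to a Bedrock-hosted text model.
    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model choice
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 200,
            "messages": [{"role": "user", "content": "Summarize our Q2 pipeline risks."}],
        }),
    )
    print(json.loads(resp["body"].read()))

A production architecture would wrap calls like this behind Lambda or a service layer, with retries, guardrails, and cost controls; that is the solutioning work the role describes.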
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overall Experience Level: 5 years of manual testing experience.
AWS Expertise: Proficient in AWS services and tools such as Lambda, S3, DynamoDB, Redshift, Athena, and CloudWatch Logs. Additionally, knowledge of SNS, SQS, Kinesis, and VPC is desirable.
Database and ETL Testing: Thorough understanding of database concepts and hands-on experience with database and ETL testing, including validation of data integrity, performance, and security (a reconciliation sketch follows this listing).
Programming Skills: Minimum of 1 year of experience in Java programming.
API Testing: Familiarity with Postman scripts.
Testing Tools: Strong knowledge of testing tools for functional, load, and security testing. Experience with tools like Xray, Jira, and Confluence for defect tracking and information flow during QA cycles.
Agile/Scrum Methodology: Participation in Scrum ceremonies.
Automation & Functional Testing: Expertise in developing automation scripts aligned with acceptance criteria and in functional testing of new products. Experience in regression testing existing products during each QA release cycle.
Communication & Collaboration: Excellent communication and teamwork skills.
Problem-Solving Skills: Ability to resolve technical challenges.
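One flavor of the database/ETL validation described above is a simple cross-store reconciliation. The boto3 sketch below (Python for compactness, though the role is Java-oriented) compares landed S3 objects against a DynamoDB item count; the bucket, prefix, and table names are hypothetical, and DynamoDB's item_count is only refreshed periodically, so this works as a smoke test rather than an exact assertion.

    import boto3

    # Hypothetical resources for illustration only.
    s3 = boto3.resource("s3")
    ddb = boto3.resource("dynamodb")

    # Count the files landed for one day's partition.
    landed = sum(1 for _ in s3.Bucket("orders-landing").objects.filter(Prefix="2025/07/"))

    table = ddb.Table("orders")
    # item_count is refreshed roughly every six hours, hence the loose >= check.
    assert table.item_count >= landed, (
        f"DynamoDB has {table.item_count} items, expected at least {landed}"
    )

An exact check would instead query the target by partition key, but the structure is the same: count at the source, count at the destination, fail loudly on a mismatch.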
Posted 3 weeks ago
0.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra
On-site
Location: Mumbai, Maharashtra, India
Category: Digital Technology
Job ID: R147951
Posted: Jul 14th, 2025
Job Available In 5 Locations
Staff Technical Product Manager
Are you excited about the opportunity to lead a team within an industry leader in Energy Technology? Are you passionate about improving capabilities, efficiency, and performance? Join our Digital Technology Team! As a Staff Technical Product Manager, you will operate in lock-step with product management to create a clear strategic direction for build needs and convey that vision to the service's scrum team. You will direct the team with a clear and descriptive set of requirements captured as stories, and partner with the team to determine what can be delivered by balancing the need for new features, defects, and technical debt.
Partner with the best
As a Staff Technical Product Manager, we are seeking a strong background in business analysis, team leadership, and data architecture, along with hands-on development skills. The ideal candidate will excel at creating roadmaps, planning and prioritization, resource allocation, key-item delivery, and the seamless integration of perspectives from various stakeholders, including Product Managers, Technical Anchors, Service Owners, and Developers: a results-oriented leader capable of building and executing an aligned strategy, leading data and cross-functional teams to meet delivery timelines.
As a Staff Technical Product Manager, you will be responsible for:
Demonstrating wide and deep knowledge in data engineering, data architecture, and data science, with the ability to guide, lead, and work with the team to drive to the right solution.
Engaging frequently (80%) with the development team: facilitating discussions, providing clarification, story acceptance and refinement, testing, and validation; contributing to design activities and decisions; working within waterfall and Agile Scrum frameworks.
Owning and managing the backlog; continuously ordering and prioritizing to ensure that 1-2 sprints/iterations of backlog are always ready.
Collaborating with UX on design decisions, demonstrating deep understanding of the technology stack and its impact on the final product.
Conducting customer and stakeholder interviews and elaborating on personas.
Demonstrating expert-level skill in problem decomposition and the ability to navigate through ambiguity.
Partnering with the Service Owner to ensure a healthy development process and clear tracking metrics, forming a standard and trustworthy way of providing customer support.
Designing and implementing scalable and robust data pipelines to collect, process, and store data from various sources.
Developing and maintaining data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
Optimizing and tuning the performance of data systems to ensure efficient data processing and analysis.
Collaborating with product managers and analysts to understand data requirements and implement solutions for data modeling and analysis.
Identifying and resolving data quality issues, ensuring data accuracy, consistency, and completeness.
Implementing and maintaining data governance and security measures to protect sensitive data.
Monitoring and troubleshooting data infrastructure, performing root cause analysis, and implementing necessary fixes.
Fuel your passion
To be successful in this role you will require:
A Bachelor's or higher degree in Computer Science, Information Systems, or a related field.
A minimum of 6-10 years of proven experience as a Data Engineer or in a similar role, working with large-scale data processing and storage systems.
Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
Extensive knowledge of SAP systems, transaction codes (T-codes), data pipelines in SAP, and Databricks-related technologies.
Experience building complex jobs for SCD-type mappings using ETL tools like PySpark, Talend, Informatica, etc. (a PySpark Type 2 sketch follows this listing).
Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
Excellent communication and collaboration skills to work effectively in a team environment.
Experience in data modeling, data warehousing, and ETL principles.
Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
Advanced knowledge of distributed computing and parallel processing.
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
Certification in relevant technologies or data engineering disciplines.
Working knowledge of Databricks, Dremio, and SAP is highly preferred.
Work in a way that works for you
We recognize that everyone is different and that the way in which people want to work and deliver at their best is different for everyone too. In this role, we can offer the following flexible working patterns (where applicable):
Working flexible hours - flexing the times when you work in the day to help you fit everything in and work when you are the most productive
Working with us
Our people are at the heart of what we do at Baker Hughes. We know we are better when all our people are developed, engaged, and able to bring their whole authentic selves to work. We invest in the health and well-being of our workforce, train and reward talent, and develop leaders at all levels to bring out the best in each other.
Working for you
Our inventions have revolutionized energy for over a century. But to keep going forward tomorrow, we know we must push the boundaries today. We prioritize rewarding those who embrace challenge with a package that reflects how much we value their input. Join us, and you can expect:
Contemporary work-life balance policies and wellbeing activities.
About Us
With operations in over 120 countries, we provide better solutions for our customers and richer opportunities for our people. As a leading partner to the energy industry, we're committed to achieving net-zero carbon emissions by 2050 and we're always looking for the right people to help us get there. People who are as passionate as we are about making energy safer, cleaner, and more efficient.
Join Us
Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will challenge and inspire you!
About Us:
We are an energy technology company that provides solutions to energy and industrial customers worldwide. Built on a century of experience and conducting business in over 120 countries, our innovative technologies and services are taking energy forward, making it safer, cleaner, and more efficient for people and the planet.
Join Us: Are you seeking an opportunity to make a real difference in a company that values innovation and progress? Join us and become part of a team of people who will challenge and inspire you! Let’s come together and take energy forward. Baker Hughes Company is an Equal Opportunity Employer. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law.
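The SCD-type mapping requirement in the listing above lends itself to a small worked example. The PySpark sketch below applies a Type 2 change: it closes out current dimension rows whose tracked attributes changed and appends new current versions. The schemas, the 9999-12-31 open-ended sentinel, and the omission of brand-new keys are all simplifying assumptions for illustration.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

    # Hypothetical current dimension and incoming extract; all names are illustrative.
    dim = spark.createDataFrame(
        [(1, "Acme", "Pune", "2024-01-01", "9999-12-31", True)],
        ["cust_id", "name", "city", "eff_from", "eff_to", "is_current"],
    )
    src = spark.createDataFrame([(1, "Acme", "Mumbai")], ["cust_id", "name", "city"])

    # Current rows whose tracked attributes differ from the incoming extract.
    changed = (
        dim.alias("d").filter("is_current")
        .join(src.alias("s"), F.col("d.cust_id") == F.col("s.cust_id"))
        .filter((F.col("d.city") != F.col("s.city")) | (F.col("d.name") != F.col("s.name")))
    )

    today = F.current_date().cast("string")
    # Type 2: close the old version, then open a new current one.
    closed = changed.select("d.*").withColumn("eff_to", today).withColumn("is_current", F.lit(False))
    opened = (
        changed.select("s.cust_id", "s.name", "s.city")
        .withColumn("eff_from", today)
        .withColumn("eff_to", F.lit("9999-12-31"))
        .withColumn("is_current", F.lit(True))
    )

    history = dim.filter("NOT is_current")  # already-closed versions stay untouched
    keys = changed.select(F.col("d.cust_id").alias("cust_id"))
    unchanged = dim.filter("is_current").join(keys, "cust_id", "left_anti")

    result = history.unionByName(unchanged).unionByName(closed).unionByName(opened)
    result.show()  # inserts for brand-new keys are omitted here for brevity

Tools like Talend or Informatica package the same close-and-reopen pattern as a prebuilt SCD component; in PySpark you assemble it explicitly, as above.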
Posted 3 weeks ago
0.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Noida, Uttar Pradesh, India
Business Intelligence
BOLD is seeking an ETL Specialist who will help build the architecture and maintain a robust, scalable, and sustainable business intelligence platform. Assisted by the Data Team, this role will work with highly scalable systems, complex data models, and a large amount of transactional data.
Job Description
ABOUT THIS TEAM
BOLD's Business Intelligence (BI) team is a centralized team responsible for managing all aspects of the organization's BI strategy, projects, and systems. The BI team enables business leaders to make data-driven decisions by providing reports and analysis. It is responsible for developing and managing a latency-free, credible enterprise data warehouse, which serves as a data source for decision making and as input to various functions of the organization such as Product, Finance, Marketing, and Customer Support. The BI team has four sub-components: data analysis, ETL, data visualization, and QA. It manages deliveries through Snowflake, Sisense, and MicroStrategy as its main infrastructure solutions. Other technologies, including Python, R, and Airflow, are also used in ETL, QA, and data visualization.
WHAT YOU'LL DO
Architect, develop, and maintain a highly scalable data warehouse and build/maintain ETL processes.
Utilize Python and Airflow to integrate data from across the business into the data warehouse (a minimal DAG skeleton follows this listing).
Integrate third-party data into the data warehouse, such as Google Analytics, Google Ads, and Iterable.
WHAT YOU'LL NEED
Experience working as an ETL developer in a Data Engineering, Data Warehousing, or Business Intelligence team.
Understanding of data integration/data engineering architecture, with awareness of ETL standards, methodologies, guidelines, and techniques.
Hands-on experience with the Python programming language and its packages, such as Pandas and NumPy.
Strong understanding of SQL queries, aggregate functions, complex joins, and performance tuning.
Good exposure to databases like Redshift, SQL Server, Oracle, or PostgreSQL (any one of these).
Broad understanding of data warehousing and dimensional modelling concepts.
EXPERIENCE
Software Engineer, ETL: 2.5+ years
Senior Software Engineer, ETL: 4.5+ years
Benefits
Outstanding Compensation
Competitive salary
Tax-friendly compensation structure
Bi-annual bonus
Annual appraisal
Equity in company
100% Full Health Benefits
Group Mediclaim, personal accident, and term life insurance
Group Mediclaim benefit (including parents' coverage)
Practo Plus health membership for employees and family
Personal accident and term life insurance coverage
Flexible Time Away
24 days paid leave
Declared fixed holidays
Paternity and maternity leave
Compassionate and marriage leave
Covid leave (up to 7 days)
Additional Benefits
Internet and home office reimbursement
In-office catered lunch, meals, and snacks
Certification policy
Cab pick-up and drop-off facility
About BOLD
We Transform Work Lives
As an established global organization, BOLD helps people find jobs. Our story is one of growth, success, and professional fulfillment. We create digital products that have empowered millions of people in 180 countries to build stronger resumes, cover letters, and CVs. The result of our work helps people interview confidently, finding the right job in less time. Our employees are experts, learners, contributors, and creatives.
We Celebrate And Promote Diversity And Inclusion
We value our position as an Equal Opportunity Employer. We hire based on qualifications, merit, and our business needs.
We don't discriminate regarding race, color, religion, gender, pregnancy, national origin or citizenship, ancestry, age, physical or mental disability, veteran status, sexual orientation, gender identity or expression, marital status, genetic information, or any other applicable characteristic protected by law.
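To ground the Python-plus-Airflow integration work described above, here is a minimal two-task DAG skeleton using Airflow 2.4+ syntax. The DAG ID, task bodies, and schedule are placeholders rather than BOLD's real pipelines.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_ga(**_):
        # Placeholder: pull yesterday's Google Analytics export (API details omitted).
        ...

    def load_warehouse(**_):
        # Placeholder: stage the files and COPY them into the warehouse.
        ...

    with DAG(
        dag_id="third_party_ingest",      # hypothetical DAG name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",                # use schedule_interval on Airflow < 2.4
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_ga", python_callable=extract_ga)
        load = PythonOperator(task_id="load_warehouse", python_callable=load_warehouse)
        extract >> load

Each third-party source (Google Ads, Iterable, and so on) typically becomes its own extract task feeding the same load step.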
Posted 3 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Are you excited by the prospect of wrangling data, helping develop information systems, sources, and tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.
About the Role
In this role as a Data Engineer, you will:
Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting.
Build pipelines that source, transform, and load both structured and unstructured data, keeping data security and access controls in mind.
Explore large volumes of data with curiosity and conviction.
Contribute to the strategy and architecture of data management systems and solutions.
Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.
Shift Timings: 12 PM to 9 PM (IST)
Work from office 2 days a week (mandatory)
About You
You're a fit for the role of Data Engineer if your background includes:
At least 4-6 years of total work experience, with at least 2+ years in data engineering or analytics domains.
A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
SQL proficiency (a must).
Experience with data pipeline and transformation tools such as dbt, Glue, Fivetran, Alteryx, or similar solutions.
Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
Experience with orchestration tools like Airflow or Dagster.
Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
Data modelling knowledge of various schemas like snowflake and star (a toy star-schema example follows this listing).
Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
Knowledge of building ETL workflows, database design, and query optimization.
Experience with a scripting language like Python.
Works well within a team and collaborates with colleagues across domains and geographies.
Excellent oral, written, and visual communication skills.
A demonstrable ability to assimilate new information thoroughly and quickly.
A strong logical and scientific approach to problem-solving.
Can articulate complex results in a simple and concise manner to all levels within the organization.
What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.
As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here.
Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
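Since the posting above leans on star-schema modelling, here is the toy example it references: a self-contained sqlite3 sketch that builds a miniature fact table with two dimensions and rolls revenue up by month and category. All table names and figures are invented.

    import sqlite3

    # Toy star schema: one fact table joined to two dimension tables.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, month TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, date_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'Books'), (2, 'Games');
    INSERT INTO dim_date    VALUES (10, '2025-06'), (11, '2025-07');
    INSERT INTO fact_sales  VALUES (1, 10, 9.5), (2, 10, 30.0), (1, 11, 12.0);
    """)

    rows = con.execute("""
    SELECT d.month, p.category, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d    ON d.date_id    = f.date_id
    GROUP BY d.month, p.category
    ORDER BY d.month, p.category
    """).fetchall()
    print(rows)  # [('2025-06', 'Books', 9.5), ('2025-06', 'Games', 30.0), ('2025-07', 'Books', 12.0)]

A snowflake schema would simply normalize the dimensions further (e.g., category into its own table); the fact table and the rollup query keep the same shape.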
Posted 3 weeks ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Are you excited by the prospect of wrangling data, helping develop information systems, sources, and tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Senior Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.
About the Role
In this role as a Senior Data Engineer, you will:
Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting.
Build pipelines that source, transform, and load both structured and unstructured data, keeping data security and access controls in mind.
Explore large volumes of data with curiosity and conviction.
Contribute to the strategy and architecture of data management systems and solutions.
Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.
Shift Timings: 12 PM to 9 PM (IST)
Work from office 2 days a week (mandatory)
About You
You're a fit for the role of Senior Data Engineer if your background includes:
At least 6-7 years of total work experience, with at least 3+ years in data engineering or analytics domains.
A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
SQL proficiency (a must).
Experience with data pipeline and transformation tools such as dbt, Glue, Fivetran, Alteryx, or similar solutions.
Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
Experience with orchestration tools like Airflow or Dagster (a minimal Dagster sketch follows this listing).
Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight).
Data modelling knowledge of various schemas like snowflake and star.
Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
Knowledge of building ETL workflows, database design, and query optimization.
Experience with a scripting language like Python.
Works well within a team and collaborates with colleagues across domains and geographies.
Excellent oral, written, and visual communication skills.
A demonstrable ability to assimilate new information thoroughly and quickly.
A strong logical and scientific approach to problem-solving.
Can articulate complex results in a simple and concise manner to all levels within the organization.
What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.
As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here.
Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
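Dagster, named among the orchestration tools in the listing above, models pipelines as software-defined assets rather than task graphs. The minimal sketch below declares two assets where the dependency is wired from the parameter name; the asset names and data are illustrative only.

    from dagster import Definitions, asset

    @asset
    def raw_orders():
        # Placeholder extract step; in practice this would pull from S3 or an API.
        return [{"order_id": 1, "amount": 120.0}]

    @asset
    def daily_revenue(raw_orders):
        # Downstream asset: Dagster infers the dependency from the parameter name.
        return sum(row["amount"] for row in raw_orders)

    defs = Definitions(assets=[raw_orders, daily_revenue])

The asset-oriented model is the main contrast with Airflow's operator-and-DAG style: you declare what data should exist, and the orchestrator derives the execution order.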
Posted 3 weeks ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Are you excited by the prospect of wrangling data, helping develop information systems, sources, and tools, and shaping the way businesses make decisions? The Go-To-Markets Data Analytics team is looking for a skilled Data Engineer who is motivated to deliver top-notch data-engineering solutions to support business intelligence, data science, and self-service data solutions.
About the Role
In this role as a Data Engineer, you will:
Design, develop, optimize, and automate data pipelines that blend and transform data across different sources to help drive business intelligence, data science, and self-service data solutions.
Work closely with data scientists and data visualization teams to understand data requirements and ensure the availability of high-quality data for analytics, modelling, and reporting.
Build pipelines that source, transform, and load both structured and unstructured data, keeping data security and access controls in mind.
Explore large volumes of data with curiosity and conviction.
Contribute to the strategy and architecture of data management systems and solutions.
Proactively troubleshoot and resolve data-related and performance bottlenecks in a timely manner.
Be open to learning and working on emerging technologies in the data engineering, data science, and cloud computing space.
Have the curiosity to interrogate data, conduct independent research, utilize various techniques, and tackle ambiguous problems.
Shift Timings: 12 PM to 9 PM (IST)
Work from office 2 days a week (mandatory)
About You
You're a fit for the role of Data Engineer if your background includes:
At least 4-6 years of total work experience, with at least 2+ years in data engineering or analytics domains.
A degree in data analytics, data science, computer science, software engineering, or another data-centric discipline.
SQL proficiency (a must).
Experience with data pipeline and transformation tools such as dbt, Glue, Fivetran, Alteryx, or similar solutions.
Experience using cloud-based data warehouse solutions such as Snowflake, Redshift, or Azure.
Experience with orchestration tools like Airflow or Dagster.
Preferred experience using Amazon Web Services (S3, Glue, Athena, QuickSight); an Athena query example follows this listing.
Data modelling knowledge of various schemas like snowflake and star.
Has built data pipelines and other custom automated solutions to speed the ingestion, analysis, and visualization of large volumes of data.
Knowledge of building ETL workflows, database design, and query optimization.
Experience with a scripting language like Python.
Works well within a team and collaborates with colleagues across domains and geographies.
Excellent oral, written, and visual communication skills.
A demonstrable ability to assimilate new information thoroughly and quickly.
A strong logical and scientific approach to problem-solving.
Can articulate complex results in a simple and concise manner to all levels within the organization.
What's in it For You?
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.
About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.
As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here.
Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
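For the preferred AWS stack in the listing above (S3, Glue, Athena, QuickSight), the sketch below submits an ad-hoc Athena query with boto3. The database, table, and results bucket are placeholders for illustration.

    import boto3

    athena = boto3.client("athena")

    # Athena reads tables defined in the Glue Data Catalog and writes
    # query results to the S3 location given below.
    resp = athena.start_query_execution(
        QueryString="SELECT COUNT(*) FROM web_events WHERE event_date = DATE '2025-07-01'",
        QueryExecutionContext={"Database": "analytics"},  # hypothetical Glue database
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/adhoc/"},
    )
    print(resp["QueryExecutionId"])

The call is asynchronous: a follow-up get_query_execution poll (or get_query_results once it succeeds) retrieves the output, and QuickSight can sit on top of the same tables for dashboards.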
Posted 3 weeks ago