
1697 Querying Jobs - Page 17

JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Greater Kolkata Area

Remote


Location: Remote. Contract Duration: 12 Months. Total Positions: 2. Experience: 5+ years.

Job Overview
We are looking for a Senior Full Stack Software Engineer with strong expertise in C#/.NET, Angular, SQL, and Azure to join our dynamic development team. This is a contract position, and you will work remotely for the next 12 months. As a Senior Full Stack Software Engineer, you will be responsible for building and maintaining both the frontend and backend components of our enterprise applications. You will play a key role in the design, implementation, and deployment of software solutions while collaborating with cross-functional teams.

Key Responsibilities
- Design and Develop Full-Stack Solutions: Create scalable, efficient, and maintainable code for both frontend (Angular) and backend (C#/.NET) components.
- Backend Development: Develop and manage APIs, microservices, and database structures (SQL, Azure) to handle large-scale systems.
- Frontend Development: Build dynamic and responsive web interfaces using Angular, ensuring a seamless user experience across all devices.
- Cloud Development: Leverage Azure for deployment, scaling, and performance tuning of applications.
- Collaborate and Communicate: Work closely with project managers, designers, and other developers to ensure timely delivery of high-quality software.
- Code Reviews: Review and provide feedback on code to maintain high coding standards.
- Troubleshooting: Diagnose and resolve technical issues and bugs in both frontend and backend systems.
- Maintain Documentation: Ensure clear, comprehensive documentation is created and maintained for code, workflows, and processes.

Skills and Qualifications
- C#/.NET: Strong experience in developing applications using C# and the .NET framework.
- Angular: Expertise in building dynamic single-page applications (SPAs) using Angular.
- SQL: Solid knowledge of database design, querying, and optimization with SQL databases.
- Azure: Familiarity with Azure services for cloud development, including hosting, storage, and CI/CD pipelines.
- Version Control: Experience with Git or other version control systems.
- Problem Solving: Strong analytical and troubleshooting skills.
- Communication: Excellent verbal and written communication skills.

Education & Experience
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 5+ years of hands-on experience as a Full Stack Developer or Software Engineer.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description: Power BI Developer
Company: Mastech Digital & Mastech InfoTrellis
Location: Chennai, Tamil Nadu, India (Hybrid: 3 days/week in office)
Position: Power BI Developer
Employment Type: Full-Time
Experience: 5+ Years
Notice Period: 15-30 Days (Non-Negotiable)

About Mastech Digital & Mastech InfoTrellis
Mastech Digital (www.mastechdigital.com) is a leading IT services and solutions provider, delivering digital transformation services to global enterprises. Mastech InfoTrellis, a subsidiary, specializes in data and analytics solutions, empowering clients to gain actionable insights from their data.

Job Summary
We are seeking a highly skilled and experienced Power BI Developer to join our dynamic team in Chennai. The ideal candidate will have a strong background in business intelligence (BI) reporting, with a focus on Power BI development and implementation. You will be responsible for designing, developing, and deploying robust and scalable BI solutions, collaborating with cross-functional teams and stakeholders to deliver impactful insights.

Responsibilities
- Design and develop interactive and visually appealing reports, dashboards, and scorecards using Power BI.
- Build complex visualizations and implement advanced DAX measures to address business requirements.
- Optimize and fine-tune Power BI reports and queries for performance and efficiency.
- Develop and maintain relational and dimensional data models to support reporting needs.
- Collaborate with the pre-sales team to provide technical expertise for RFPs, proposals, PoCs, and client demos.
- Contribute to high-level solution design, scoping, and effort estimation.
- Mentor, guide, and train team members on BI reporting best practices.
- Contribute to the knowledge base and thought leadership through assets, accelerators, articles, blogs, white papers, and webinars.
- Review BI projects for solution design, database modelling, best practices, and standards.
- Administer Power BI tools, including installation, configuration, user setup, and security management.
- Stay up to date with the latest BI trends and technologies, and adapt to new reporting and integration skills as required.
- Collaborate effectively with business stakeholders, developers, and other team members, communicating technical concepts clearly and concisely.

Mandatory Required Skills & Experience
- 5+ years of experience in BI reporting solution design and implementation.
- Extensive hands-on experience with Power BI development and administration.
- Strong proficiency in SQL for data querying and manipulation.
- Experience in relational and dimensional data modeling.
- Master's degree in Computer Science, Information Systems, or a related field.

Good to Have
- Experience with the Azure cloud platform and its BI services.
- Proficiency in Tableau or other relevant BI tools.
- Power BI, Tableau, or other relevant certifications.
- Understanding of ETL concepts and experience with data integration tools.

Essential Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field; a Master's degree is a plus.
- Proven ability to work independently and as part of a team.
- Excellent communication and interpersonal skills.
- Strong analytical and problem-solving abilities.
- Proactive and results-driven, with a strong sense of ownership and accountability.

Key Competencies
- Data Visualization & Reporting
- Data Modeling & Database Design
- Performance Optimization
- Problem Solving & Analytical Thinking
- Communication & Collaboration
- Technical Leadership & Mentorship

Why Join Mastech Digital & Mastech InfoTrellis?
- Opportunity to work on cutting-edge BI projects for global clients.
- Collaborative and supportive work environment.
- Career growth and development opportunities.
- Competitive salary and benefits package.
- Hybrid work schedule.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site


Job Description
Mactores is seeking an AWS Data Engineer (Senior) to join our team. The ideal candidate will have extensive experience in PySpark and SQL and will have built data pipelines using Amazon EMR or AWS Glue. The candidate must also have experience in data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto, plus orchestration experience using Airflow.

Responsibilities
- Develop and maintain data pipelines using Amazon EMR or AWS Glue.
- Create data models and support end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
- Build and maintain the orchestration of data pipelines using Airflow.
- Collaborate with other teams to understand their data needs and help design solutions.
- Troubleshoot and optimize data pipelines and data models.
- Write and maintain PySpark and SQL scripts to extract, transform, and load data (illustrated in the sketch after this posting).
- Document and communicate technical solutions to both technical and non-technical audiences.
- Stay up to date with new AWS data technologies and evaluate their impact on our existing systems.

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience working with PySpark and SQL.
- 2+ years of experience building and maintaining data pipelines using Amazon EMR or AWS Glue.
- 2+ years of experience with data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
- 1+ years of experience building and maintaining the orchestration of data pipelines using Airflow.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills.
- Ability to work independently and within a team environment.

You Are Preferred If You Have
- AWS Data Analytics Specialty Certification.
- Experience with Agile development methodology.
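For context on the kind of extract-transform-load step this role describes, here is a minimal, hedged PySpark sketch. The bucket paths, column names, and aggregation are all hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw order events from a (hypothetical) S3 landing zone.
raw = spark.read.json("s3://example-landing/orders/2024/")

# Transform: drop malformed rows, derive a date column, aggregate daily revenue.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("created_at"))
)
daily = clean.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Load: write partitioned Parquet for downstream querying (e.g., via Athena).
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated/daily_revenue/"
)
```

On EMR or Glue the same transform logic applies; only the job submission and catalog integration differ.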

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site


Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores has been enabling businesses to accelerate their value through automation by providing end-to-end data solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.

Mactores is seeking an AWS Data Engineer (Senior) to join our team. The ideal candidate will have extensive experience in PySpark and SQL and will have built data pipelines using Amazon EMR or AWS Glue. The candidate must also have experience in data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto, plus orchestration experience using Airflow.

What You Will Do
- Develop and maintain data pipelines using Amazon EMR or AWS Glue.
- Create data models and support end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
- Build and maintain the orchestration of data pipelines using Airflow (a DAG sketch follows this posting).
- Collaborate with other teams to understand their data needs and help design solutions.
- Troubleshoot and optimize data pipelines and data models.
- Write and maintain PySpark and SQL scripts to extract, transform, and load data.
- Document and communicate technical solutions to both technical and non-technical audiences.
- Stay up to date with new AWS data technologies and evaluate their impact on our existing systems.

What Are We Looking For?
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience working with PySpark and SQL.
- 2+ years of experience building and maintaining data pipelines using Amazon EMR or AWS Glue.
- 2+ years of experience with data modeling and end-user querying using Amazon Redshift or Snowflake, Amazon Athena, and Presto.
- 1+ years of experience building and maintaining the orchestration of data pipelines using Airflow.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills.
- Ability to work independently and within a team environment.

You Are Preferred If You Have
- AWS Data Analytics Specialty Certification.
- Experience with Agile development methodology.

Life at Mactores
We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles, which honor decision-making, leadership, collaboration, and curiosity, drive how we work:
- Be one step ahead
- Deliver the best
- Be bold
- Pay attention to the detail
- Enjoy the challenge
- Be curious and take action
- Take leadership
- Own it
- Deliver value
- Be collaborative

You can read more about our work culture at https://mactores.com/careers

The Path to Joining the Mactores Team
At Mactores, our recruitment process is structured around three distinct stages:
- Pre-Employment Assessment: You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.
- Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.
- HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.
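As a rough illustration of the Airflow orchestration this role calls for, here is a minimal DAG sketch. The DAG id, task names, and callables are placeholders, and the `schedule` keyword assumes Airflow 2.4+ (older 2.x releases use `schedule_interval`).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from the source system into a landing zone.
    print("extracting")


def transform():
    # Placeholder: trigger the PySpark / Glue transformation job.
    print("transforming")


with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform runs only after extract succeeds
```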

Posted 1 week ago

Apply

1.0 - 2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Description
At Optimum Info, we are continually innovating and developing a range of software solutions empowering the Network Development and Field Operations professions in the Automotive, Power Sports, and Equipment industries. Our integrated suite of comprehensive solutions provides a seamless and rich experience to our customers, helping them become more effective at their work and create an impact on the organization. Our sharp cultural focus on outstanding customer service and employee empowerment is core to our growth and success. As a growing company, we offer incredible opportunities for learning and growth, including the opportunity to manage high-impact business solutions.

Position Overview
The Engineer - Applications Support is a specialist role that requires a deep understanding of the supported applications, the ability to analyze issues and identify resolutions, and clear communication. The primary focus of this position is assisting users by resolving their queries or issues, raised through the organization's ticketing platform or other supported channels. For issues that require deeper technical knowledge or access to code, this role will initially escalate tickets to higher levels of support, but is expected to acquire the technical skills to support at all levels in due course. When issues are resolved, this role will participate in validating the resolution in pre-production and production environments.

Key Responsibilities
- Receive issues and requests through the organization's ticketing system, and log tickets when issues are reported through alternate supported channels.
- Acknowledge and respond to incidents in a timely manner.
- Classify support tickets and prioritize them for resolution.
- Provide functional clarification and responses to end users' queries.
- Analyze issues and close tickets within defined turnaround times.
- Investigate and resolve issues (or provide workarounds) by querying the databases.
- Forward identified bug reports to the next level of support and provide functional workarounds to users.
- Escalate tickets/cases to the next level of support, as necessary.
- Assist the next level of support in issue resolution by coordinating with end users.
- Document the resolutions provided, building a knowledge base over time.

Desired Qualifications and Experience
- Ability to quickly learn the features and functionality of applications.
- Ability to query databases and use tools to guide users in resolving issues.
- 1-2 years' overall experience in a professional services organization, with a primary focus on using and working with IT systems.
- Experience in a customer-facing IT support role.
- Excellent written, presentation, and oral communication skills.
- Experience with the .NET framework using C#, jQuery, Bootstrap, and SQL Server, or with web application testing.
- Exposure to any public cloud environment, preferably Azure.

Education
- Bachelor's degree in engineering, computer science, or a related field.

Other Attributes
- Knowledge of automotive sales and business processes desirable.
- Strong customer service orientation.
- Analytical, troubleshooting, and problem-solving skills.
- Focus on maintaining detailed documentation.
- Experience working in a team-oriented, collaborative environment.
- Proficiency in MS Office tools such as Word, Excel, and PowerPoint.
- Able to work in assigned shifts, with structured handovers at the start and end of each shift.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Title: Data Engineer
Location: Bangalore
Experience: 3+ years

About the Opportunity
We are urgently looking for experienced Data Engineers to join our team at Hexamobile, Bangalore. Ideal candidates will have a strong background in Python, PySpark, and ETL processes, with Azure cloud experience being a strong plus.

Responsibilities
- Design, develop, and maintain scalable and efficient data pipelines using Python and PySpark.
- Build and optimize ETL (Extract, Transform, Load) processes to ingest, clean, transform, and load data from various sources into data warehouses and data lakes.
- Work with large and complex datasets, ensuring data quality, integrity, and reliability.
- Collaborate closely with data scientists, analysts, and other stakeholders to understand their data requirements and provide them with clean and well-structured data.
- Monitor and troubleshoot data pipelines, identifying and resolving issues to ensure continuous data flow.
- Implement data quality checks and validation processes to maintain high data accuracy (see the sketch after this posting).
- Develop and maintain comprehensive documentation for data pipelines, ETL processes, and data models.
- Optimize data systems and pipelines for performance, scalability, and cost-efficiency.
- Implement data security and governance policies and procedures.
- Stay up to date with the latest advancements in data engineering technologies and best practices.
- Work in an agile environment, participating in sprint planning, daily stand-ups, and code reviews.
- Contribute to the design and architecture of our data platform.

Required Skills
- Python: Strong proficiency in Python programming, including experience with data manipulation libraries (e.g., Pandas, NumPy).
- PySpark: Extensive hands-on experience with Apache Spark using PySpark for large-scale data processing and distributed computing.
- ETL Processes: Deep understanding of ETL concepts, methodologies, and best practices, with proven experience in designing, developing, and implementing ETL pipelines.
- SQL: Solid understanding of SQL and experience in querying, manipulating, and transforming data in relational databases.
- Databases: Strong understanding of various database systems, including relational databases (e.g., PostgreSQL, MySQL, SQL Server) and, ideally, NoSQL databases.
- Version Control: Experience with version control systems, particularly Git, and platforms like GitHub or GitLab (i.e., working with branches and pull requests).

Preferred Skills
- Azure Cloud Experience: Hands-on experience with Microsoft Azure cloud services, particularly data-related services such as Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure SQL Database, and Azure Data Lake Storage.
- Experience with data warehousing concepts.

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum of 3 years of professional experience as a Data Engineer.
- Proven experience in building and maintaining data pipelines using Python and PySpark.
- Strong analytical and problem-solving skills.
- Good verbal and written communication skills.
- Ability to work effectively both independently and as part of a team.
- Must be available to join immediately.

Additional Points
- Experience with other big data technologies (Hadoop, Hive, Kafka, Apache Airflow).
- Knowledge of data governance and data quality frameworks.
- Experience with CI/CD pipelines for data engineering workflows.
- Familiarity with data visualization tools (Power BI, Tableau).
- Experience with other cloud platforms (AWS, GCP).
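The data-quality checks mentioned above can be as simple as rule-based validation before loading. A minimal, self-contained sketch using pandas; the column names and rules are invented for illustration.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality problems found in the frame."""
    problems = []
    if df["customer_id"].isna().any():
        problems.append("null customer_id values")
    if (df["amount"] < 0).any():
        problems.append("negative amounts")
    if df.duplicated(subset=["order_id"]).any():
        problems.append("duplicate order_id rows")
    return problems

# Tiny made-up batch that trips all three rules.
orders = pd.DataFrame(
    {"order_id": [1, 2, 2], "customer_id": [10, None, 11], "amount": [5.0, -1.0, 3.5]}
)
print(validate(orders))
# ['null customer_id values', 'negative amounts', 'duplicate order_id rows']
```

In a real pipeline the same rules would typically run as a PySpark step, with failing batches quarantined rather than loaded.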

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Data Analysis and Assessment
- Conduct thorough data analysis of source systems to understand data structures, quality, and dependencies.
- Identify data quality issues and develop strategies to cleanse and standardize data before migration.
- Create data profiling reports to identify potential data migration challenges.

Migration Design and Architecture
- Design comprehensive data migration plans, including data mapping, transformation rules, and loading procedures.
- Develop the data migration architecture, considering source and target systems, data volumes, and performance requirements.
- Select appropriate methods and patterns based on project needs.

Data Mapping and Transformation
- Create detailed data mapping documents to define how data will be transformed and translated between source and target systems.
- Develop data cleansing and transformation logic to ensure data quality in the target system.
- Design data validation rules to identify and address data inconsistencies.

Testing and Validation
- Work with the testers to develop and execute comprehensive data migration test plans, including unit testing, integration testing, and user acceptance testing.
- Work with the testing and development teams to resolve defects.

Stakeholder Management
- Collaborate with business stakeholders to understand data requirements and migration objectives.
- Communicate data migration plans and progress updates to relevant stakeholders.
- Address concerns and provide technical guidance throughout the migration process.

Required Skills and Qualifications
- Knowledge of computer technology, network infrastructure, systems and applications, security, and storage.
- Intermediate knowledge of and experience with the Microsoft Office Suite, with proficiency in Excel.
- Intermediate knowledge of and experience in Informatica ILM, AWS, Ab Initio, and database (SQL/NoSQL) concepts.
- Ability to collaborate with, engage, and manage resources outside of the Data Centricity team.
- General conceptual understanding of programming and DB querying.
- Ability to work collaboratively with cross-functional teams.
- Prior knowledge of Agile project management tools, such as Jira.
- Ability to work effectively with internal and external IT support, senior leadership, project teams, and individuals.
- Ability to perform in a dynamic project management environment.

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Role: Business Analyst

Key Responsibilities

Elicitation & Analysis
- Conduct stakeholder interviews, workshops, and JAD sessions to gather functional and non-functional requirements.
- Perform detailed analysis to understand end-user needs and define clear and comprehensive business requirements.
- Evaluate current systems and processes and propose enhancements.

Product Specification & Documentation
- Convert requirements into User Stories, Use Cases, and Acceptance Criteria in tools like JIRA and Planner.
- Maintain Product Backlogs and contribute to Sprint Planning with the Agile team.
- Create supporting documents such as Process Flows, Wireframes, and Data Flow Diagrams.

Stakeholder Management
- Collaborate with cross-functional teams including Product Owners, Developers, QA Engineers, and UI/UX Designers.
- Act as the bridge between technical teams and non-technical stakeholders to ensure mutual understanding.

Product Lifecycle Management
- Support the entire product lifecycle from ideation to post-launch reviews.
- Participate in Product Roadmap discussions and strategic planning.
- Conduct GAP Analysis, Feasibility Studies, and Competitive Benchmarking.

Testing & Quality Assurance
- Design and execute UAT plans, and support QA teams in developing test cases.
- Validate product releases and ensure alignment with business goals and compliance requirements.

Skills & Tools
- Strong knowledge of Agile (Scrum/Kanban) and SDLC methodologies.
- Expertise in tools like JIRA, Confluence, and Trello.
- Figma, Balsamiq, Lucidchart (for wireframes and workflows).
- SQL (for data analysis and querying).
- Excellent documentation, presentation, and stakeholder communication skills.
- Ability to handle multiple projects simultaneously and work in a fast-paced environment.

Qualifications
- Bachelor's/Master's degree in Business Administration, Computer Science, Information Technology, or a related field.
- 5-8 years of experience in Business Analysis, preferably in a product-based or SaaS environment.
- Professional certification is a plus: CBAP, PMI-PBA, CSPO, or Agile BA certifications.

Preferred Domain Experience
- FinTech, HealthTech, EdTech, E-commerce, or SaaS platforms.
- Working with B2B/B2C product lines.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


About the Job

Job Description
We are seeking a highly skilled and customer-focused GraphDB/Neo4j Solutions Engineer to join our team. This role is responsible for delivering high-quality implementations of our GraphDB-based product to customers and collaborating with cross-functional teams to ensure customer success. The Solution Lead is expected to provide in-depth solutions on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The Solution Lead must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (bash/python).

Roles and Responsibilities
- Collaborate with core engineering, customers, and solution engineering teams for functional and technical discovery sessions.
- Prepare product and live software demonstrations.
- Create and maintain public documentation, internal knowledge base articles, and FAQs.
- Design efficient graph schemas and develop prototypes that address customer requirements (e.g., Fraud Detection, Recommendation Engines, Knowledge Graphs); a brief sketch follows this posting.
- Apply knowledge of indexing strategies, partitioning, and query optimization in GraphDB.
- Work during the EMEA time zone (2 PM to 10 PM shift).

Requirements

Education and Experience
- Education: B.Tech in Computer Engineering, Information Technology, or a related field.
- Experience: 5+ years in a Solution Lead role on a data-based software product such as GraphDB or Neo4j.

Must-Have Skills
- SQL Expertise: 4+ years of experience in SQL for database querying, performance tuning, and debugging.
- Graph Databases and GraphDB Platforms: 4+ years of hands-on experience with Neo4j or similar graph database systems.
- Scripting & Automation: 4+ years with strong skills in C, C++, and Python for automation, task management, and issue resolution.
- Virtualization and Cloud Knowledge: 4+ years with Azure, GCP, or AWS.
- Management Skills: 3+ years of experience with data requirements gathering, data modeling, whiteboarding, and developing/validating proposed solution architectures, plus the ability to communicate complex information and concepts to prospective users in a clear and effective way.
- Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
- Networking & Load Balancing: Proficient in TCP/IP, load balancing strategies, and troubleshooting network-related issues.
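As a hedged illustration of the graph-schema and query work described above, here is a minimal sketch using the official neo4j Python driver. The connection details, node labels, and the fraud-detection pattern are hypothetical, not from the posting.

```python
from neo4j import GraphDatabase

URI = "bolt://localhost:7687"   # hypothetical endpoint
AUTH = ("neo4j", "password")    # hypothetical credentials

# A toy fraud-detection pattern: accounts that share a device with a
# flagged account are surfaced for review.
CYPHER = """
MATCH (f:Account {flagged: true})-[:USED]->(d:Device)<-[:USED]-(a:Account)
WHERE a <> f
RETURN a.id AS suspect, d.id AS shared_device
"""

driver = GraphDatabase.driver(URI, auth=AUTH)
with driver.session() as session:
    for record in session.run(CYPHER):
        print(record["suspect"], record["shared_device"])
driver.close()
```

In production, an index on Account.flagged (or a constraint on Account.id) would keep this traversal from scanning the whole graph, which is the kind of optimization the role calls out.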

Posted 1 week ago

Apply

5.0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


About the Job

Job Description
We are seeking a highly skilled and customer-focused Technical Support Engineer to join our team. This role is responsible for delivering high-quality technical support to our customers, troubleshooting complex technical issues, and collaborating with cross-functional teams to ensure customer success. The Technical Support Engineer is expected to provide advanced technical support on a data-based software product to a global client base and partners. This role requires deep technical expertise, strong problem-solving skills, and the ability to communicate complex technical information effectively. The primary responsibility is to troubleshoot and resolve technical issues, support product adoption, and ensure customer satisfaction. The TSE must have experience working with databases, specifically graph databases, and possess a strong background in Linux, networking, and scripting (bash/python). They work collaboratively with engineering teams to escalate and resolve complex issues when necessary (e.g., when a code change is required or a behavior is seen for the first time).

Roles and Responsibilities
- Respond to customer inquiries and provide in-depth technical support via multiple communication channels.
- Collaborate with core engineering and solution engineering teams to diagnose and resolve complex technical problems.
- Create and maintain public documentation, internal knowledge base articles, and FAQs.
- Monitor and meet SLAs.
- Triage varying issues in a timely manner based on error messages, log files, thread dumps, stack traces, sample code, and other available data points.
- Efficiently troubleshoot cluster issues across multiple servers, data centers, and regions, in a variety of clouds (AWS, Azure, GCP, etc.), virtual, and bare-metal environments.
- Work during the EMEA time zone (2 PM to 10 PM shift).

Requirements

Must-Have Skills
- Education: B.Tech in Computer Engineering, Information Technology, or a related field.
- Experience: GraphDB experience is a must; 5+ years in a technical support role on a data-based software product, at least at L3 level.
- Linux Expertise: 4+ years with an in-depth understanding of Linux, including the filesystem, process management, memory management, networking, and security.
- Graph Databases: 3+ years of experience with Neo4j or similar graph database systems.
- SQL Expertise: 3+ years of experience in SQL for database querying, performance tuning, and debugging.
- Data Streaming & Processing: 2+ years of hands-on experience with Kafka, Zookeeper, and Spark.
- Scripting & Automation: 2+ years with strong skills in Bash scripting and Python for automation, task management, and issue resolution.
- Containerization & Orchestration: 1+ year of proficiency in Docker, Kubernetes, or other containerization technologies.
- Monitoring & Performance Tools: Experience with Grafana, Datadog, Prometheus, or similar tools for system and performance monitoring.
- Networking & Load Balancing: Proficient in TCP/IP, load balancing strategies, and troubleshooting network-related issues.
- Web & API Technologies: Understanding of HTTP, SSL, and REST APIs for debugging and troubleshooting API-related issues.

Nice-to-Have Skills
- Familiarity with Data Science or ML.
- Experience with LDAP, SSO, and OAuth authentication.
- Strong understanding of database internals and system architecture.
- Cloud certification (at least DevOps Engineer level).

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Mid-Level Data Analyst
Location: Hyderabad, Telangana
Job Type: On-Site
Must be an immediate joiner.

Job Overview
We are seeking a Mid-Level Data Analyst with strong SQL skills and proven analytical abilities to join our growing team. The ideal candidate will have a deep understanding of data manipulation and be skilled in providing actionable insights through data analysis. You will play a key role in transforming complex data into meaningful reports, dashboards, and visualizations that drive strategic decision-making. You will collaborate closely with cross-functional teams to identify business needs and contribute to optimizing operational efficiency.

Key Responsibilities
- Data Analysis & Reporting: Analyze large datasets using SQL and other tools to identify trends, patterns, and insights that support business decision-making.
- SQL Querying: Write complex SQL queries to extract, manipulate, and summarize data from multiple databases; optimize queries for performance and scalability (see the sketch after this posting).
- Cross-Functional Collaboration: Work closely with business units to understand their data needs and provide relevant insights for data-driven decisions.

Qualifications
- Education: Bachelor's degree in Data Science, Mathematics, Statistics, Computer Science, Business, or a related field.
- Experience: 2-4 years of experience in data analysis, data manipulation, or related fields.

Technical Skills
- SQL: Strong proficiency in SQL for querying large datasets, including writing complex joins, subqueries, and aggregations.
- Analytical Skills: Strong problem-solving abilities, with a focus on identifying insights from data that drive business outcomes.
- Communication: Excellent written and verbal communication skills, with the ability to translate complex data findings into actionable insights for non-technical stakeholders.
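The join-plus-aggregation work this role centers on looks roughly like the following self-contained sketch, which runs against an in-memory SQLite database. The tables and figures are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);
INSERT INTO customers VALUES (1, 'South'), (2, 'North');
INSERT INTO orders VALUES (1, 1, 120.0), (2, 1, 80.0), (3, 2, 50.0);
""")

# A join plus an aggregation: revenue per region, highest first.
query = """
SELECT c.region, SUM(o.amount) AS revenue
FROM orders o
JOIN customers c ON c.id = o.customer_id
GROUP BY c.region
ORDER BY revenue DESC;
"""
for region, revenue in conn.execute(query):
    print(region, revenue)   # South 200.0, then North 50.0
```

Against production-scale tables, the optimization half of the job is mostly about indexing the join keys and checking the query plan before shipping a report.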

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Skills: Python, Spark, Data Engineer, Cloudera, On-premise, Azure, Snowflake, Kafka

Overview of the Company
Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.

Team Overview
The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!

About the Role
Title: Lead Data Engineer
Location: Mumbai

Responsibilities
- End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow (a streaming sketch follows this posting).
- Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the evolution of the team's data pipeline framework.
- Data Architecture & Solutions: Contribute to data architecture design, applying expertise in data modelling, storage, and retrieval.
- Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
- Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
- Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.

Qualification Details
- Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
- Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
- Database Expertise: Excellent querying skills (SQL) and a strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
- End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
- Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
- CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.

Desired Skills & Attributes
- Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and to troubleshoot data pipeline issues effectively.
- Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
- Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.
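Given the emphasis on Kafka and Spark streaming, here is a minimal, hedged sketch of a PySpark Structured Streaming job. The broker address, topic, and sink paths are placeholders, and it assumes the spark-sql-kafka connector package is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("user_events_stream").getOrCreate()

# Subscribe to a (hypothetical) Kafka topic of user events.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "user-events")
    .load()
)

# Kafka delivers key/value as binary; decode the payload to a string column.
decoded = events.select(F.col("value").cast("string").alias("payload"))

# Land raw payloads as Parquet; the checkpoint makes the sink restartable.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "/data/landing/user_events")
    .option("checkpointLocation", "/data/checkpoints/user_events")
    .start()
)
query.awaitTermination()
```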

Posted 1 week ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Country: India
Location: Building No 12D, Floor 5, Raheja Mindspace, Cyberabad, Madhapur, Hyderabad - 500081, Telangana, India
Role: Business Analyst
Location: Hyderabad, India
Full/Part-time: Full-time

Build a Career with Confidence
Carrier is a leading provider of heating, ventilating, air conditioning and refrigeration systems, building controls and automation, and fire and security systems, leading to safer, smarter, sustainable, and high-performance buildings. Carrier is on a mission to make modern life possible by delivering groundbreaking systems and services that help homes, buildings, and shipping become safer, smarter, and more sustainable. Our teams exceed the expectations of our customers by anticipating industry trends and working tirelessly to master and revolutionize them.

About the Role
An experienced general finance management professional who implements financial plans, analyzes financial processes and standards, and establishes financial indicators to forecast performance measures. Develops relationships with external financial consultants and advisors and provides technical advice to functional managers on financial matters.

Key Responsibilities
If you thrive in a fast-paced environment and are looking for an opportunity to develop your analytics career in Shared Services, then we have a great opportunity for you. We are seeking a motivated Business Analyst to support the Global Business Services organization. Specific responsibilities for this position include:
- Manage end-to-end deployment of reporting structures, including data collection, transformation, visualization, and distribution, ensuring alignment with business needs.
- Manage implementations of business intelligence dashboards using BI tools, ensuring that data is presented in a meaningful and visually appealing manner.
- Collaborate with Global Process Owners from the Finance team to gather requirements, design KPI visualizations, and ensure data accuracy and quality.
- Deploy integrated reporting solutions, through MS tools such as Power Query and Power Automate workflows, to streamline data collection, processing, and dissemination.
- Collaborate with IT teams to establish new database connections, optimize SQL queries, and ensure smooth data integration from various sources.
- Conduct thorough data analysis, including forecasts and projections, to identify trends, anomalies, and areas for process improvement.
- Provide recommendations to team leaders based on data insights, enabling informed decision-making and driving operational efficiencies.
- Support Continuous Improvement initiatives, including Kaizen events, by setting up performance measurement structures and tracking progress.
- Stay updated with emerging trends in business intelligence, data visualization, and project management to continually enhance reporting and analytical capabilities.

Education / Certifications
- Bachelor's degree in finance or accounting required.

Requirements
- 7+ years of experience in finance processes, preferably in a Shared Service environment.
- Proven experience in an analytical position, proficiently using finance concepts to deliver business findings to stakeholders.
- Proven track record of successfully managing projects related to KPI definition, measurement, and deployment.
- Experience in designing and developing BI dashboards using tools like Power BI, Tableau, or similar platforms.
- Strong background in data integration, database management, and SQL querying for efficient data retrieval and analysis.
- Proficiency in process improvement methodologies, such as Lean or Six Sigma, and the ability to drive continuous improvement initiatives.
- Proven analytical and quantitative skills, with the ability to use data and metrics to identify trends.

Benefits
We are committed to offering competitive benefits programs for all of our employees, and to enhancing our programs when necessary.
- Make yourself a priority with flexible schedules, parental leave, and our holiday purchase scheme.
- Drive forward your career through professional development opportunities.
- Achieve your personal goals with our Employee Assistance Programme.

Our Commitment to You
Our greatest assets are the expertise, creativity, and passion of our employees. We strive to provide a great place to work that attracts, develops, and retains the best talent, promotes employee engagement, fosters teamwork, and ultimately drives innovation for the benefit of our customers. We strive to create an environment where you feel that you belong, with diversity and inclusion as the engine of growth and innovation. We develop and deploy best-in-class programs and practices, providing enriching career opportunities, listening to employee feedback, and always challenging ourselves to do better. This is The Carrier Way. Join us and make a difference. Apply now!

Carrier is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or veteran status, age, or any other federally protected class.

Job Applicant's Privacy Notice: see the Job Applicant's Privacy Notice for details.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Pune, Maharashtra, India.

Minimum Qualifications
- Bachelor's degree in Computer Science or equivalent practical experience.
- Experience in architecting, developing, or maintaining secure cloud solutions.
- Experience with designing cloud enterprise solutions and supporting projects to completion.
- Experience with coding in one or more general purpose languages (e.g., Python, Java, Go, C, or C++), including data structures, algorithms, and software design.

Preferred Qualifications
- Experience in software development, managing Operating System (OS) or Linux environments, network design and deployment, or storage systems.
- Experience with data migration and integration tools, with knowledge of data visualization tools and techniques.
- Experience in querying and managing relational and non-relational databases, with data modeling and performance optimization.
- Experience with customer-facing migration, including service discovery, assessment, planning, execution, and operations.
- Knowledge of data warehousing concepts, including dimensional modeling, Extract, Transform, and Load (ETL) processes, and data governance.

About the Job
The Google Cloud Platform team helps customers transform and build what's next for their business — all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers — developers, small and large businesses, educational institutions, and government agencies — see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees, and partners.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities
- Work with customer technical leads, client executives, and partners to manage and deliver implementations of cloud solutions, becoming a trusted advisor to decision makers throughout the engagement.
- Propose data solution architectures and manage the deployment of cloud data solutions according to customer requirements, implementing best practices.
- Work with internal specialists, Product, and Engineering teams to package approaches, best practices, and lessons learned into thought leadership, methodologies, and published assets.
- Interact with Business, Partners, and customer technical stakeholders to manage project scope, priorities, deliverables, risks and issues, and timelines for successful client outcomes.
- Travel 30% of the time for client engagements.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Develop and support scalable, extensible, and highly available data solutions.
- Deliver on critical business priorities while ensuring alignment with the wider architectural vision.
- Identify and help address potential risks in the data supply chain.
- Follow and contribute to technical standards.
- Design and develop analytical data models.

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course).
- 9 to 11 years' experience implementing data-intensive solutions using agile methodologies.
- Experience with relational databases and using SQL for data querying, transformation, and manipulation.
- Experience modelling data for analytical consumers.
- Ability to automate and streamline the build, test, and deployment of data pipelines.
- Experience with cloud-native technologies and patterns.
- A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training.
- Excellent communication and problem-solving skills.
- An inclination to mentor; an ability to lead and deliver medium-sized components independently.

Technical Skills (Must Have)
- ETL: Hands-on experience building data pipelines, with proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica.
- Big Data: Experience with big data platforms such as Hadoop, Hive, or Snowflake for data storage and processing.
- Data Warehousing & Database Management: Expertise in data warehousing concepts and in relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design.
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures.
- Languages: Proficiency in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala.
- DevOps: Exposure to concepts and enablers such as CI/CD platforms, version control, and automated quality control management.
- Data Governance: A strong grasp of principles and practice, including data quality, security, privacy, and compliance.

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs and the ability to tune them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows.
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc., with a demonstrable understanding of underlying architectures and trade-offs.
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls.
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes.
- File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta.
- Others: Experience using a job scheduler, e.g., Autosys; exposure to Business Intelligence tools, e.g., Tableau, Power BI.

Certification in any one or more of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Greater Kolkata Area

On-site


Join Our Team

About This Opportunity
Ericsson is a leading provider of telecommunications equipment and services to mobile and fixed network operators globally. We are seeking a highly skilled and experienced Data Scientist to join our dynamic team at Ericsson. As a Data Scientist, you will be responsible for leveraging advanced analytics and machine learning techniques to drive actionable insights and solutions for our telecom domain. This role requires a deep understanding of data science methodologies, strong programming skills, and proficiency in cloud-based environments.

What You Will Do
- Develop and deploy machine learning models for various applications, including chatbots, XGBoost, random forest, NLP, computer vision, and generative AI.
- Utilize Python for data manipulation, analysis, and modeling tasks.
- Use SQL proficiently for querying and analyzing large datasets.
- Use Docker and Kubernetes for containerization and orchestration of applications.
- Apply basic knowledge of PySpark for distributed computing and data processing.
- Collaborate with cross-functional teams to understand business requirements and translate them into analytical solutions.
- Deploy machine learning models into production environments and ensure scalability and reliability (a serving sketch follows this posting).
- Preferably, work with Google Cloud Platform (GCP) services for data storage, processing, and deployment.
- Analyse complex problems and translate them into algorithms.
- Develop backends and REST APIs using Flask or FastAPI.
- Deploy with CI/CD pipelines.
- Handle data sets and data pre-processing through PySpark.
- Write queries targeting Cassandra and PostgreSQL databases.
- Apply design principles in application development.

The Skills You Bring
- Bachelor's degree in Computer Science, Statistics, Mathematics, or a related field; a Master's degree or PhD is preferred.
- 3-7 years of experience in data science and machine learning roles, preferably within the telecommunications or a related industry.
- Proven experience in model development, evaluation, and deployment.
- Strong programming skills in Python and SQL.
- Familiarity with Docker, Kubernetes, and PySpark.
- Solid understanding of machine learning techniques and algorithms.
- Experience working with cloud platforms, preferably GCP.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication and presentation skills, with the ability to explain complex analytical concepts to non-technical stakeholders.

Why Join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What Happens Once You Apply?
Click here to find out all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson; that's why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 763993

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Pune, Maharashtra, India.

Minimum qualifications:

Bachelor's degree in Computer Science or equivalent practical experience.
Experience in architecting, developing, or maintaining secure cloud solutions.
Experience with designing cloud enterprise solutions and supporting projects to completion.
Experience with coding in one or more general-purpose languages (e.g., Python, Java, Go, C, or C++), including data structures, algorithms, and software design.

Preferred qualifications:

Experience in software development, managing Operating System (OS) or Linux environments, network design and deployment, or storage systems.
Experience with data migration and integration tools, with knowledge of data visualization tools and techniques.
Experience in querying and managing relational and non-relational databases, with data modeling and performance optimization.
Experience with customer-facing migration, including service discovery, assessment, planning, execution, and operations.
Knowledge of data warehousing concepts, including dimensional modeling, Extract, Transform, and Load (ETL) processes, and data governance.

About the job

The Google Cloud Platform team helps customers transform and build what's next for their business, all with technology built in the cloud. Our products are developed for security, reliability, and scalability, running the full stack from infrastructure to applications to devices and hardware. Our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions, and government agencies) see the benefits of our technology come to life. As part of an entrepreneurial team in this rapidly growing business, you will play a key role in understanding the needs of our customers and help shape the future of how businesses of all sizes use technology to connect with customers, employees, and partners.

Google Cloud accelerates every organization's ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google's cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

Responsibilities

Work with customer technical leads, client executives, and partners to manage and deliver implementations of cloud solutions, becoming a trusted advisor to decision makers throughout the engagement.
Propose data solution architectures and manage the deployment of cloud data solutions according to customer requirements, implementing best practices.
Work with internal specialists, Product, and Engineering teams to package approaches, best practices, and lessons learned into thought leadership, methodologies, and published assets.
Interact with Business, Partners, and customer technical stakeholders to manage project scope, priorities, deliverables, risks, issues, and timelines for successful client outcomes.
Travel 30% of the time for client engagements.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Posted 1 week ago

Apply

0 years

0 Lacs

India

Remote

Job Summary: We are looking for a Node.js Backend Developer; this is a remote position based in India.

Responsibilities:

Collaborate with a close-knit team where your contributions will directly impact the trajectory of our business.
Design and build new features for our core delivery platform.
Continuously evolve our hosted infrastructure by collaborating with the CTO and senior developers to implement industry best practices for security, maintainability, and efficiency.
Troubleshoot bugs/issues and fix them the right way instead of the quick way, to prevent recurring issues.
Suggest process improvements that help the team deliver high-quality software more often.

Experience:

Building with TypeScript running on Node.js is strongly preferred, although other typed-language experience (C#, Java, etc.) will be considered.
AWS or other cloud platforms, and building using infrastructure as code.
Building, monitoring, and maintaining microservices.
Building and maintaining public and private APIs, including documentation and versioning.
Relational databases (PostgreSQL preferred), including querying and optimizing.
Experience with CI/CD pipelines, preferably GitHub Actions, and automated testing.
Familiarity with NestJS as an architectural framework.
Familiarity with building user interfaces using React and Next.js.

Bonus: Building and maintaining native iOS/Android applications using React Native or a similar framework.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Role

The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Mantas Scenario Developer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities

Develop and support scalable, extensible, and highly available data solutions
Deliver on critical business priorities while ensuring alignment with the wider architectural vision
Identify and help address potential risks in the data supply chain
Follow and contribute to technical standards
Design and develop analytical data models

Required Qualifications & Work Experience

First-class degree in Engineering/Technology (4-year graduate course)
5 to 8 years' experience implementing data-intensive solutions using agile methodologies
Experience with relational databases and using SQL for data querying, transformation, and manipulation
Experience modelling data for analytical consumers
Hands-on Mantas (Oracle FCCM) expertise throughout the full development life cycle, including requirements analysis, functional design, technical design, programming, testing, documentation, implementation, and ongoing technical support
Ability to translate business needs (BRD) into effective technical solutions and documents (FRD/TSD)
Ability to automate and streamline the build, test, and deployment of data pipelines
Experience with cloud-native technologies and patterns
A passion for learning new technologies and a desire for personal growth, through self-study, formal classes, or on-the-job training
Excellent communication and problem-solving skills

Technical Skills (Must Have)

ETL: Hands-on experience building data pipelines; proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
Mantas: Expert in Oracle Mantas/FCCM, Scenario Manager, and scenario development, with thorough knowledge of and hands-on experience in Mantas FSDM, DIS, and Batch Scenario Manager
Big Data: Experience with 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
Data Warehousing & Database Management: Understanding of data warehousing concepts and of relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
DevOps: Exposure to concepts and enablers such as CI/CD platforms, version control, and automated quality control management

Technical Skills (Valuable)

Ab Initio: Experience developing Co>Op graphs and the ability to tune them for performance; demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, and Continuous>Flows
Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc., with a demonstrable understanding of underlying architectures and trade-offs
Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
File Formats: Exposure to event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
Others: Basics of job schedulers like Autosys; basics of entitlement management

Certification on any of the above topics would be an advantage.

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Digital Software Engineering
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are looking for a hands-on Data Engineer who is passionate about solving business problems through innovation and engineering practices. As a Data Engineer, the candidate will leverage deep technical knowledge and apply knowledge of data architecture standards, data warehousing, data structures, and business intelligence to drive the creation of high-quality data products for data-driven decision making.

Required Qualifications

3-6 years' experience implementing data-intensive solutions using agile methodologies
Code-contributing member of agile teams, working to deliver sprint goals
Writes clean, efficient, and maintainable code that meets the highest standards of quality
Very strong coding skills in Python/PySpark and UNIX shell scripting
Experience with cloud-native technologies and patterns
Ability to automate and streamline the build, test, and deployment of data pipelines

Technical Skills (Must Have)

ETL: Hands-on experience building data pipelines; proficiency in data integration platforms such as Apache Spark. Experienced in writing PySpark code to handle large datasets and perform data transformations; familiarity with PySpark's integration with other Apache Spark components, such as Spark SQL; understanding of PySpark optimization techniques. Strong proficiency in working with relational databases and using SQL for data querying, transformation, and manipulation.
Big Data: Exposure to 'big data' platforms such as Hadoop, Hive, or Iceberg for data storage and processing
Data Warehousing & Database Management: Understanding of data warehousing concepts and of relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, PySpark, and UNIX shell scripting
DevOps: Exposure to concepts and enablers such as CI/CD platforms, Bitbucket/GitHub, JIRA, Jenkins, Tekton, and Harness

Technical Skills (Valuable)

Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls; framework libraries like Deequ
Federated Query: Starburst, Trino
Containerization: Fair understanding of containerization platforms like Docker, Kubernetes, and OpenShift
File Formats: Exposure to file/table formats such as Avro, Parquet, Iceberg, and Delta
Schedulers: Basics of job schedulers like Autosys and Airflow
Cloud: Experience with cloud-native technologies and patterns (AWS, Google Cloud)
Nice to have: Java, for REST API development

Other Skills

Strong project management and organizational skills
Excellent problem-solving, communication, and organizational skills
Proven ability to work independently and with a team
Experience in managing and implementing successful projects
Ability to adjust priorities quickly as circumstances dictate
Consistently demonstrates clear and concise written and verbal communication

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.

If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi.

View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 1 week ago

Apply

6.0 - 9.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Position Details

EY's GDS Assurance Digital team's mission is to develop, implement and integrate technology solutions that better serve our audit clients and engagement teams. As a member of EY's core Assurance practice, you'll develop deep audit-related technical knowledge and outstanding database, data analytics and programming skills.

Ever-increasing regulations require audit departments to gather, organize and analyse more data than ever before. Often the data necessary to satisfy these ever-increasing and complex regulations must be collected from a variety of systems and departments throughout an organization. Effectively and efficiently handling the variety and volume of data is often extremely challenging and time-consuming for a company.

EY's GDS Assurance Digital team members work side-by-side with the firm's partners, clients and audit technical subject matter experts to develop and incorporate technology solutions that enhance value-add, improve efficiencies and enable our clients with disruptive and market-leading tools supporting Assurance. GDS Assurance Digital provides solution architecture, application development, testing and maintenance support to the global Assurance service line both on a proactive basis and in response to specific requests.

EY is currently seeking a Big Data Developer to join the GDS Assurance Digital practice in Bangalore, India, to work on various Microsoft technology-based projects for customers across the globe.

Requirements (including Experience, Skills and Additional Qualifications)

A Bachelor's degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance or a related field, with adequate industry experience. BE/BTech/MCA with sound industry experience of 6 to 9 years.

Technical skills requirements:

Experience with SQL and NoSQL databases such as HBase, Cassandra, or MongoDB
Good knowledge of big data querying tools, such as Pig and Hive
ETL implementation with any tool, such as Alteryx or Azure Data Factory
Experience with NiFi is good to have
Experience in at least one reporting tool, such as Power BI, Tableau, or Spotfire, is a must

Analytical/Decision-Making Responsibilities:

An ability to quickly understand complex concepts and use technology to support data modeling, analysis, visualization or process automation
Selects appropriately from applicable standards, methods, tools and applications and uses them accordingly
Ability to work within a multi-disciplinary team structure, but also independently
Demonstrates an analytical and systematic approach to problem solving
Communicates fluently, orally and in writing, and can present complex technical information to both technical and non-technical audiences
Able to plan, schedule and monitor work activities to meet time and quality targets
Able to rapidly absorb new technical information and business acumen and apply it effectively
Ability to work in a team environment with strong customer focus, good listening, negotiation and problem-resolution skills

Additional skills requirements:

The expectation is that a Senior will be able to maintain long-term client relationships, network and cultivate business development opportunities
Should have an understanding of and experience with software development best practices
Must be a team player

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description

You're ready to gain the skills and experience needed to grow within your role and advance your career, and we have the perfect software engineering opportunity for you.

As a Software Engineer III at JPMorgan Chase within Consumer & Community Banking, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.

Job Responsibilities

Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
Gathers, analyzes, synthesizes, and develops visualizations from large, diverse data sets in service of continuous improvement of software applications and systems
Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
Contributes to software engineering communities of practice and events that explore new and emerging technologies

Required Qualifications, Capabilities, and Skills

Formal training or certification in software engineering concepts and 3+ years of applied experience
Hands-on practical experience in system design, application development, testing, and operational stability
Hands-on practical experience with Amazon Web Services (AWS) and associated components
Hands-on experience with Python and integrating Python-based applications with endpoints for UI
Strong data warehouse knowledge (such as Snowflake) or database knowledge (such as Oracle or Postgres)
Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages

Preferred Qualifications, Capabilities, and Skills

Knowledge of analytical app development: React / JavaScript / CSS
Experience as a technical lead or mentor for a team
Banking domain expertise is nice to have
Experience working in a team, and the ability to tackle design and functionality problems independently with little to no oversight
Strong written and oral communication skills; ability to communicate effectively with all levels of management and partners from a variety of business functions

Posted 1 week ago

Apply

Exploring Querying Jobs in India

The querying job market in India is thriving with opportunities for professionals skilled in database querying. With the increasing demand for data-driven decision-making, companies across various industries are actively seeking candidates who can effectively retrieve and analyze data through querying. If you are considering a career in querying in India, here is some essential information to help you navigate the job market.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

Average Salary Range

The average salary range for querying professionals in India varies based on experience and skill level. Entry-level professionals can expect to earn between INR 3-6 lakhs per annum, while experienced professionals can command salaries ranging from INR 8-15 lakhs per annum.

Career Path

In the querying domain, a typical career progression may look like:

  • Junior Querying Analyst
  • Querying Specialist
  • Senior Querying Consultant
  • Querying Team Lead
  • Querying Manager

Related Skills

Apart from strong querying skills, professionals in this field are often expected to have expertise in:

  • Database management
  • Data visualization tools
  • SQL optimization techniques (illustrated in the sketch below)
  • Data warehousing concepts
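
As a concrete taste of what "SQL optimization" means in practice, here is a minimal sketch of one common technique: adding an index so a filtered lookup no longer scans the whole table. It uses Python's built-in sqlite3 module, and the orders table, its columns, and the index name are all hypothetical examples.

```python
# A minimal sketch of one common optimization technique: adding an index
# so a filtered lookup no longer scans the whole table. Uses Python's
# built-in sqlite3 module; the "orders" table and index name are
# hypothetical examples.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
cur.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(10_000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Before indexing, the plan typically reports a full table scan
# (e.g. "SCAN orders").
for row in cur.execute("EXPLAIN QUERY PLAN " + query):
    print(row)

cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After indexing, the plan typically reports an index search
# (e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)").
for row in cur.execute("EXPLAIN QUERY PLAN " + query):
    print(row)

conn.close()
```

The same habit transfers to other engines: PostgreSQL and MySQL expose their plans through EXPLAIN, and SQL Server through execution plans; checking the plan before and after a change is the portable skill.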

Interview Questions

  • What is the difference between SQL and NoSQL databases? (basic)
  • Explain the purpose of the GROUP BY clause in SQL. (basic)
  • How do you optimize a slow-performing SQL query? (medium)
  • What are the different types of joins in SQL? (medium)
  • Can you explain the concept of ACID properties in database management? (medium)
  • Write a query to find the second-highest salary in a table. (advanced; worked through in the sketch after this list)
  • What is a subquery in SQL? Provide an example. (advanced)
  • Explain the difference between HAVING and WHERE clauses in SQL. (advanced; also covered in the sketch below)
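
A few of the questions above (GROUP BY, the second-highest salary, subqueries, and HAVING vs WHERE) are easiest to internalize by running them. The sketch below is a minimal, self-contained example using Python's built-in sqlite3 module; the employees table and its rows are hypothetical, chosen so the results are easy to verify by hand.

```python
# A runnable walk-through of several interview questions above, using
# Python's built-in sqlite3 module. The "employees" table and its rows
# are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
cur.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Asha", "eng", 90), ("Bilal", "eng", 120),
     ("Chen", "hr", 80), ("Divya", "hr", 120)],
)

# GROUP BY collapses rows sharing a key so aggregates apply per group.
print(cur.execute(
    "SELECT dept, AVG(salary) FROM employees GROUP BY dept"
).fetchall())  # e.g. [('eng', 105.0), ('hr', 100.0)]

# Second-highest salary via a subquery: the maximum salary strictly
# below the overall maximum (ties on the top value are skipped).
print(cur.execute(
    "SELECT MAX(salary) FROM employees "
    "WHERE salary < (SELECT MAX(salary) FROM employees)"
).fetchone()[0])  # 90

# WHERE filters individual rows *before* grouping; HAVING filters whole
# groups *after* aggregation.
print(cur.execute(
    "SELECT dept, AVG(salary) FROM employees "
    "WHERE salary > 50 "            # row-level filter
    "GROUP BY dept "
    "HAVING AVG(salary) > 100"      # group-level filter
).fetchall())  # [('eng', 105.0)]

conn.close()
```

Note that the subquery approach returns the second-highest distinct salary; with window functions, DENSE_RANK() gives the same behaviour, while RANK() and ROW_NUMBER() give the other tie-handling variants.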

Closing Remark

As you venture into the querying job market in India, remember to hone your skills, stay updated with industry trends, and prepare thoroughly for interviews. By showcasing your expertise and confidence, you can position yourself as a valuable asset to potential employers. Best of luck on your querying job search journey!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies